# Avalanche L1s (/docs/avalanche-l1s)

---
title: Avalanche L1s
description: Explore the multi-chain architecture of the Avalanche ecosystem.
---

An Avalanche L1 is a sovereign network that defines its own rules for membership and token economics. It is composed of a dynamic subset of Avalanche validators working together to achieve consensus on the state of one or more blockchains. Each blockchain is validated by exactly one Avalanche L1, while an Avalanche L1 can validate many blockchains.

Avalanche's [Primary Network](/docs/primary-network) is a special Avalanche L1 running three blockchains:

- The Platform Chain [(P-Chain)](/docs/primary-network#p-chain-platform-chain)
- The Contract Chain [(C-Chain)](/docs/primary-network#c-chain-contract-chain)
- The Exchange Chain [(X-Chain)](/docs/primary-network#x-chain-exchange-chain)

![image](/images/subnet1.png)

Every validator of an Avalanche L1 **must** sync the P-Chain of the Primary Network for interoperability.

Node operators that validate an Avalanche L1 with multiple chains do not need to run multiple machines for validation. For example, the Primary Network is an Avalanche L1 with three coexisting chains, all of which can be validated by a single node.

## Advantages

### Independent Networks

- Avalanche L1s use virtual machines to specify their own execution logic, determine their own fee regime, maintain their own state, facilitate their own networking, and provide their own security.
- Each Avalanche L1's performance is isolated from other Avalanche L1s in the ecosystem, so increased usage on one Avalanche L1 won't affect another.
- Avalanche L1s can have their own token economics with their own native tokens, fee markets, and incentives determined by the Avalanche L1 deployer.
- One Avalanche L1 can host multiple blockchains with customized [virtual machines](/docs/primary-network/virtual-machines).
### Native Interoperability

Avalanche Warp Messaging enables native cross-Avalanche L1 communication and allows Virtual Machine (VM) developers to implement arbitrary communication protocols between any two Avalanche L1s.

### Accommodate App-Specific Requirements

Different blockchain-based applications may require validators to have certain properties, such as large amounts of RAM or CPU power. An Avalanche L1 could require that validators meet certain [hardware requirements](/docs/nodes/system-requirements#hardware-and-operating-systems) so that the application doesn't suffer from low performance due to slow validators.

### Launch Networks Designed With Compliance

Avalanche's L1 architecture makes regulatory compliance manageable. As mentioned above, an Avalanche L1 may require validators to meet a set of requirements. Some examples of requirements the creators of an Avalanche L1 may choose include:

- Validators must be located in a given country.
- Validators must pass KYC/AML checks.
- Validators must hold a certain license.

### Control Privacy of On-Chain Data

Avalanche L1s are ideal for organizations interested in keeping their information private. Institutions conscious of their stakeholders' privacy can create a private Avalanche L1 where the contents of the blockchains are visible only to a set of pre-approved validators. Define this at creation with a [single parameter](/docs/nodes/configure/avalanche-l1-configs#private-avalanche-l1).

### Validator Sovereignty

In a heterogeneous network of blockchains, some validators will not want to validate certain blockchains because they simply have no interest in them. The Avalanche L1 model enables validators to concern themselves only with the blockchains they choose to participate in, greatly reducing their computational burden.

## Why Build Your Own Avalanche L1

There are many advantages to running your own Avalanche L1.
If you find one or more of these a good match for your project, then an Avalanche L1 might be a good solution for you.

### We Want Our Own Gas Token

The C-Chain is an Ethereum Virtual Machine (EVM) chain; it requires gas fees to be paid in its native token. That is, an application may create its own utility tokens (ERC-20) on the C-Chain, but the gas must be paid in AVAX. By contrast, [Subnet-EVM](https://github.com/ava-labs/subnet-evm) effectively creates an application-specific EVM chain with full control over the native (gas) coin. The operator can pre-allocate the native tokens in the chain genesis and mint more using the [Subnet-EVM](https://github.com/ava-labs/subnet-evm) precompile contract. These fees can be either burned (as AVAX is burned on the C-Chain) or sent to a configured address, which can be a smart contract.

Note that the Avalanche L1 gas token is specific to the application in the chain, and thus unknown to external parties. Moving assets to other chains requires trusted bridge contracts (or the upcoming cross-Avalanche L1 communication feature).

### We Want Higher Throughput

The primary goal of the gas limit on the C-Chain is to restrict block size and therefore prevent network saturation. If a block can be arbitrarily large, it takes longer to propagate, potentially degrading network performance. The C-Chain gas limit acts as a deterrent against system abuse but can be quite limiting for high-throughput applications. Unlike the C-Chain, an Avalanche L1 can be single-tenant, dedicated to a specific application, and can thus host its own set of validators with higher bandwidth requirements. This allows for a higher gas limit and thus higher transaction throughput. Plus, [Subnet-EVM](https://github.com/ava-labs/subnet-evm) supports fee configuration upgrades that can adapt to surges in application traffic.
Avalanche L1 workloads are isolated from the Primary Network. This means the noisy-neighbor effect of one workload (for example, an NFT mint on the C-Chain) cannot destabilize the Avalanche L1 or surge its gas price. This failure isolation model can provide higher application reliability.

### We Want Strict Access Control

The C-Chain is open and permissionless: anyone can deploy and interact with contracts. However, for regulatory reasons, some applications may need a consistent access control mechanism for all on-chain transactions. With [Subnet-EVM](https://github.com/ava-labs/subnet-evm), an application can require that only authorized users may deploy contracts or make transactions. Allow lists are updated only by administrators, and the allow list itself is implemented within a precompile contract, making it transparent and auditable for compliance purposes.

### We Need EVM Customization

If your project is deployed on the C-Chain, then your execution environment is dictated by the setup of the C-Chain. Changing any execution parameter means changing the configuration of the C-Chain, which is expensive, complex, and difficult. If your project needs other capabilities, different execution parameters, or precompiles that the C-Chain does not provide, then an Avalanche L1 is the solution you need. You can configure the EVM in an Avalanche L1 to run however you want, adding precompiles and setting runtime parameters to whatever your project needs.

### We Need Custom Validator Management

With the Etna upgrade, L1s can implement their own validator management logic through a _ValidatorManager_ smart contract. This gives you complete control over your validator set, allowing you to define custom staking rules, implement permissionless proof-of-stake with your own token, or create permissioned proof-of-authority networks.
Validator management can be handled directly through smart contracts, giving you programmatic control over validator selection and rewards distribution.

### We Want to Build a Sovereign Network

L1s on Avalanche are truly sovereign networks that operate independently without relying on other systems. You have complete control over your network's consensus mechanisms, transaction processing, and security protocols. This independence allows you to scale horizontally without dependencies on other networks while maintaining full control over your network parameters and upgrades. This sovereignty is particularly important for projects that need complete autonomy over their blockchain's operation and evolution.

## When to Choose an Avalanche L1

We have presented some considerations in favor of running your own Avalanche L1 versus deploying on the C-Chain. If an application has a relatively low transaction rate and no special circumstances that would make the C-Chain a non-starter, you can begin with a C-Chain deployment to leverage existing technical infrastructure and later expand to an Avalanche L1. That way you can focus on the core of your project, and once you have a solid product/market fit and enough traction that the C-Chain is constricting you, plan a move to your own Avalanche L1.

Of course, we're happy to talk to you about your architecture and help you choose the best path forward. Feel free to reach out to us on [Discord](https://chat.avalabs.org/) or other [community channels](https://www.avax.network/community) we run.

## Develop Your Own Avalanche L1

Avalanche L1s are deployed by default with [Subnet-EVM](https://github.com/ava-labs/subnet-evm#subnet-evm), a fork of go-ethereum. It implements the Ethereum Virtual Machine and supports Solidity smart contracts as well as most other Ethereum client functionality.
To get started, check out our [L1 Toolbox](/tools/l1-toolbox) or the tutorials in the [Avalanche CLI](/docs/tooling/avalanche-cli) section.

# Simple VM in Any Language (/docs/avalanche-l1s/simple-vm-any-language)

---
title: Simple VM in Any Language
description: Learn how to implement a simple virtual machine in any language.
---

This is a language-agnostic, high-level document explaining the basics of how to get started implementing your own virtual machine from scratch.

Avalanche virtual machines are gRPC servers implementing Avalanche's [Proto interfaces](https://buf.build/ava-labs/avalanche). This means a VM can be written in [any language that has a gRPC implementation](https://grpc.io/docs/languages/).

## Minimal Implementation

To get the process started, you will need to implement at minimum the following interfaces:

- [`vm.Runtime`](https://buf.build/ava-labs/avalanche/docs/main:vm.runtime) (Client)
- [`vm.VM`](https://buf.build/ava-labs/avalanche/docs/main:vm) (Server)

To build a blockchain that takes advantage of AvalancheGo's consensus to build blocks, you will need to implement:

- [AppSender](https://buf.build/ava-labs/avalanche/docs/main:appsender) (Client)
- [Messenger](https://buf.build/ava-labs/avalanche/docs/main:messenger) (Client)

To have a JSON-RPC endpoint (`/ext/bc/subnetId/rpc`) exposed by AvalancheGo, you will need to implement:

- [`Http`](https://buf.build/ava-labs/avalanche/docs/main:http) (Server)

You can and should use a tool like `buf` to generate the client/server code from the interfaces, as described on the [Avalanche module](https://buf.build/ava-labs/avalanche)'s page.

There are _server_ and _client_ interfaces to implement: AvalancheGo calls the _server_ interfaces exposed by your VM, and your VM calls the _client_ interfaces exposed by AvalancheGo.

## Starting Process

Your VM is started by AvalancheGo launching your binary. Your binary is started as a sub-process of AvalancheGo.
While launching your binary, AvalancheGo passes an environment variable `AVALANCHE_VM_RUNTIME_ENGINE_ADDR` containing a URL. Your VM must use this URL to initialize a `vm.Runtime` client.

After starting a gRPC server implementing the VM interface, your VM must call [`vm.Runtime.InitializeRequest`](https://buf.build/ava-labs/avalanche/docs/main:vm.runtime#vm.runtime.InitializeRequest) with the following parameters:

- `protocolVersion`: Must match the `supported plugin version` of the [AvalancheGo release](https://github.com/ava-labs/AvalancheGo/releases) you are using. It is always part of the release notes.
- `addr`: Your gRPC server's address, in the format `host:port` (for example, `localhost:12345`).

## VM Initialization

The service methods below are described in the order they are called. You will need to implement them in your server.

### Pre-Initialization Sequence

AvalancheGo starts and stops your process multiple times before launching the real initialization sequence:

1. [VM.Version](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.Version)
   - Return: your VM's version.
2. [VM.CreateStaticHandlers](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.CreateStaticHandlers)
   - Return: an empty array (not absolutely required).
3. [VM.Shutdown](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.Shutdown)
   - You should gracefully stop your process.
   - Return: Empty

### Initialization Sequence

1. [VM.CreateStaticHandlers](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.CreateStaticHandlers)
   - Return: an empty array (not absolutely required).
2. [VM.Initialize](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.Initialize)
   - Param: an [InitializeRequest](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.InitializeRequest).
   - You must use this data to initialize your VM.
   - You should add the genesis block to your blockchain and set it as the last accepted block.
   - Return: an [InitializeResponse](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.InitializeResponse) containing data about the genesis extracted from the `genesis_bytes` sent in the request.
3. [VM.VerifyHeightIndex](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.VerifyHeightIndex)
   - Return: a [VerifyHeightIndexResponse](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VerifyHeightIndexResponse) with the code `ERROR_UNSPECIFIED` to indicate that no error has occurred.
4. [VM.CreateHandlers](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.CreateHandlers)
   - Serves the JSON-RPC endpoint (`/ext/bc/subnetId/rpc`) exposed by AvalancheGo. See [JSON-RPC](#json-rpc) for more detail.
   - Create an [`Http`](https://buf.build/ava-labs/avalanche/docs/main:http) server and get its URL.
   - Return: a `CreateHandlersResponse` containing a single item with the server's URL (or an empty array if not implementing the JSON-RPC endpoint).
5. [VM.StateSyncEnabled](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.StateSyncEnabled)
   - Return: `true` if you want to enable state sync, `false` otherwise.
6. [VM.SetState](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.SetState) _if you returned `true` from `StateSyncEnabled`_
   - Param: a [SetStateRequest](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.SetStateRequest) with the `StateSyncing` value.
   - Set your blockchain's state to `StateSyncing`.
   - Return: a [SetStateResponse](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.SetStateResponse) built from the genesis block.
7. [VM.GetOngoingSyncStateSummary](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.GetOngoingSyncStateSummary) _if you returned `true` from `StateSyncEnabled`_
   - Return: a [GetOngoingSyncStateSummaryResponse](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.GetOngoingSyncStateSummaryResponse) built from the genesis block.
8. [VM.SetState](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.SetState)
   - Param: a [SetStateRequest](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.SetStateRequest) with the `Bootstrapping` value.
   - Set your blockchain's state to `Bootstrapping`.
   - Return: a [SetStateResponse](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.SetStateResponse) built from the genesis block.
9. [VM.SetPreference](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.SetPreference)
   - Param: a `SetPreferenceRequest` containing the preferred block ID.
   - Return: Empty
10. [VM.SetState](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.SetState)
    - Param: a [SetStateRequest](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.SetStateRequest) with the `NormalOp` value.
    - Set your blockchain's state to `NormalOp`.
    - Return: a [SetStateResponse](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.SetStateResponse) built from the genesis block.
11. [VM.Connected](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.Connected) (called once for every other node validating this Avalanche L1)
    - Param: a [ConnectedRequest](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.ConnectedRequest) with the NodeID and the version of AvalancheGo.
    - Return: Empty
12. [VM.Health](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.Health)
    - Param: Empty
    - Return: a [HealthResponse](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.HealthResponse) with an empty `details` property.
13. [VM.ParseBlock](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.ParseBlock)
    - Param: a byte array containing a block (the genesis block in this case).
    - Return: a [ParseBlockResponse](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.ParseBlockResponse) built from the last accepted block.

At this point, your VM is fully started and initialized.
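The `SetState` calls above walk the VM through `StateSyncing`, `Bootstrapping`, and `NormalOp` in order. A minimal sketch of tracking that lifecycle inside a VM follows; the type and constant names are illustrative, not the canonical proto enum values:

```go
package main

import "fmt"

// State mirrors the values AvalancheGo passes in SetStateRequest.
// Names are illustrative; the canonical values live in the proto definitions.
type State int

const (
	StateSyncing State = iota
	Bootstrapping
	NormalOp
)

// vmState tracks the lifecycle and rejects transitions the
// initialization sequence never performs (i.e. moving backwards).
type vmState struct{ current State }

func (v *vmState) SetState(next State) error {
	if next < v.current {
		return fmt.Errorf("cannot move from state %d back to %d", v.current, next)
	}
	v.current = next
	return nil
}

func main() {
	v := &vmState{current: StateSyncing}
	fmt.Println(v.SetState(Bootstrapping)) // nil: moving forward is allowed
	fmt.Println(v.SetState(NormalOp))      // nil: moving forward is allowed
	fmt.Println(v.SetState(Bootstrapping)) // non-nil: the sequence never goes backwards
}
```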
### Building Blocks

#### Transaction Gossiping Sequence

When your VM receives transactions (for example, through the [JSON-RPC](#json-rpc) endpoints), it can gossip them to the other nodes by using the [AppSender](https://buf.build/ava-labs/avalanche/docs/main:appsender) service.

Suppose we have a three-node network with nodeX, nodeY, and nodeZ, and nodeX has received a new transaction on its JSON-RPC endpoint.

[`AppSender.SendAppGossip`](https://buf.build/ava-labs/avalanche/docs/main:appsender#appsender.AppSender.SendAppGossip) (_client_): You must serialize your transaction data into a byte array and call `SendAppGossip` to propagate the transaction. AvalancheGo then propagates it to the other nodes.

[VM.AppGossip](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.AppGossip): You must deserialize the transaction and store it for the next block.

- Param: a byte array containing your transaction data, and the NodeID of the node that sent the gossip message.
- Return: Empty

#### Block Building Sequence

Whenever your VM is ready to build a new block, it initiates the block building process using the [Messenger](https://buf.build/ava-labs/avalanche/docs/main:messenger) service. Suppose that nodeY wants to build the block; you will probably implement some kind of background worker checking every second whether there are any pending transactions.

[`Messenger.Notify`](https://buf.build/ava-labs/avalanche/docs/main:messenger#messenger.Messenger.Notify) (_client_): You must issue a notify request to AvalancheGo by calling the method with the `MESSAGE_BUILD_BLOCK` value.

1. [VM.BuildBlock](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.BuildBlock)
   - Param: Empty
   - You must build a block with your pending transactions and serialize it to a byte array.
   - Store this block in memory as a pending block.
   - Return: a [BuildBlockResponse](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.BuildBlockResponse) built from the newly built block and its associated data (`id`, `parent_id`, `height`, `timestamp`).
2. [VM.BlockVerify](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.BlockVerify)
   - Param: the byte array containing the block data.
   - Return: the block's timestamp.
3. [VM.SetPreference](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.SetPreference)
   - Param: the block's ID.
   - You must mark this block as the next preferred block.
   - Return: Empty

On the other nodes, which receive the newly built block rather than building it, the sequence is:

1. [VM.ParseBlock](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.ParseBlock)
   - Param: a byte array containing the newly built block's data.
   - Store this block in memory as a pending block.
   - Return: a [ParseBlockResponse](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.ParseBlockResponse) built from the last accepted block.
2. [VM.BlockVerify](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.BlockVerify)
   - Param: the byte array containing the block data.
   - Return: the block's timestamp.
3. [VM.SetPreference](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.SetPreference)
   - Param: the block's ID.
   - You must mark this block as the next preferred block.
   - Return: Empty

[VM.BlockAccept](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.BlockAccept): You must accept this block as your last final block.

- Param: the block's ID.
- Return: Empty

#### Managing Conflicts

Conflicts happen when two or more nodes propose the next block at the same time. AvalancheGo takes care of this and decides which block should be considered final and which blocks should be rejected, using the Snowman consensus protocol. On the VM side, all there is to do is implement the `VM.BlockAccept` and `VM.BlockReject` methods.
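A sketch of the bookkeeping a VM might keep behind `BuildBlock`/`ParseBlock` and the `BlockAccept`/`BlockReject` callbacks; this is illustrative only (a real VM would persist accepted blocks to a database rather than hold them in memory):

```go
package main

import (
	"errors"
	"fmt"
)

// blockStore holds blocks returned from BuildBlock/ParseBlock until
// consensus calls BlockAccept or BlockReject on them.
type blockStore struct {
	pending      map[string][]byte // blockID -> serialized block
	lastAccepted string
}

func newBlockStore() *blockStore {
	return &blockStore{pending: map[string][]byte{}}
}

// Add registers a newly built or parsed block as pending.
func (s *blockStore) Add(id string, data []byte) { s.pending[id] = data }

// Accept makes a pending block final and records it as the last accepted block.
func (s *blockStore) Accept(id string) error {
	if _, ok := s.pending[id]; !ok {
		return errors.New("unknown block: " + id)
	}
	s.lastAccepted = id
	delete(s.pending, id)
	return nil
}

// Reject discards a pending block that lost the consensus round.
func (s *blockStore) Reject(id string) error {
	if _, ok := s.pending[id]; !ok {
		return errors.New("unknown block: " + id)
	}
	delete(s.pending, id)
	return nil
}

func main() {
	s := newBlockStore()
	s.Add("0x123", []byte("block data"))
	s.Add("0x321", []byte("competing block"))
	fmt.Println(s.Accept("0x123"), s.Reject("0x321"), s.lastAccepted) // <nil> <nil> 0x123
}
```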
_nodeX proposes block `0x123...`, nodeY proposes block `0x321...`, and nodeZ proposes block `0x456...`_

There are three conflicting blocks (different hashes), and if we look at our VM's log files, we can see that AvalancheGo uses Snowman to decide which block must be accepted.

```bash
... snowman/voter.go:58 filtering poll results ...
... snowman/voter.go:65 finishing poll ...
... snowman/voter.go:87 Snowman engine can't quiesce ...
... snowman/voter.go:58 filtering poll results ...
... snowman/voter.go:65 finishing poll ...
... snowman/topological.go:600 accepting block
```

Suppose AvalancheGo accepts block `0x123...`. The following RPC methods are called on all nodes:

1. [VM.BlockAccept](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.BlockAccept): You must accept this block as your last final block.
   - Param: the block's ID (`0x123...`)
   - Return: Empty
2. [VM.BlockReject](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.BlockReject): You must mark this block as rejected.
   - Param: the block's ID (`0x321...`)
   - Return: Empty
3. [VM.BlockReject](https://buf.build/ava-labs/avalanche/docs/main:vm#vm.VM.BlockReject): You must mark this block as rejected.
   - Param: the block's ID (`0x456...`)
   - Return: Empty

### JSON-RPC

To enable your JSON-RPC endpoint, you must implement the [HandleSimple](https://buf.build/ava-labs/avalanche/docs/main:http#http.HTTP.HandleSimple) method of the [`Http`](https://buf.build/ava-labs/avalanche/docs/main:http) interface.

- Param: a [HandleSimpleHTTPRequest](https://buf.build/ava-labs/avalanche/docs/main:http#http.HandleSimpleHTTPRequest) containing the original request's method, URL, headers, and body.
- Analyze, deserialize, and handle the request. For example: if the request represents a transaction, you must deserialize it, check its signature, store it, and gossip it to the other nodes using the [messenger client](#block-building-sequence).
- Return: the [HandleSimpleHTTPResponse](https://buf.build/ava-labs/avalanche/docs/main:http#http.HandleSimpleHTTPResponse) response that will be sent back to the original sender.

This server is registered with AvalancheGo during the [initialization process](#initialization-sequence) when the `VM.CreateHandlers` method is called. You must simply respond with the server's URL in the `CreateHandlersResponse` result.

# Introduction (/docs/avalanche-l1s/virtual-machines-index)

---
title: Introduction
description: Learn about the execution layer of a blockchain network.
---

A Virtual Machine (VM) is a blueprint for a blockchain. Blockchains are instantiated from a VM, similar to how objects are instantiated from a class definition. VMs can define anything you want, but will generally define the transactions that are executed and how blocks are created.

## Blocks and State

Virtual Machines deal with blocks and state. The functionality provided by VMs is to:

- Define the representation of a blockchain's state
- Represent the operations on that state
- Apply the operations on that state

Each block in the blockchain contains a set of state transitions. Blocks are applied in order, from the blockchain's initial genesis block to its last accepted block, to reach the latest state of the blockchain.

## Blockchain

A blockchain relies on two major components: the **Consensus Engine** and the **VM**. The VM defines application-specific behavior and how blocks are built and parsed to create the blockchain. All VMs run on top of the Avalanche Consensus Engine, which allows the nodes in the network to agree on the state of the blockchain. Here's a quick example of how VMs interact with consensus:

1. A node wants to update the blockchain's state.
2. The node's VM notifies the consensus engine that it wants to update the state.
3. The consensus engine requests the block from the VM.
4. The consensus engine verifies the returned block using the VM's implementation of `Verify()`.
5. The consensus engine gets the network to reach consensus on whether to accept or reject the newly verified block. Every virtuous (well-behaved) node on the network will have the same preference for a particular block.
6. Depending on the consensus results, the engine either accepts or rejects the block. What happens when a block is accepted or rejected is specific to the implementation of the VM.

AvalancheGo provides the consensus engine for every blockchain on the Avalanche Network. The consensus engine relies on the VM interface to handle building, parsing, and storing blocks, as well as verifying and executing them on behalf of the consensus engine. This decoupling between the application and consensus layers allows developers to build their applications quickly by implementing virtual machines, without having to worry about the consensus layer, which is managed by Avalanche and deals with how nodes agree on whether or not to accept a block.

## Installing a VM

VMs are supplied as binaries to a node running `AvalancheGo`. These binaries must be named after the VM's assigned **VMID**. A VMID is a 32-byte hash encoded in CB58 that is generated when you build your VM.

In order to install a VM, its binary must be placed in the `AvalancheGo` plugin path. See [here](/docs/nodes/configure/configs-flags#--plugin-dir-string) for more details. Multiple VMs can be installed in this location.

Each VM runs as a separate process from AvalancheGo and communicates with `AvalancheGo` using gRPC calls. This functionality is enabled by **RPCChainVM**, a special VM that wraps around other VM implementations and bridges the VM and AvalancheGo, establishing a standardized communication protocol between them. During VM creation, handshake messages are exchanged via **RPCChainVM** between AvalancheGo and the VM installation.
To avoid errors, ensure matching **RPCChainVM** protocol versions by updating your VM or using a [different version of AvalancheGo](https://github.com/ava-labs/AvalancheGo/releases). Note that some VMs may not support the latest protocol version.

### API Handlers

Users can interact with a blockchain and its VM through handlers exposed by the VM's API. VMs expose two types of handlers to serve responses for incoming requests:

- **Blockchain Handlers**: Referred to as handlers, these expose APIs to interact with a blockchain instantiated by a VM. The API endpoint will be different for each chain. The endpoint for a handler is `/ext/bc/[chainID]`.
- **VM Handlers**: Referred to as static handlers, these expose APIs to interact with the VM directly. One example API would be to parse genesis data to instantiate a new blockchain. The endpoint for a static handler is `/ext/vm/[vmID]`.

For readers familiar with object-oriented programming, static and non-static handlers on a VM are analogous to static and non-static methods on a class. Blockchain handlers can be thought of as methods on an object, whereas VM handlers can be thought of as static methods on a class.

### Instantiate a VM

The `vm.Factory` interface is implemented to create new VM instances from which a blockchain can be initialized. The factory's `New` method shown below provides `AvalancheGo` with an instance of the VM. It's defined in the [`factory.go`](https://github.com/ava-labs/timestampvm/blob/main/timestampvm/factory.go) file of the `timestampvm` repository.

```go
// Returning a new VM instance from VM's factory
func (f *Factory) New(*snow.Context) (interface{}, error) {
	return &vm.VM{}, nil
}
```

### Initializing a VM to Create a Blockchain

Before a VM can run, AvalancheGo will initialize it by invoking its `Initialize` method. Here, the VM will bootstrap itself and set up anything it requires before it starts running.
This might involve setting up its database, mempool, genesis state, or anything else the VM requires to run. ```go if err := vm.Initialize( ctx.Context, vmDBManager, genesisData, chainConfig.Upgrade, chainConfig.Config, msgChan, fxs, sender, ); ``` You can refer to the [implementation](https://github.com/ava-labs/timestampvm/blob/main/timestampvm/vm.go#L75) of `vm.initialize` in the TimestampVM repository. ## Interfaces Every VM should implement the following interfaces: ### `block.ChainVM` To reach a consensus on linear blockchains, Avalanche uses the Snowman consensus engine. To be compatible with Snowman, a VM must implement the `block.ChainVM` interface. For more information, see [here](https://github.com/ava-labs/avalanchego/blob/master/snow/engine/snowman/block/vm.go). ```go title="snow/engine/snowman/block/vm.go" // ChainVM defines the required functionality of a Snowman VM. // // A Snowman VM is responsible for defining the representation of the state, // the representation of operations in that state, the application of operations // on that state, and the creation of the operations. Consensus will decide on // if the operation is executed and the order operations are executed. // // For example, suppose we have a VM that tracks an increasing number that // is agreed upon by the network. // The state is a single number. // The operation is setting the number to a new, larger value. // Applying the operation will save to the database the new value. // The VM can attempt to issue a new number, of larger value, at any time. // Consensus will ensure the network agrees on the number at every block height. type ChainVM interface { common.VM Getter Parser // Attempt to create a new block from data contained in the VM. // // If the VM doesn't want to issue a new block, an error should be // returned. BuildBlock() (snowman.Block, error) // Notify the VM of the currently preferred block. // // This should always be a block that has no children known to consensus. 
SetPreference(ids.ID) error // LastAccepted returns the ID of the last accepted block. // // If no blocks have been accepted by consensus yet, it is assumed there is // a definitionally accepted block, the Genesis block, that will be // returned. LastAccepted() (ids.ID, error) } // Getter defines the functionality for fetching a block by its ID. type Getter interface { // Attempt to load a block. // // If the block does not exist, an error should be returned. // GetBlock(ids.ID) (snowman.Block, error) } // Parser defines the functionality for fetching a block by its bytes. type Parser interface { // Attempt to create a block from a stream of bytes. // // The block should be represented by the full byte array, without extra // bytes. ParseBlock([]byte) (snowman.Block, error) } ``` ### `common.VM` `common.VM` is a type that every `VM` must implement. For more information, you can see the full file [here](https://github.com/ava-labs/avalanchego/blob/master/snow/engine/common/vm.go). ```go title="snow/engine/common/vm.go" // VM describes the interface that all consensus VMs must implement type VM interface { // Contains handlers for VM-to-VM specific messages AppHandler // Returns nil if the VM is healthy. // Periodically called and reported via the node's Health API. health.Checkable // Connector represents a handler that is called on connection connect/disconnect validators.Connector // Initialize this VM. // [ctx]: Metadata about this VM. // [ctx.networkID]: The ID of the network this VM's chain is running on. // [ctx.chainID]: The unique ID of the chain this VM is running on. // [ctx.Log]: Used to log messages // [ctx.NodeID]: The unique staker ID of this node. // [ctx.Lock]: A Read/Write lock shared by this VM and the consensus // engine that manages this VM. The write lock is held // whenever code in the consensus engine calls the VM. // [dbManager]: The manager of the database this VM will persist data to. 
// [genesisBytes]: The byte-encoding of the genesis information of this // VM. The VM uses it to initialize its state. For // example, if this VM were an account-based payments // system, `genesisBytes` would probably contain a genesis // transaction that gives coins to some accounts, and this // transaction would be in the genesis block. // [toEngine]: The channel used to send messages to the consensus engine. // [fxs]: Feature extensions that attach to this VM. Initialize( ctx *snow.Context, dbManager manager.Manager, genesisBytes []byte, upgradeBytes []byte, configBytes []byte, toEngine chan<- Message, fxs []*Fx, appSender AppSender, ) error // Bootstrapping is called when the node is starting to bootstrap this chain. Bootstrapping() error // Bootstrapped is called when the node is done bootstrapping this chain. Bootstrapped() error // Shutdown is called when the node is shutting down. Shutdown() error // Version returns the version of the VM this node is running. Version() (string, error) // Creates the HTTP handlers for custom VM network calls. // // This exposes handlers that the outside world can use to communicate with // a static reference to the VM. Each handler has the path: // [Address of node]/ext/VM/[VM ID]/[extension] // // Returns a mapping from [extension]s to HTTP handlers. // // Each extension can specify how locking is managed for convenience. // // For example, it might make sense to have an extension for creating // genesis bytes this VM can interpret. CreateStaticHandlers() (map[string]*HTTPHandler, error) // Creates the HTTP handlers for custom chain network calls. // // This exposes handlers that the outside world can use to communicate with // the chain. Each handler has the path: // [Address of node]/ext/bc/[chain ID]/[extension] // // Returns a mapping from [extension]s to HTTP handlers. // // Each extension can specify how locking is managed for convenience. 
	//
	// For example, if this VM implements an account-based payments system,
	// it may have an extension called `accounts`, where clients could get
	// information about their accounts.
	CreateHandlers() (map[string]*HTTPHandler, error)
}
```

### `snowman.Block`

The `snowman.Block` interface defines the functionality a block must implement to be a block in a linear Snowman chain. For more information, you can see the full file [here](https://github.com/ava-labs/avalanchego/blob/master/snow/consensus/snowman/block.go).

```go title="snow/consensus/snowman/block.go"
// Block is a possible decision that dictates the next canonical block.
//
// Blocks are guaranteed to be Verified, Accepted, and Rejected in topological
// order. Specifically, if Verify is called, then the parent has already been
// verified. If Accept is called, then the parent has already been accepted. If
// Reject is called, the parent has already been accepted or rejected.
//
// If the status of the block is Unknown, ID is assumed to be able to be called.
// If the status of the block is Accepted or Rejected; Parent, Verify, Accept,
// and Reject will never be called.
type Block interface {
	choices.Decidable

	// Parent returns the ID of this block's parent.
	Parent() ids.ID

	// Verify that the state transition this block would make if accepted is
	// valid. If the state transition is invalid, a non-nil error should be
	// returned.
	//
	// It is guaranteed that the Parent has been successfully verified.
	Verify() error

	// Bytes returns the binary representation of this block.
	//
	// This is used for sending blocks to peers. The bytes should be able to be
	// parsed into the same block on another node.
	Bytes() []byte

	// Height returns the height of this block in the chain.
	Height() uint64
}
```

### `choices.Decidable`

This interface is a superset of every decidable object, such as transactions, blocks, and vertices.
For more information, you can see the full file [here](https://github.com/ava-labs/avalanchego/blob/master/snow/choices/decidable.go). ```go title="snow/choices/decidable.go" // Decidable represents element that can be decided. // // Decidable objects are typically thought of as either transactions, blocks, or // vertices. type Decidable interface { // ID returns a unique ID for this element. // // Typically, this is implemented by using a cryptographic hash of a // binary representation of this element. An element should return the same // IDs upon repeated calls. ID() ids.ID // Accept this element. // // This element will be accepted by every correct node in the network. Accept() error // Reject this element. // // This element will not be accepted by any correct node in the network. Reject() error // Status returns this element's current status. // // If Accept has been called on an element with this ID, Accepted should be // returned. Similarly, if Reject has been called on an element with this // ID, Rejected should be returned. If the contents of this element are // unknown, then Unknown should be returned. Otherwise, Processing should be // returned. Status() Status } ``` # WAGMI Avalanche L1 (/docs/avalanche-l1s/wagmi-avalanche-l1) --- title: WAGMI Avalanche L1 description: Learn about the WAGMI Avalanche L1 in this detailed case study. --- This is one of the first cases of using Avalanche L1s as a proving ground for changes in a production VM (Coreth). Many underestimate how useful the isolation of Avalanche L1s is for performing complex VM testing on a live network (without impacting the stability of the primary network). We created a basic WAGMI Explorer [https://subnets-test.avax.network/wagmi](https://subnets-test.avax.network/wagmi) that surfaces aggregated usage statistics about the Avalanche L1. 
- SubnetID: [28nrH5T2BMvNrWecFcV3mfccjs6axM1TVyqe79MCv2Mhs8kxiY](https://explorer-xp.avax-test.network/avalanche-l1/28nrH5T2BMvNrWecFcV3mfccjs6axM1TVyqe79MCv2Mhs8kxiY?tab=validators)
- ChainID: [2ebCneCbwthjQ1rYT41nhd7M76Hc6YmosMAQrTFhBq8qeqh6tt](https://testnet.avascan.info/blockchain/2ebCneCbwthjQ1rYT41nhd7M76Hc6YmosMAQrTFhBq8qeqh6tt)

### Network Parameters

- NetworkID: 11111
- ChainID: 11111
- Block Gas Limit: 20,000,000 (2.5x C-Chain)
- 10s Gas Target: 100,000,000 (~6.67x C-Chain)
- Min Fee: 1 Gwei (4% of C-Chain)
- Target Block Rate: 2s (C-Chain now uses 1s)

The genesis file of WAGMI can be found [here](https://github.com/ava-labs/public-chain-assets/blob/1951594346dcc91682bdd8929bcf8c1bf6a04c33/chains/11111/genesis.json).

### Adding WAGMI to Core

- Network Name: WAGMI
- RPC URL: https://subnets.avax.network/wagmi/wagmi-chain-testnet/rpc
- WS URL: wss://subnets.avax.network/wagmi/wagmi-chain-testnet/ws
- Chain ID: 11111
- Symbol: WGM
- Explorer: https://subnets.avax.network/wagmi/wagmi-chain-testnet/explorer

This can be used with other wallets too, such as MetaMask.

Case Study: WAGMI Upgrades
--------------------------

This case study uses the [WAGMI](https://subnets-test.avax.network/wagmi) Avalanche L1 upgrades to show how a network upgrade on an EVM-based (Ethereum Virtual Machine) Avalanche L1 can be done simply, and how the resulting upgrade can be used to dynamically control the fee structure on the Avalanche L1.

### Introduction

[Subnet-EVM](https://github.com/ava-labs/subnet-evm) aims to provide an easy-to-use toolbox to customize the EVM for your blockchain. It is meant to run out of the box for many Avalanche L1s without any modification.
But what happens when you want to add a new feature that updates the rules of your EVM?

Instead of hard-coding the timing of network upgrades in client code like most EVM chains, which requires coordinated deployments of new code, [Subnet-EVM v0.2.8](https://github.com/ava-labs/subnet-evm/releases/tag/v0.2.8) introduces the long-awaited feature to perform network upgrades by just using a few lines of JSON in a configuration file.

### Network Upgrades: Enable/Disable Precompiles

A detailed description of how to do this can be found in the [Customize an Avalanche L1](/docs/avalanche-l1s/evm-configuration/customize-avalanche-l1#network-upgrades-enabledisable-precompiles) tutorial. Here's a summary:

1. A network upgrade utilizes existing precompiles on the Subnet-EVM:
   - ContractDeployerAllowList, for restricting smart contract deployers
   - TransactionAllowList, for restricting who can submit transactions
   - NativeMinter, for minting native coins
   - FeeManager, for configuring dynamic fees
   - RewardManager, for enabling block rewards
2. Each of these precompiles can be individually enabled or disabled at a given timestamp as a network upgrade, or have any of the parameters governing its behavior changed.
3. These upgrades must be specified in a file named `upgrade.json` placed in the same directory where [`config.json`](/docs/avalanche-l1s/evm-configuration/customize-avalanche-l1#avalanchego-chain-configs) resides: `{chain-config-dir}/{blockchainID}/upgrade.json`.

### Preparation

To prepare for the first WAGMI network upgrade, on August 15, 2022, we made an announcement on [X](https://x.com/AaronBuchwald/status/1559249414102720512) and shared it on other social media such as Discord. For the second upgrade, on February 24, 2024, we had another announcement on [X](https://x.com/jceyonur/status/1760777031858745701?s=20).
### Deploying upgrade.json

The content of the `upgrade.json` is:

```json
{
  "precompileUpgrades": [
    {
      "feeManagerConfig": {
        "adminAddresses": ["0x6f0f6DA1852857d7789f68a28bba866671f3880D"],
        "blockTimestamp": 1660658400
      }
    },
    {
      "contractNativeMinterConfig": {
        "blockTimestamp": 1708696800,
        "adminAddresses": ["0x6f0f6DA1852857d7789f68a28bba866671f3880D"],
        "managerAddresses": ["0xadFA2910DC148674910c07d18DF966A28CD21331"]
      }
    }
  ]
}
```

With the above `upgrade.json`, we intend to perform two network upgrades:

1. The first upgrade activates the FeeManager precompile:
   - `0x6f0f6DA1852857d7789f68a28bba866671f3880D` is named as the new Admin of the FeeManager precompile.
   - `1660658400` is the [Unix timestamp](https://www.unixtimestamp.com/) for Tue Aug 16 2022 14:00:00 GMT+0000 (a time in the future when we made the announcement) when the FeeManager change would take effect.
2. The second upgrade activates the NativeMinter precompile:
   - `0x6f0f6DA1852857d7789f68a28bba866671f3880D` is named as the new Admin of the NativeMinter precompile.
   - `0xadFA2910DC148674910c07d18DF966A28CD21331` is named as the new Manager of the NativeMinter precompile. Manager addresses are enabled after the Durango upgrade, which occurred on February 13, 2024.
   - `1708696800` is the [Unix timestamp](https://www.unixtimestamp.com/) for Fri Feb 23 2024 14:00:00 GMT+0000 (a time in the future when we made the announcement) when the NativeMinter change would take effect.

A detailed explanation of `feeManagerConfig` can be found [here](/docs/avalanche-l1s/evm-configuration/customize-avalanche-l1#configuring-dynamic-fees), and of `contractNativeMinterConfig` [here](/docs/avalanche-l1s/evm-configuration/customize-avalanche-l1#minting-native-coins).

We place the `upgrade.json` file in the chain config directory, which in our case is `~/.avalanchego/configs/chains/2ebCneCbwthjQ1rYT41nhd7M76Hc6YmosMAQrTFhBq8qeqh6tt/`.
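The `blockTimestamp` values in an `upgrade.json` are plain Unix seconds, which makes them easy to sanity-check before restarting a node. As a minimal sketch (the Go types below are hypothetical and model only the fields shown here, not AvalancheGo's actual config structs), the file can be parsed and each precompile's activation time printed:

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// Hypothetical types modeling only the fields shown in the WAGMI
// upgrade.json; they are not AvalancheGo's actual config structs.
type precompileConfig struct {
	BlockTimestamp   int64    `json:"blockTimestamp"`
	AdminAddresses   []string `json:"adminAddresses"`
	ManagerAddresses []string `json:"managerAddresses,omitempty"`
}

type upgradeFile struct {
	PrecompileUpgrades []map[string]precompileConfig `json:"precompileUpgrades"`
}

const sampleUpgradeJSON = `{"precompileUpgrades":[
  {"feeManagerConfig":{"adminAddresses":["0x6f0f6DA1852857d7789f68a28bba866671f3880D"],"blockTimestamp":1660658400}},
  {"contractNativeMinterConfig":{"blockTimestamp":1708696800,
    "adminAddresses":["0x6f0f6DA1852857d7789f68a28bba866671f3880D"],
    "managerAddresses":["0xadFA2910DC148674910c07d18DF966A28CD21331"]}}]}`

// activationTimes maps each precompile config name to its activation
// time in RFC 3339 UTC form.
func activationTimes(raw string) (map[string]string, error) {
	var f upgradeFile
	if err := json.Unmarshal([]byte(raw), &f); err != nil {
		return nil, err
	}
	out := map[string]string{}
	for _, upgrade := range f.PrecompileUpgrades {
		for name, cfg := range upgrade {
			out[name] = time.Unix(cfg.BlockTimestamp, 0).UTC().Format(time.RFC3339)
		}
	}
	return out, nil
}

func main() {
	times, err := activationTimes(sampleUpgradeJSON)
	if err != nil {
		panic(err)
	}
	for name, ts := range times {
		fmt.Printf("%s activates at %s\n", name, ts)
	}
}
```

Running this against the WAGMI file prints the two activation times used in this case study (`2022-08-16T14:00:00Z` and `2024-02-23T14:00:00Z`).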
After that, we restart the node so the upgrade file is loaded.

When the node restarts, AvalancheGo reads the contents of the JSON file and passes it into Subnet-EVM. We see a log of the chain configuration that includes the updated precompile upgrade. It looks like this:

```bash
INFO [02-22|18:27:06.473] <2ebCneCbwthjQ1rYT41nhd7M76Hc6YmosMAQrTFhBq8qeqh6tt Chain> github.com/ava-labs/subnet-evm/core/blockchain.go:335: Upgrade Config: {"precompileUpgrades":[{"feeManagerConfig":{"adminAddresses":["0x6f0f6da1852857d7789f68a28bba866671f3880d"],"blockTimestamp":1660658400}},{"contractNativeMinterConfig":{"adminAddresses":["0x6f0f6da1852857d7789f68a28bba866671f3880d"],"managerAddresses":["0xadfa2910dc148674910c07d18df966a28cd21331"],"blockTimestamp":1708696800}}]}
```

We note that `precompileUpgrades` correctly shows the upcoming precompile upgrades. The upgrade is locked in and ready.

### Activations

Once the time passed 10:00 AM EDT on August 16, 2022 (Unix timestamp 1660658400), the `upgrade.json` was executed as planned and the new FeeManager admin address was activated. From now on, we don't need to issue any new code or deploy anything on the WAGMI nodes to change the fee structure. Let's see how it works in practice!

For the second upgrade on February 23, 2024, the same process was followed. The `upgrade.json` was executed after Durango, as planned, and the new NativeMinter admin and manager addresses were activated.

### Using Fee Manager

The owner `0x6f0f6DA1852857d7789f68a28bba866671f3880D` can now configure the fees on the Avalanche L1 as they see fit. To do that, all that's needed is access to the network, the private key for the newly set admin address, and calls on the precompiled contract.
We will use the [Remix](https://remix.ethereum.org/) online Solidity IDE and the [Core Browser Extension](https://support.avax.network/en/articles/6066879-core-extension-how-do-i-add-the-core-extension). Core comes with the WAGMI network built in. MetaMask will do as well, but you will need to [add WAGMI](/docs/avalanche-l1s/wagmi-avalanche-l1) yourself.

First, using Core, we open the account of the owner `0x6f0f6DA1852857d7789f68a28bba866671f3880D`. Then we connect Core to WAGMI: switch on `Testnet Mode` by clicking the Tools icon in the sidebar, then selecting Settings:

![Core Testnet mode](/images/wagmi1.png)

Then open the `Manage Networks` menu in the networks dropdown. Select WAGMI there by clicking the star icon:

![Core network selection](/images/wagmi2.png)

We then switch to WAGMI in the networks dropdown. We are ready to move on to Remix now, so we open it in the browser.

First, we check that Remix sees the extension and correctly talks to it. We select the `Deploy & run transactions` icon on the left edge, and in the Environment dropdown, select `Injected Provider`. We need to approve the Remix network access in the Core browser extension. When that is done, `Custom (11111) network` is shown:

![Injected provider](/images/wagmi3.png)

Good, we're talking to the WAGMI Avalanche L1. Next we need to load the contracts into Remix. Using the 'load from GitHub' option from the Remix home screen, we load two contracts:

- [IAllowList.sol](https://github.com/ava-labs/subnet-evm/blob/master/contracts/contracts/interfaces/IAllowList.sol)
- and [IFeeManager.sol](https://github.com/ava-labs/subnet-evm/blob/master/contracts/contracts/interfaces/IFeeManager.sol).

IFeeManager is our precompile, but it references IAllowList, so we need that one as well.
We compile IFeeManager.sol and use the deployed contract at the precompile address `0x0200000000000000000000000000000000000003` used on the [Avalanche L1](https://github.com/ava-labs/subnet-evm/blob/master/precompile/contracts/feemanager/module.go#L21).

![Deployed contract](/images/wagmi4.png)

Now we can interact with the FeeManager precompile from within Remix via Core. For example, we can use the `getFeeConfig` method to check the current fee configuration. This action can be performed by anyone, as it is just a read operation.

Once we have the new desired configuration for the fees on the Avalanche L1, we can use `setFeeConfig` to change the parameters. This action can **only** be performed by the owner `0x6f0f6DA1852857d7789f68a28bba866671f3880D`, as it is the `adminAddress` specified in the [`upgrade.json` above](#deploying-upgradejson).

![setFeeConfig](/images/wagmi5.png)

When we call that method by pressing the `transact` button, a new transaction is posted to the Avalanche L1, and we can see it on [the explorer](https://subnets-test.avax.network/wagmi/block/0xad95ccf04f6a8e018ece7912939860553363cc23151a0a31ea429ba6e60ad5a3):

![transaction](/images/wagmi6.png)

Immediately after the transaction is accepted, the new fee config takes effect. We can check with `getFeeConfig` that the values are reflected in the active fee config (again, this action can be performed by anyone):

![getFeeConfig](/images/wagmi7.png)

That's it, fees changed! No network upgrades, no complex and risky deployments; just a simple contract call, and the new fee configuration is in place!

### Using NativeMinter

For the NativeMinter, we can use the same process to connect to the Avalanche L1 and interact with the precompile.
We can load the INativeMinter interface using the 'load from GitHub' option from the Remix home screen with the following contracts:

- [IAllowList.sol](https://github.com/ava-labs/subnet-evm/blob/master/contracts/contracts/interfaces/IAllowList.sol)
- and [INativeMinter.sol](https://github.com/ava-labs/subnet-evm/blob/master/contracts/contracts/interfaces/INativeMinter.sol).

We can compile them and interact with the deployed contract at the precompile address `0x0200000000000000000000000000000000000001` used on the [Avalanche L1](https://github.com/ava-labs/subnet-evm/blob/master/precompile/contracts/nativeminter/module.go#L22).

![Deployed contract](/images/wagmi8.png)

The native minter precompile is used to mint native coins to specified addresses. The minted coins are added to the current supply and can be used by the recipient to pay for gas fees. For more information about the native minter precompile, see [here](/docs/avalanche-l1s/evm-configuration/customize-avalanche-l1#minting-native-coins).

The `mintNativeCoin` method can only be called by enabled, manager, and admin addresses. For this upgrade, we have added both an admin and a manager address in the [`upgrade.json` above](#deploying-upgradejson). The manager address became available after the Durango upgrade, which occurred on February 13, 2024. We will use the manager address `0xadfa2910dc148674910c07d18df966a28cd21331` to mint native coins.

![mintNativeCoin](/images/wagmi9.png)

When we call that method by pressing the `transact` button, a new transaction is posted to the Avalanche L1, and we can see it on [the explorer](https://subnets-test.avax.network/wagmi/tx/0xc4aaba7b5863c1b8f6664ac1d483e2d7d392ab58d1a8feb0b6c318cbae7f1e93):

![tx](/images/wagmi10.png)

As a result of this transaction, the native minter precompile minted a new native coin (1 WGM) to the recipient address `0xB78cbAa319ffBD899951AA30D4320f5818938310`.
The address page on the explorer [here](https://subnets-test.avax.network/wagmi/address/0xB78cbAa319ffBD899951AA30D4320f5818938310) shows no incoming transaction; this is because the 1 WGM was minted directly by the EVM itself, without any sender.

### Conclusion

Network upgrades can be complex and perilous procedures to carry out safely. Our continuing effort with Avalanche L1s is to make upgrades as painless and simple as possible. With the powerful combination of stateful precompiles and network upgrades via upgrade configuration files, we have managed to greatly simplify both network upgrades and network parameter changes. This in turn enables much safer experimentation and many new use cases that were previously too risky and complex, given the high-coordination efforts required by traditional network upgrade mechanisms.

We hope this case study sparks ideas for new things you may try on your own. We're looking forward to seeing what you have built and how easy upgrades help you manage your Avalanche L1s! If you have any questions or issues, feel free to contact us on our [Discord](https://chat.avalabs.org/). Or just reach out to tell us what exciting new things you have built!
# ACP-103: Dynamic Fees (/docs/acps/103-dynamic-fees)

---
title: "ACP-103: Dynamic Fees"
description: "Details for Avalanche Community Proposal 103: Dynamic Fees"
edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/103-dynamic-fees/README.md
---

| ACP | 103 |
| :--- | :--- |
| **Title** | Add Dynamic Fees to the P-Chain |
| **Author(s)** | Dhruba Basu ([@dhrubabasu](https://github.com/dhrubabasu)), Alberto Benegiamo ([@abi87](https://github.com/abi87)), Stephen Buttolph ([@StephenButtolph](https://github.com/StephenButtolph)) |
| **Status** | Activated ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/104)) |
| **Track** | Standards |

## Abstract

Introduce a dynamic fee mechanism to the P-Chain. Preview a future transition to a multidimensional fee mechanism.

## Motivation

Blockchains are resource-constrained environments. Users are charged for the execution and inclusion of their transactions based on the blockchain's transaction fee mechanism. The mechanism should fluctuate based on the supply of and demand for said resources to serve as a deterrent against spam and denial-of-service attacks.

With a fixed fee mechanism, users are provided with simplicity and predictability, but network congestion and resource constraints are not taken into account. There is no incentive for users to withhold transactions since the cost is fixed regardless of the demand. The fee does not adjust the execution and inclusion cost of transactions toward the market clearing price.

The C-Chain, in [Apricot Phase 3](https://medium.com/avalancheavax/apricot-phase-three-c-chain-dynamic-fees-432d32d67b60), employs a dynamic fee mechanism to raise the price during periods of high demand and lower the price during periods of low demand. As the price gets too expensive, network utilization will decrease, which drops the price. This ensures the execution and inclusion fee of transactions closely matches the market clearing price.
The P-Chain currently operates under a fixed fee mechanism. To more robustly handle spikes in load expected from introducing the improvements in [ACP-77](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/77-reinventing-subnets/README.md), it should be migrated to a dynamic fee mechanism. The X-Chain also currently operates under a fixed fee mechanism. However, due to the current lower usage and lack of new feature introduction, the migration of the X-Chain to a dynamic fee mechanism is deferred to a later ACP to reduce unnecessary additional technical complexity. ## Specification ### Dimensions There are four dimensions that will be used to approximate the computational cost of, or "gas" consumed in, a transaction: 1. Bandwidth $B$ is the amount of network bandwidth used for transaction broadcast. This is set to the size of the transaction in bytes. 2. Reads $R$ is the number of state/database reads used in transaction execution. 3. Writes $W$ is the number of state/database writes used in transaction execution. 4. Compute $C$ is the total amount of compute used to verify and execute a transaction, measured in microseconds. The gas consumed $G$ in a transaction is: $$G = B + 1000R + 1000W + 4C$$ A future ACP could remove the merging of these dimensions to granularly meter usage of each resource in a multidimensional scheme. ### Mechanism This mechanism aims to maintain a target gas consumption $T$ per second and adjusts the fee based on the excess gas consumption $x$, defined as the difference between the current gas consumption and $T$. Prior to the activation of this mechanism, $x$ is initialized: $$x = 0$$ At the start of building/executing block $b$, $x$ is updated: $$x = \max(x - T \cdot \Delta{t}, 0)$$ Where $\Delta{t}$ is the number of seconds between $b$'s block timestamp and $b$'s parent's block timestamp. 
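Before moving on to the price formula, the two formulas so far can be made concrete with a small sketch. The transaction values below are hypothetical, chosen only to illustrate the arithmetic:

```go
package main

import "fmt"

// gasConsumed implements G = B + 1000*R + 1000*W + 4*C from above.
func gasConsumed(bandwidthBytes, reads, writes, computeMicros uint64) uint64 {
	return bandwidthBytes + 1000*reads + 1000*writes + 4*computeMicros
}

// updateExcess implements x = max(x - T*dt, 0), the decay applied at the
// start of building/executing a block.
func updateExcess(x, target, deltaSeconds uint64) uint64 {
	decay := target * deltaSeconds
	if decay > x {
		return 0
	}
	return x - decay
}

func main() {
	// Hypothetical transaction: 300 bytes, 5 reads, 3 writes, 250 µs of compute.
	fmt.Println("gas consumed:", gasConsumed(300, 5, 3, 250)) // 300 + 5000 + 3000 + 1000 = 9300

	// Excess gas after a 2-second gap at a target of 50,000 gas/s.
	fmt.Println("excess:", updateExcess(120_000, 50_000, 2)) // 120000 - 100000 = 20000
}
```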
The gas price for block $b$ is: $$M \cdot \exp\left(\frac{x}{K}\right)$$ Where: - $M$ is the minimum gas price - $\exp\left(x\right)$ is an approximation of $e^x$ following the EIP-4844 specification ```python # Approximates factor * e ** (numerator / denominator) using Taylor expansion def fake_exponential(factor: int, numerator: int, denominator: int) -> int: i = 1 output = 0 numerator_accum = factor * denominator while numerator_accum > 0: output += numerator_accum numerator_accum = (numerator_accum * numerator) // (denominator * i) i += 1 return output // denominator ``` - $K$ is a constant to control the rate of change of the gas price After processing block $b$, $x$ is updated with the total gas consumed in the block $G$: $$x = x + G$$ Whenever $x$ increases by $K$, the gas price increases by a factor of `~2.7`. If the gas price gets too expensive, average gas consumption drops, and $x$ starts decreasing, dropping the price. The gas price constantly adjusts to make sure that, on average, the blockchain consumes $T$ gas per second. A [token bucket](https://en.wikipedia.org/wiki/Token_bucket) is employed to meter the maximum rate of gas consumption. Define $C$ as the capacity of the bucket, $R$ as the amount of gas to add to the bucket per second, and $r$ as the amount of gas currently in the bucket. Prior to the activation of this mechanism, $r$ is initialized: $$r = 0$$ At the beginning of processing block $b$, $r$ is set: $$r = \min\left(r + R \cdot \Delta{t}, C\right)$$ Where $\Delta{t}$ is the number of seconds between $b$'s block timestamp and $b$'s parent's block timestamp. The maximum gas consumed in a given $\Delta{t}$ is $r + R \cdot \Delta{t}$. The upper bound across all $\Delta{t}$ is $C + R \cdot \Delta{t}$. After processing block $b$, the total gas consumed in $b$, or $G$, will be known. If $G \gt r$, $b$ is considered an invalid block. 
If $b$ is a valid block, $r$ is updated: $$r = r - G$$ A block gas limit does not need to be set as it is implicitly derived from $r$. The parameters at activation are: | Parameter | P-Chain Configuration| | - | - | | $T$ - target gas consumed per second | 50,000 | | $M$ - minimum gas price | 1 nAVAX | | $K$ - gas price update constant | 2_164_043 | | $C$ - maximum gas capacity | 1,000,000 | | $R$ - gas capacity added per second | 100,000 | $K$ was chosen such that at sustained maximum capacity ($R=100,000$ gas/second), the fee rate will double every ~30 seconds. As the network gains capacity to handle additional load, this algorithm can be tuned to increase the gas consumption rate. #### A note on $e^x$ There is a subtle reason why an exponential adjustment function was chosen: The adjustment function should be _equally_ reactive irrespective of the actual fee. Define $b_n$ as the current block's gas fee, $b_{n+1}$ as the next block's gas fee, and $x$ as the excess gas consumption. Let's use a linear adjustment function: $$b_{n+1} = b_n + 10x$$ Assume $b_n = 100$ and the current block is 1 unit above target utilization, or $x = 1$. Then, $b_{n+1} = 100 + 10 \cdot 1 = 110$, an increase of `10%`. If instead $b_n = 10,000$, $b_{n+1} = 10,000 + 10 \cdot 1 = 10,010$, an increase of `0.1%`. The fee is _less_ reactive as the fee increases. This is because the rate of change _does not scale_ with $x$. Now, let's use an exponential adjustment function: $$b_{n+1} = b_n \cdot e^x$$ Assume $b_n = 100$ and the current block is 1 unit above target utilization, or $x = 1$. Then, $b_{n+1} = 100 \cdot e^1 \approx 271.828$, an increase of `171%`. If instead $b_n = 10,000$, $b_{n+1} = 10,000 \cdot e^1 \approx 27,182.8$, an increase of `171%` again. The fee is _equally_ reactive as the fee increases. This is because the rate of change _scales_ with $x$. 
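The integer approximation is straightforward to port. The sketch below translates `fake_exponential` to Go and checks the property stated earlier: with the activation parameters, raising $x$ by exactly $K$ multiplies the price by roughly $e \approx 2.718$. The 1e6 scaling factor is not part of the specification; it is only there to give the integer output enough resolution:

```go
package main

import "fmt"

// fakeExponential is a Go port of the integer-only fake_exponential
// helper shown above: a Taylor expansion of
// factor * e**(numerator/denominator), per the EIP-4844 specification.
func fakeExponential(factor, numerator, denominator int64) int64 {
	var (
		i      int64 = 1
		output int64 = 0
		accum        = factor * denominator
	)
	for accum > 0 {
		output += accum
		accum = (accum * numerator) / (denominator * i)
		i++
	}
	return output / denominator
}

func main() {
	// K from the parameter table: the gas price update constant.
	const K = 2_164_043
	// Illustrative scaling of the minimum price, not part of the mechanism.
	const scaledM = 1_000_000

	atZero := fakeExponential(scaledM, 0, K) // exactly scaledM
	atK := fakeExponential(scaledM, K, K)    // ~e * scaledM

	fmt.Println("price at x=0:", atZero)
	fmt.Println("price at x=K:", atK)
	fmt.Printf("ratio: %.3f\n", float64(atK)/float64(atZero)) // ~2.718
}
```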
### Block Building Procedure When a transaction is constructed on the P-Chain, the amount of $AVAX burned is given by `sum($AVAX outputs) - sum($AVAX inputs)`. The amount of gas consumed by the transaction can be deterministically calculated after construction. Dividing the amount of $AVAX burned by the amount of gas consumed yields the maximum gas price that the transaction can pay. Instead of using a FIFO queue for the mempool (like the P-Chain does now), the mempool should use a priority queue ordered by the maximum gas price of each transaction. This ensures that higher paying transactions are included first. ## Backwards Compatibility Modification of a fee mechanism is an execution change and requires a mandatory upgrade for activation. Implementers must take care to not alter the execution behavior prior to activation. After this ACP is activated, any transaction issued on the P-Chain must account for the fee mechanism defined above. Users are responsible for reconstructing their transactions to include a larger fee for quicker inclusion when the fee increases. ## Reference Implementation ACP-103 was implemented into AvalancheGo behind the `Etna` upgrade flag. The full body of work can be found tagged with the `acp103` label [here](https://github.com/ava-labs/avalanchego/pulls?q=is%3Apr+label%3Aacp103). ## Security Considerations The current fixed fee mechanism on the X-Chain and P-Chain does not robustly handle spikes in load. Migrating the P-Chain to a dynamic fee mechanism will ensure that any additional load caused by demand for new P-Chain features (such as those introduced in [ACP-77](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/77-reinventing-subnets/README.md)) is properly priced given allotted processing capacity. The X-Chain, in comparison, currently has significantly lower usage, making it less likely for the demand for blockspace on it to exceed the current static fee rates. 
If necessary or desired, a future ACP can reuse the mechanism introduced here to add dynamic fee rates to the X-Chain.

## Acknowledgements

Thank you to [@aaronbuchwald](https://github.com/aaronbuchwald) and [@patrick-ogrady](https://github.com/patrick-ogrady) for providing feedback prior to publication.

Thank you to the authors of [EIP-4844](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-4844.md) for creating the fee design that inspired the above mechanism.

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

# ACP-108: EVM Event Importing (/docs/acps/108-evm-event-importing)

---
title: "ACP-108: EVM Event Importing"
description: "Details for Avalanche Community Proposal 108: EVM Event Importing"
edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/108-evm-event-importing/README.md
---

| ACP | 108 |
| :--- | :--- |
| **Title** | EVM Event Importing Standard |
| **Author(s)** | Michael Kaplan ([@mkaplan13](https://github.com/mkaplan13)) |
| **Status** | Proposed ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/114)) |
| **Track** | Best Practices Track |

## Abstract

Defines a standard smart contract interface and abstract implementation for importing EVM events from any blockchain within Avalanche using [Avalanche Warp Messaging](https://docs.avax.network/build/cross-chain/awm/overview).

## Motivation

The implementation of Avalanche Warp Messaging within `coreth` and `subnet-evm` exposes a [mechanism for getting authenticated hashes of blocks](https://github.com/ava-labs/subnet-evm/blob/master/contracts/contracts/interfaces/IWarpMessenger.sol#L43) that have been accepted on blockchains within Avalanche. Proofs of acceptance of blocks, such as those introduced in [ACP-75](https://github.com/avalanche-foundation/ACPs/tree/main/ACPs/75-acceptance-proofs), can be used to prove arbitrary events and state changes that occurred in those blocks.
However, there is currently no clear standard for using authenticated block hashes in smart contracts within Avalanche, making it difficult to build applications that leverage this mechanism. In order to make effective use of authenticated block hashes, contracts must be provided encoded block headers that match the authenticated block hashes, as well as Merkle proofs that are verified against the state or receipts root contained in the block header.

With a standard interface and abstract contract implementation that handles the authentication of block hashes and verification of Merkle proofs, smart contract developers on Avalanche will be able to much more easily create applications that leverage data from other Avalanche blockchains. These types of cross-chain applications do not require any direct interaction on the source chain.

## Specification

### Event Importing Interface

We propose that smart contracts importing EVM events emitted by other blockchains within Avalanche implement the following interface.

#### Methods

Imports the EVM event uniquely identified by the source blockchain ID, block header, transaction index, and log index.

The `blockHeader` must be validated to match the authenticated block hash from the `sourceBlockchainID`. The specification for EVM block headers can be found [here](https://github.com/ava-labs/subnet-evm/blob/master/core/types/block.go#L73).

The `txIndex` identifies the key of the receipts trie of the given block header that the `receiptProof` must prove inclusion of. The value obtained by verifying the `receiptProof` for that key is the encoded transaction receipt. The specification for EVM transaction receipts can be found [here](https://github.com/ava-labs/subnet-evm/blob/master/core/types/receipt.go#L62).

The `logIndex` identifies which event log from the given transaction receipt is to be imported.

Must emit an `EventImported` event upon success.
```solidity
function importEvent(
    bytes32 sourceBlockchainID,
    bytes calldata blockHeader,
    uint256 txIndex,
    bytes[] calldata receiptProof,
    uint256 logIndex
) external;
```

This interface does not require that the Warp precompile is used to authenticate block hashes. Implementations could:

- Use the Warp precompile to authenticate block hashes provided directly in the transaction calling `importEvent`.
- Check previously authenticated block hashes using an external contract. This:
  - Allows a block hash to be authenticated once and used in arbitrarily many transactions afterwards.
  - Allows alternative authentication mechanisms to be used, such as trusted oracles.

#### Events

Must trigger when an EVM event is imported.

```solidity
event EventImported(
    bytes32 indexed sourceBlockchainID,
    bytes32 indexed sourceBlockHash,
    address indexed loggerAddress,
    uint256 txIndex,
    uint256 logIndex
);
```

### Event Importing Abstract Contract

Applications importing EVM events emitted by other blockchains within Avalanche should be able to use a standard abstract implementation of the `importEvent` interface. This abstract implementation must handle:

- Authenticating block hashes from other chains.
- Verifying that the encoded `blockHeader` matches the imported block hash.
- Verifying the Merkle `receiptProof` for the given `txIndex` against the receipt root of the provided `blockHeader`.
- Decoding the event log identified by `logIndex` from the receipt obtained from verifying the `receiptProof`.

As noted above, implementations could directly use the Warp precompile's `getVerifiedWarpBlockHash` interface method for authenticating block hashes, as is done in the reference implementation [here](https://github.com/ava-labs/event-importer-poc/blob/main/contracts/src/EventImporter.sol#L51). Alternatively, implementations could use the `sourceBlockchainID` and `blockHeader` provided in the parameters to check with an external contract that the block has been accepted on the given chain.
The specifics of such an external contract are outside the scope of this ACP, but for illustrative purposes, it could look along the lines of:

```solidity
bool valid = blockHashRegistry.checkAuthenticatedBlockHash(
    sourceBlockchainID,
    keccak256(blockHeader)
);
require(valid, "Invalid block header");
```

Inheriting contracts should only need to define the logic to be executed when an event is imported. This is done by providing an implementation of the following internal function, called by `importEvent`.

```solidity
function _onEventImport(EVMEventInfo memory eventInfo) internal virtual;
```

Where the `EVMEventInfo` struct is defined as:

```solidity
struct EVMLog {
    address loggerAddress;
    bytes32[] topics;
    bytes data;
}

struct EVMEventInfo {
    bytes32 blockchainID;
    uint256 blockNumber;
    uint256 txIndex;
    uint256 logIndex;
    EVMLog log;
}
```

The `EVMLog` struct is meant to match the `Log` type definition in the EVM [here](https://github.com/ava-labs/subnet-evm/blob/master/core/types/log.go#L39).

## Reference Implementation

See the reference implementation on [Github here](https://github.com/ava-labs/event-importer-poc). In addition to implementing the interface and abstract contract described above, the reference implementation shows how transactions can be constructed to import events using Warp block hash signatures.

## Open Questions

See [here](https://github.com/ava-labs/event-importer-poc?tab=readme-ov-file#open-questions-and-considerations).

## Security Considerations

The correctness of a contract using block hashes to prove that a specific event was emitted within that block depends on the correctness of:

1. The mechanism for authenticating that a block hash was finalized on another blockchain.
2. The Merkle proof validation library used to prove that a specific transaction receipt was included in the given block.
For considerations on using Avalanche Warp Messaging to authenticate block hashes, see [here](https://github.com/avalanche-foundation/ACPs/tree/main/ACPs/30-avalanche-warp-x-evm#security-considerations). To improve confidence in the correctness of the Merkle proof validation used in implementations, well-audited and widely used libraries should be used.

## Acknowledgements

Using Merkle proofs to verify events/state against root hashes is not a new idea. Protocols such as [IBC](https://ibc.cosmos.network/v8/), [Rainbow Bridge](https://github.com/Near-One/rainbow-bridge), and [LayerZero](https://layerzero.network/publications/LayerZero_Whitepaper_V1.1.0.pdf), among others, have previously suggested using Merkle proofs in a similar manner. Thanks to [@aaronbuchwald](https://github.com/aaronbuchwald) for proposing that the `getVerifiedWarpBlockHash` interface be included in the AWM implementation within Avalanche EVMs, which enables this type of use case.

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

# ACP-113: Provable Randomness (/docs/acps/113-provable-randomness)

---
title: "ACP-113: Provable Randomness"
description: "Details for Avalanche Community Proposal 113: Provable Randomness"
edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/113-provable-randomness/README.md
---

| ACP | 113 |
| :--- | :--- |
| **Title** | Provable Virtual Machine Randomness |
| **Author(s)** | Tsachi Herman [http://github.com/tsachiherman](http://github.com/tsachiherman) |
| **Status** | Stale ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/142)) |
| **Track** | Standards |

## Future Work

This ACP was marked as stale due to its documented security concerns. In order to safely utilize randomness produced by this mechanism, the consumer of the randomness must:

1. Define a security threshold `x`, which is the maximum number of consecutive blocks that can be proposed by a malicious entity.
2. After committing to a request for randomness, wait for `x` blocks.
3. After waiting for `x` blocks, verify that the randomness was not biased during those `x` blocks.
4. If the randomness was biased, it would be insufficient to request randomness again, as this would allow the malicious block producer to discard any randomness that it did not like. Instead, the consumer of the randomness must be able to terminate the request for randomness in such a way that no participant would desire the outcome. Griefing attacks would likely result from such a construction.

### Alternative Mechanisms

There are alternative mechanisms that would not result in such security concerns, such as:

- Utilizing a deterministic threshold signature scheme to finalize a block in consensus, which would allow the threshold signature to be used during the execution of the block.
- Utilizing threshold commit-reveal schemes that guarantee that committed values will always be revealed in a timely manner.

However, these mechanisms are likely too costly to be introduced into the Avalanche Primary Network due to its validator set size. It is left to a future ACP to specify the implementation of one of these alternative schemes for L1 networks with smaller validator sets.

## Abstract

Avalanche offers developers flexibility through subnets and EVM-compatible smart contracts. However, the platform's deterministic block execution limits the use of traditional random number generators within these contracts. To address this, a mechanism is proposed to generate verifiable, non-cryptographic random number seeds on the Avalanche platform. This method ensures uniformity while allowing developers to build more versatile applications.
## Motivation

Reliable randomness is essential for building exciting applications on Avalanche. Games, participant selection, dynamic content, supply chain management, and decentralized services all rely on unpredictable outcomes to function fairly. Randomness also fuels functionalities like unique identifiers and simulations. Without a secure way to generate random numbers within smart contracts, Avalanche applications become limited.

Avalanche's traditional reliance on external oracles for randomness creates complexity and bottlenecks. These oracles inflate costs, hinder transaction speed, and are cumbersome to integrate. As Avalanche scales to more Subnets, this dependence on external systems becomes increasingly unsustainable.

A solution for verifiable random number generation within Avalanche solves these problems. It provides fair randomness functionality across the chains, at no additional cost. This paves the way for a more efficient Avalanche ecosystem.

## Specification

### Changes Summary

The existing Avalanche protocol breaks block building into two parts: external and internal. The external block is the Snowman++ block, whereas the internal block is the actual virtual machine block.

To support randomness, a BLS-based VRF implementation is used that recursively signs its own signatures as its messages. Since BLS signatures are deterministic, they provide a great way to construct a reliable VRF. For proposers that do not have a BLS key associated with their node, the hash of the signature from the previous round is used in place of their signature. In order to bootstrap the signature chain, a missing signature would be replaced with a byte slice that is the hash product of a verifiable and trustable seed.

The changes proposed here would affect the way blocks are validated. Therefore, when this change gets implemented, it needs to be deployed as a mandatory upgrade.
```
+-----------------------+            +-----------------------+
|        Block n        | <-------- |       Block n+1        |
+-----------------------+            +-----------------------+
| VRF-Sig(n)            |           | VRF-Sig(n+1)           |
| ...                   |           | ...                    |
+-----------------------+            +-----------------------+

+-----------------------+            +-----------------------+
|         VM n          |           |        VM n+1          |
+-----------------------+            +-----------------------+
| VRF-Out(n)            |           | VRF-Out(n+1)           |
+-----------------------+            +-----------------------+

VRF-Sig(n+1) = Sign(VRF-Sig(n), Block n+1 proposer's BLS key)
VRF-Out(n) = Hash(VRF-Sig(n))
```

### Changes Details

#### Step 1. Adding BLS signature to proposed blocks

```go
type statelessUnsignedBlock struct {
	…
	vrfSig []byte `serialize:"true"`
}
```

#### Step 2. Populate signature

When a block proposer attempts to build a new block, it needs to use the parent block as a reference. The `vrfSig` field within each block is daisy-chained to the `vrfSig` field from its parent block.

Populating the `vrfSig` would follow this logic:

1. The current proposer has a BLS key
    a. If the parent block has an empty `vrfSig` signature, the proposer would sign the bootStrappingBlockSignature with its BLS key. See the bootStrappingBlockSignature details below. This is the base case.
    b. If the parent block does not have an empty `vrfSig` signature, that signature would be signed using the proposer's BLS key.
2. The current proposer does not have a BLS key
    a. If the parent block has a non-empty `vrfSig` signature, the proposer would set the proposed block's `vrfSig` to the 32-byte hash result of the following preimage:

```
+-------------------------+----------+------------+
| prefix :                | [8]byte  | "rng-derv" |
+-------------------------+----------+------------+
| vrfSig :                | [96]byte | 96 bytes   |
+-------------------------+----------+------------+
```

    b. If the parent block has an empty `vrfSig` signature, the proposer would leave the `vrfSig` on the new block empty.
The bootStrappingBlockSignature that would be used above is the hash of the following preimage:

```
+-----------------------+----------+------------+
| prefix :              | [8]byte  | "rng-root" |
+-----------------------+----------+------------+
| networkID:            | uint32   | 4 bytes    |
+-----------------------+----------+------------+
| chainID :             | [32]byte | 32 bytes   |
+-----------------------+----------+------------+
```

#### Step 3. Signature Verification

This signature verification would perform the exact opposite of what was done in step 2, and would verify the cryptographic correctness of the operation.

Validating the `vrfSig` would follow this logic:

1. The proposer has a BLS key
    a. If the parent block's `vrfSig` was non-empty, then the `vrfSig` in the proposed block is verified to be a valid BLS signature of the parent block's `vrfSig` value for the proposer's BLS public key.
    b. If the parent block's `vrfSig` was empty, then a BLS signature verification of the proposed block's `vrfSig` against the proposer's BLS public key and bootStrappingBlockSignature would take place.
2. The proposer does not have a BLS key
    a. If the parent block had a non-empty `vrfSig`, then the hash of the preimage (as described above) would be compared against the proposed `vrfSig`.
    b. If the parent block has an empty `vrfSig`, then the proposer's `vrfSig` would be validated to be empty.

#### Step 4. Extract the VRF Out and pass to block builders

Calculating the VRF Out would be done by hashing the preimage of the following struct:

```
+-----------------------+----------+------------+
| prefix :              | [8]byte  | "vrfout "  |
+-----------------------+----------+------------+
| vrfout:               | [96]byte | 96 bytes   |
+-----------------------+----------+------------+
```

Before calculating the VRF Out, the method needs to explicitly check for the case where the `vrfSig` is empty. In that case, the output of the VRF Out needs to be empty as well.
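The preimage layouts above can be sketched in Go. This is a non-normative illustration: `sha256` stands in for the hash function (this excerpt does not pin one down), and the zero-padding of the 8-byte prefix and the big-endian encoding of `networkID` are assumptions, not part of the specification.

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// hashPreimage builds prefix (zero-padded to 8 bytes; padding is an
// assumption) || payload, and hashes it. sha256 is a stand-in hash.
func hashPreimage(prefix string, payload []byte) [32]byte {
	p := make([]byte, 8)
	copy(p, prefix)
	return sha256.Sum256(append(p, payload...))
}

// bootstrappingBlockSignature hashes "rng-root" || networkID || chainID,
// mirroring the bootstrap preimage table above.
func bootstrappingBlockSignature(networkID uint32, chainID [32]byte) [32]byte {
	payload := make([]byte, 4+32)
	binary.BigEndian.PutUint32(payload[:4], networkID)
	copy(payload[4:], chainID[:])
	return hashPreimage("rng-root", payload)
}

// vrfOut hashes "vrfout " || vrfSig for a non-empty 96-byte signature;
// per Step 4, an empty vrfSig yields an empty output.
func vrfOut(vrfSig []byte) []byte {
	if len(vrfSig) == 0 {
		return nil
	}
	h := hashPreimage("vrfout ", vrfSig)
	return h[:]
}

func main() {
	var chainID [32]byte
	seed := bootstrappingBlockSignature(1, chainID)
	sig := make([]byte, 96) // placeholder for a 96-byte BLS signature
	fmt.Printf("bootstrap seed prefix: %x\n", seed[:4])
	fmt.Printf("vrf out length: %d\n", len(vrfOut(sig)))
}
```

Note how the empty-signature base case propagates: until the first proposer with a BLS key signs the bootstrap seed, `vrfOut` stays empty, which matches the "all zeros initially" behavior described under Backwards Compatibility.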
## Backwards Compatibility

The above design takes backward compatibility into consideration. The chain would keep working as before, and at some point would have the newly added `vrfSig` populated. From a usage perspective, each VM would need to make its own decision on whether it should use the newly provided random seed. Initially, this random seed would be all zeros, and it would get populated once the feature has rolled out to a sufficient number of nodes. Also, as mentioned in the summary, these changes would necessitate a network upgrade.

## Reference Implementation

A full reference implementation has not been provided yet. It will be provided once this ACP is considered `Implementable`.

## Security Considerations

Virtual machine random seeds, while appearing to offer a source of randomness within smart contracts, fall short when it comes to cryptographic security. Here's a breakdown of the critical issues:

- Limited Permutation Space: The number of possible random values is derived from the number of validators. While no validator, nor a validator set, would be able to manipulate the randomness into any single value, a nefarious actor (or actors) might be able to exclude specific numbers.
- Predictability Window: The seed value might be accessible to other parties before the smart contract can benefit from its uniqueness. This predictability window creates a vulnerability. An attacker could potentially observe the seed generation process and predict the sequence of "random" numbers it will produce, compromising the entire cryptographic foundation of your smart contract.

Despite these limitations appearing severe, attackers face significant hurdles to exploit them. First, the attacker can't control the random number, limiting the attack's effectiveness to how that number is used. Second, a substantial amount of AVAX is needed. And last, such an attack would likely decrease AVAX's value, hurting the attacker financially.
One potential attack vector involves collusion among multiple proposers to manipulate the random number selection. These attackers could strategically choose to propose or abstain from proposing blocks, effectively introducing a bias into the system. By working together, they could potentially increase their chances of generating a random number favorable to their goals. However, the effectiveness of this attack is significantly limited for the following reasons:

- Limited options: While colluding attackers expand their potential random number choices, the overall pool remains immense (2^256 possibilities). This drastically reduces their ability to target a specific value.
- Protocol's countermeasure: The protocol automatically eliminates any bias introduced by previous proposals once an honest proposer submits their block.
- Detectability: Exploitation of this attack vector is readily identifiable. A successful attack necessitates coordinated collusion among multiple nodes to synchronize their proposer slots for a specific block height (the proposer slot order is known in advance). Subsequent to this alignment, a designated node constructs the block proposal. The network maintains a record of the proposer slot utilized for each block. A value of zero for the proposer slot unequivocally indicates the absence of an exploit. Increasing values correlate with a heightened risk of exploitation. It is important to note that non-zero slot numbers may also arise from transient network disturbances.

While this attack is theoretically possible, its practical impact is negligible due to the vast number of potential outcomes and the protocol's inherent safeguards.

## Open Questions

### How would the proposed changes impact proposer selection and its inherent bias?

The proposed modifications will not influence the selection process for block proposers. Proposers retain the ability to determine which transactions are included in a block.
This inherent proposer bias remains unchanged and is unaffected by the proposed changes.

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

# ACP-118: Warp Signature Request (/docs/acps/118-warp-signature-request)

---
title: "ACP-118: Warp Signature Request"
description: "Details for Avalanche Community Proposal 118: Warp Signature Request"
edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/118-warp-signature-request/README.md
---

| ACP | 118 |
| :--- | :--- |
| **Title** | Warp Signature Interface Standard |
| **Author(s)** | Cam Schultz ([@cam-schultz](https://github.com/cam-schultz)) |
| **Status** | Activated ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/123)) |
| **Track** | Best Practices Track |

## Abstract

Proposes a standard [AppRequest](https://github.com/ava-labs/avalanchego/blob/master/proto/p2p/p2p.proto#L385) payload format type for requesting Warp signatures over provided bytes, such that signatures may be requested in a VM-agnostic manner. To make this concrete, this standard type should be defined in AvalancheGo such that VMs can import it at the source code level. This will simplify signature aggregator implementations by allowing them to depend only on AvalancheGo for message construction, rather than individual VM codecs.

## Motivation

Warp message signatures consist of an aggregate BLS signature composed of the individual signatures of a subnet's validators. Individual signatures need to be retrievable by the party that wishes to construct an aggregate signature.
At present, this is left to VMs to implement, as is the case with [Subnet EVM](https://github.com/ava-labs/subnet-evm/blob/v0.6.7/plugin/evm/message/signature_request.go#20) and [Coreth](https://github.com/ava-labs/coreth/blob/v0.13.6-rc.0/plugin/evm/message/signature_request.go#L20). This creates friction in applications that are intended to operate across many VMs (or distinct implementations of the same VM). As an example, the reference Warp message relayer implementation, [awm-relayer](https://github.com/ava-labs/awm-relayer), fetches individual signatures from validators and aggregates them before sending the Warp message to its destination chain for verification. However, Subnet EVM and Coreth have distinct codecs, requiring the relayer to [switch](https://github.com/ava-labs/awm-relayer/blob/v1.4.0-rc.0/relayer/application_relayer.go#L372) according to the target codebase.

Another example is ACP-75, which aims to implement acceptance proofs using Warp. The signature aggregation mechanism is not [specified](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/75-acceptance-proofs/README.md#signature-aggregation), which is a blocker for that ACP to be marked implementable.

Standardizing the Warp Signature Request interface by defining it as a format for `AppRequest` message payloads in AvalancheGo would simplify the implementation of ACP-75, and streamline signature aggregation for out-of-protocol services such as Warp message relayers.

## Specification

We propose the following types, implemented as Protobuf types that may be decoded from the `AppRequest`/`AppResponse` `app_bytes` field. By way of example, this approach is currently used to [implement](https://github.com/ava-labs/avalanchego/blob/v1.11.10-status-removal/proto/sdk/sdk.proto#7) and [parse](https://github.com/ava-labs/avalanchego/blob/v1.11.10-status-removal/network/p2p/gossip/message.go#22) gossip `AppRequest` types.

- `SignatureRequest` includes two fields.
`message` specifies the payload that the returned signature should correspond to, namely a serialized unsigned Warp message. `justification` specifies arbitrary data that the queried node may use to decide whether or not it is willing to sign `message`. `justification` may not be required by every VM implementation, but `message` should always contain the bytes to be signed. It is up to the VM to define the validity requirements for the `message` and `justification` payloads.

```protobuf
message SignatureRequest {
    bytes message = 1;
    bytes justification = 2;
}
```

- `SignatureResponse` is the corresponding `AppResponse` type that returns the requested signature.

```protobuf
message SignatureResponse {
    bytes signature = 1;
}
```

### Handlers

For each of the above types, VMs must implement corresponding `AppRequest` and `AppResponse` handlers. The `AppRequest` handler should be [registered](https://github.com/ava-labs/avalanchego/blob/v1.11.10-status-removal/network/p2p/network.go#L173) using the canonical handler ID, defined as `2`.

## Use Cases

Generally speaking, `SignatureRequest` can be used to request a signature over a Warp message by serializing the unsigned Warp message into `message`, and populating `justification` as needed.

### Sign a known Warp Message

Subnet EVM and Coreth store messages that have been seen (i.e. on-chain messages sent through the [Warp Precompile](https://github.com/ava-labs/subnet-evm/tree/v0.6.7/precompile/contracts/warp) and [off-chain](https://github.com/ava-labs/subnet-evm/blob/v0.6.7/plugin/evm/config.go#L226) Warp messages) such that a signature over that message can be provided on request. `SignatureRequest` can be used for this case by specifying the Warp message in `message`. The queried node may then look up the Warp message in its database and return the signature. In this case, `justification` is not needed.
### Attest to an on-chain event

Subnet EVM and Coreth also support attesting to block hashes via Warp, by serving signature requests made using the following `AppRequest` type:

```
type BlockSignatureRequest struct {
	BlockID ids.ID
}
```

`SignatureRequest` can achieve this by specifying an unsigned Warp message with the `BlockID` as the payload, and serializing that message into `message`. `justification` may optionally be used to provide additional context, such as the block height of the given block ID.

### Confirm that an event did not occur

With [ACP-77](https://github.com/avalanche-foundation/ACPs/tree/main/ACPs/77-reinventing-subnets), Subnets will have the ability to manage their own validator sets. The Warp message payload contained in a `RegisterSubnetValidatorTx` includes an `expiry`, after which the specified validation ID (i.e. a unique hash over the Subnet ID, node ID, stake weight, and expiry) becomes invalid. The Subnet needs to know that this validation ID is expired so that it can keep its locally tracked validator set in sync with the P-Chain. We also assume that the P-Chain will not persist expired or invalid validation IDs.

We can use `SignatureRequest` to construct a Warp message attesting that the validation ID expired. We do so by serializing an unsigned Warp message containing the validation ID into `message`, and providing the validation ID hash preimage in `justification` for the P-Chain to reconstruct the expired validation ID.

## Security Considerations

VMs have full latitude when implementing `SignatureRequest` handlers, and should take careful consideration of what `message` payloads their implementation should be willing to sign, given a `justification`. Some considerations include, but are not limited to:

- Input validation. Handlers should validate `message` and `justification` payloads to ensure that they decode to coherent types, and that they contain only expected data.
- Signature DoS.
AvalancheGo's peer-to-peer networking stack implements message rate limiting to mitigate the risk of DoS, but VMs should also consider the cost of parsing and signing a `message` payload.
- Payload collision. `message` payloads should be implemented as distinct types that do not overlap with one another within the context of signed Warp messages from the VM. For instance, a `message` payload specifying a 32-byte hash may be interpreted as a transaction hash, a block hash, or a blockchain ID.

## Backwards Compatibility

This change is backwards compatible for VMs, as nodes running older versions that do not support the new message types will simply drop incoming messages.

## Reference Implementation

A reference implementation containing the Protobuf types and the canonical handler ID can be found [here](https://github.com/ava-labs/avalanchego/pull/3218).

## Acknowledgements

Thanks to @joshua-kim, @iansuvak, @aaronbuchwald, @michaelkaplan13, and @StephenButtolph for discussion and feedback on this ACP.

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

# ACP-125: Basefee Reduction (/docs/acps/125-basefee-reduction)

---
title: "ACP-125: Basefee Reduction"
description: "Details for Avalanche Community Proposal 125: Basefee Reduction"
edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/125-basefee-reduction/README.md
---

| ACP | 125 |
| :--- | :--- |
| **Title** | Reduce C-Chain minimum base fee from 25 nAVAX to 1 nAVAX |
| **Author(s)** | Stephen Buttolph ([@StephenButtolph](https://github.com/StephenButtolph)), Darioush Jalali ([@darioush](https://github.com/darioush)) |
| **Status** | Activated ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/127)) |
| **Track** | Standards |

## Abstract

Reduce the minimum base fee on the Avalanche C-Chain from 25 nAVAX to 1 nAVAX.
## Motivation

With dynamic fees, the gas price is supposed to be the result of a continuous auction such that the consumed gas per second converges to the target gas usage per second. When dynamic fees were first introduced, safeguards were added to ensure the mechanism worked as intended, such as a relatively high minimum gas price and a maximum gas price. The maximum gas price has since been entirely removed. The minimum gas price has been reduced significantly. However, the base fee is often observed pinned to this minimum. This shows that it is higher than what the market demands, and it is therefore artificially reducing network usage.

## Specification

The dynamic fee calculation currently must enforce a minimum base fee of 25 nAVAX. This change proposes reducing the minimum base fee to 1 nAVAX upon the next network upgrade activation.

## Backwards Compatibility

This change modifies the consensus rules for the C-Chain, and therefore requires a network upgrade.

## Reference Implementation

A draft implementation of this ACP for the coreth VM can be found [here](https://github.com/ava-labs/coreth/pull/604/files).

## Security Considerations

Lower gas costs may increase state bloat. However, we note that the dynamic fee algorithm responded appropriately during periods of high use (such as Dec. 2023), which gives reasonable confidence that enforcing a 25 nAVAX minimum fee is no longer necessary.

## Open Questions

N/A

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ACP-13: Subnet Only Validators (/docs/acps/13-subnet-only-validators)

---
title: "ACP-13: Subnet Only Validators"
description: "Details for Avalanche Community Proposal 13: Subnet Only Validators"
edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/13-subnet-only-validators/README.md
---

| ACP | 13 |
| :--- | :--- |
| **Title** | Subnet-Only Validators (SOVs) |
| **Author(s)** | Patrick O'Grady ([contact@patrickogrady.xyz](mailto:contact@patrickogrady.xyz)) |
| **Status** | Stale |
| **Track** | Standards |
| **Superseded-By** | [ACP-77](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/77-reinventing-subnets/README.md) |

## Abstract

Introduce a new type of staker, Subnet-Only Validators (SOVs), that can validate an Avalanche Subnet and participate in Avalanche Warp Messaging (AWM) without syncing or becoming a Validator on the Primary Network. Require SOVs to pay a refundable fee of 500 $AVAX on the P-Chain to register as a Subnet Validator instead of staking at least 2000 $AVAX, the minimum requirement to become a Primary Network Validator. Preview a future transition to Pay-As-You-Go Subnet Validation and $AVAX-Augmented Subnet Security.

_This ACP does not modify/deprecate the existing Subnet Validation semantics for Primary Network Validators._

## Motivation

Each node operator must stake at least 2000 $AVAX ($20k at the time of writing) to first become a Primary Network Validator before they qualify to become a Subnet Validator. Most Subnets aim to launch with at least 8 Subnet Validators, which requires staking 16000 $AVAX ($160k at time of writing).
All Subnet Validators, to satisfy their role as Primary Network Validators, must also [allocate 8 AWS vCPU, 16 GB RAM, and 1 TB storage](https://github.com/ava-labs/avalanchego/blob/master/README.md#installation) to sync the entire Primary Network (X-Chain, P-Chain, and C-Chain) and participate in its consensus, in addition to whatever resources are required for each Subnet they are validating.

Avalanche Warp Messaging (AWM), the native interoperability mechanism for the Avalanche Network, provides a way for Subnets to communicate with each other/C-Chain without a trusted intermediary. Any Subnet Validator must be able to register a BLS key and participate in AWM, otherwise a Subnet may not be able to generate a BLS Multi-Signature with sufficient participating stake.

Regulated entities that are prohibited from validating permissionless, smart contract-enabled blockchains (like the C-Chain) can’t launch a Subnet because they can’t opt-out of Primary Network Validation. This deployment blocker prevents a large cohort of Real World Asset (RWA) issuers from bringing unique, valuable tokens to the Avalanche Ecosystem (that could move between C-Chain <-> Subnets using AWM/Teleporter).

A widely validated Subnet that is not properly metered could destabilize the Primary Network if usage spikes unexpectedly. Underprovisioned Primary Network Validators running such a Subnet may exit with an OOM exception, see degraded disk performance, or find it difficult to allocate CPU time to P/X/C-Chain validation. The inverse also holds for Subnets with the Primary Network (where some undefined behavior could bring a Subnet offline).

Although the fee paid to the Primary Network to operate a Subnet does not go up with the amount of activity on the Subnet, the fixed, upfront cost of setting up a Subnet Validator on the Primary Network deters new projects that prefer smaller, even variable, costs until demand is observed.
_Unlike L2s that pay some increasing fee (usually denominated in units per transaction byte) to an external chain for data availability and security as activity scales, Subnets provide their own security/data availability and the only cost operators must pay from processing more activity is the hardware cost of supporting additional load._

Elastic Subnets allow any community to weight Subnet Validation based on some staking token and reward Subnet Validators with high uptime with said staking token. However, there is no way for $AVAX holders on the Primary Network to augment the security of such Subnets.

## Specification

### Required Changes

1) Introduce a new type of staker, Subnet-Only Validators (SOVs), that can validate an Avalanche Subnet and participate in Avalanche Warp Messaging (AWM) without syncing or becoming a Validator on the Primary Network
2) Introduce a refundable fee (called a "lock") of 500 $AVAX that nodes must pay to become an SOV
3) Introduce a non-refundable fee of 0.1 $AVAX that SOVs must pay to become an SOV
4) Introduce a new transaction type on the P-Chain to register as an SOV (i.e. `AddSubnetOnlyValidatorTx`)
5) Add a mode to ANCs that allows SOVs to optionally disable full Primary Network verification (only need to verify P-Chain)
6) ANCs track IPs for SOVs to ensure Subnet Validators can find peers whether or not they are Primary Network Validators
7) Provide a guaranteed rate limiting allowance for SOVs like Primary Network Validators

Because SOVs do not validate the Primary Network, they will not be rewarded with $AVAX for "locking" the 500 $AVAX required to become an SOV. This enables people interested in validating Subnets to opt for a lower upfront $AVAX commitment and lower infrastructure costs instead of $AVAX rewards.
Additionally, SOVs will only be required to sync the P-Chain (not the X-Chain or C-Chain) to track any validator set changes in their Subnet and to support Cross-Subnet communication via AWM (see the "Primary Network Partial Sync" mode introduced in [Cortina 8](https://github.com/ava-labs/avalanchego/releases/tag/v1.10.8)). The lower resource requirement in this "minimal mode" will provide Subnets with greater flexibility of validation hardware requirements, as operators are not required to reserve any resources for C-Chain/X-Chain operation. If an SOV wishes to sync the entire Primary Network, it still can.

### Future Work

The previously described specification is a minimal, additive change to Subnet Validation semantics that prepares the Avalanche Network for a more flexible Subnet model. It alone, however, neither communicates this flexibility nor provides an alternative use of $AVAX that would have otherwise been used to create Subnet Validators. Below are two high-level ideas (Pay-As-You-Go Subnet Validation Registration Fees and $AVAX-Augmented Security) that highlight how this initial change could be extended in the future. If the Avalanche Community is interested in their adoption, they should each be proposed as a unique ACP where they can be properly specified. **These ideas are only suggestions for how the Avalanche Network could be modified in the future if this ACP is adopted. Supporting this ACP does not require supporting these ideas or committing to their rollout.**

#### Pay-As-You-Go Subnet Validation Registration Fees

_Transition Subnet Validator registration to a dynamically priced, continuously charged fee (that doesn't require locking large amounts of $AVAX upfront)._

While it would be possible to just transition to a lower required "lock" amount, many think it would be more competitive to transition to a dynamically priced, continuous payment mechanism to register as a Subnet Validator.
This new mechanism would target some $Y nAVAX fee that would be paid by each Subnet Validator per Subnet per second (pulling from a "Subnet Validator's Account") instead of requiring a large upfront lockup of $AVAX. The rate of nAVAX/second should be set by the demand for validating Subnets on Avalanche compared to some usage target per Subnet and across all Subnets. This rate should be locked for each Subnet Validation period to ensure operators are not subject to surprise costs if demand rises significantly over time. The optimization work outlined in [BLS Multi-Signature Voting](https://hackmd.io/@patrickogrady/100k-subnets#How-will-BLS-Multi-Signature-uptime-voting-work) should allow the minimum rate to be set as low as ~512-4096 nAVAX/second (or 1.3-10.6 $AVAX/month).

Fees paid to the Avalanche Network for PAYG could be burned, like the fees of all other P-Chain, X-Chain, and C-Chain transactions, or they could be partially rewarded to Primary Network Validators as a "boost" over the existing staking rewards. The nice byproduct of the latter approach is that it better aligns Primary Network Validators with the growth of Subnets.

#### $AVAX-Augmented Subnet Security

_Allow pledging unstaked $AVAX to Subnet Validators on Elastic Subnets that can be slashed if said Subnet Validator commits an attributable fault (i.e. proposes/signs conflicting blocks/AWM payloads). Reward locked $AVAX associated with Subnet Validators that were not slashed with Elastic Subnet staking rewards._

Currently, the only way to secure an Elastic Subnet is to stake its custom staking token (defined in the `TransformSubnetTx`). Many have requested the option to use $AVAX for this token; however, this could easily allow an adversary to take over small Elastic Subnets (where the amount of $AVAX staked may be much less than the circulating supply).
$AVAX-Augmented Subnet Security would allow anyone holding $AVAX to lock it to specific Subnet Validators and earn Elastic Subnet reward tokens for supporting honest participants. Recall, all stake management on the Avalanche Network (even for Subnets) occurs on the P-Chain. Thus, staked tokens ($AVAX and/or custom staking tokens used in Elastic Subnets) and stake weights (used for AWM verification) are secured by the full $AVAX stake of the Primary Network. $AVAX-Augmented Subnet Security, like staking, would be implemented on the P-Chain and enjoy the full security of the Primary Network. This approach means locking $AVAX occurs on the Primary Network (no need to transfer $AVAX to a Subnet, which may not be secured by meaningful value yet) and proofs of malicious behavior are processed on the Primary Network (a colluding Subnet could otherwise choose not to process a proof that would lead to their "lockers" being slashed).

_This native approach is comparable to the idea of using $ETH to secure DA on [EigenLayer](https://www.eigenlayer.xyz/) (without reusing stake) or $BTC to secure Cosmos Zones on [Babylon](https://babylonchain.io/) (but not using an external ecosystem)._

## Backwards Compatibility

* Existing Subnet Validation semantics for Primary Network Validators are not modified by this ACP. This means that all existing Subnet Validators can continue validating both the Primary Network and whatever Subnets they are validating. This change would just provide a new option for Subnet Validators that allows them to sacrifice their staking rewards for a smaller upfront $AVAX commitment and lower infrastructure costs.
* Support for this ACP would require adding a new transaction type to the P-Chain (i.e. `AddSubnetOnlyValidatorTx`). This new transaction is an execution-breaking change that would require a mandatory Avalanche Network upgrade to activate.

## Reference Implementation

A full implementation will be provided once this ACP is considered `Implementable`.
However, some initial ideas are presented below.

### `AddSubnetOnlyValidatorTx`

```text
type AddSubnetOnlyValidatorTx struct {
	// Metadata, inputs and outputs
	BaseTx `serialize:"true"`

	// Describes the validator
	// The NodeID included in [Validator] must be the Ed25519 public key.
	Validator `serialize:"true" json:"validator"`

	// ID of the subnet this validator is validating
	Subnet ids.ID `serialize:"true" json:"subnetID"`

	// [Signer] is the BLS key for this validator.
	// Note: We do not enforce that the BLS key is unique across all validators.
	// This means that validators can share a key if they so choose.
	// However, a NodeID does uniquely map to a BLS key
	Signer signer.Signer `serialize:"true" json:"signer"`

	// Where to send locked tokens when done validating
	LockOuts []*avax.TransferableOutput `serialize:"true" json:"lock"`

	// Where to send validation rewards when done validating
	ValidatorRewardsOwner fx.Owner `serialize:"true" json:"validationRewardsOwner"`

	// Where to send delegation rewards when done validating
	DelegatorRewardsOwner fx.Owner `serialize:"true" json:"delegationRewardsOwner"`

	// Fee this validator charges delegators as a percentage, times 10,000
	// For example, if this validator has DelegationShares=300,000 then they
	// take 30% of rewards from delegators
	DelegationShares uint32 `serialize:"true" json:"shares"`
}
```

_`AddSubnetOnlyValidatorTx` is almost the same as [`AddPermissionlessValidatorTx`](https://github.com/ava-labs/avalanchego/blob/638000c42e5361e656ffbc27024026f6d8f67810/vms/platformvm/txs/add_permissionless_validator_tx.go#L33-L58), the only exception being that `StakeOuts` are now `LockOuts`._

### `GetSubnetPeers`

To support tracking SOV IPs, a new message should be added to the P2P specification that allows Subnet Validators to request the IPs of all peers a node knows about on a Subnet (these Signed IPs won't be gossiped like they are for Primary Network Validators because they don't need to be known by the entire Avalanche
Network):

```text
message GetSubnetPeers {
    bytes subnet_id = 1;
}
```

_It would be a nice addition if a bloom filter could also be provided here so that an ANC only sends IPs of peers that the original sender does not know._

ANCs should respond to this incoming message with a [`PeerList` message](https://github.com/ava-labs/avalanchego/blob/638000c42e5361e656ffbc27024026f6d8f67810/proto/p2p/p2p.proto#L135-L148).

## Security Considerations

* Any Subnet Validator running in "Partial Sync Mode" will not be able to verify Atomic Imports on the P-Chain and will rely entirely on Primary Network consensus to only accept valid P-Chain blocks.
* High-throughput Subnets will be better isolated from the Primary Network and should improve its resilience (i.e. surges of traffic on some Subnet cannot destabilize a Primary Network Validator).
* Avalanche Network Clients (ANCs) must track IPs and provide allocated bandwidth for SOVs even though they are not Primary Network Validators.

## Open Questions

* To help orient the Avalanche Community around this wide-ranging and likely long-running conversation around the relationship between the Primary Network and Subnets, should we come up with a project name to describe the effort? I've been casually referring to all of these things as the _Astra Upgrade Track_ but am definitely up for discussion (it may be more confusing than it is worth to do this).

## Appendix

A draft of this ACP was posted in the ["Ideas" Discussion Board](https://github.com/avalanche-foundation/ACPs/discussions/10#discussioncomment-7373486), as suggested by the [ACP README](https://github.com/avalanche-foundation/ACPs#step-1-post-your-idea-to-github-discussions). Feedback on this draft was collected and addressed on both the "Ideas" Discussion Board and on [HackMD](https://hackmd.io/@patrickogrady/100k-subnets#Feedback-to-Draft-Proposal).
## Acknowledgements

Thanks to @luigidemeo1, @stephenbuttolph, @aaronbuchwald, @dhrubabasu, and @abi87 for their feedback on these ideas.

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

# ACP-131: Cancun Eips (/docs/acps/131-cancun-eips)

---
title: "ACP-131: Cancun Eips"
description: "Details for Avalanche Community Proposal 131: Cancun Eips"
edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/131-cancun-eips/README.md
---

| ACP | 131 |
| :--- | :--- |
| **Title** | Activate Cancun EIPs on C-Chain and Subnet-EVM chains |
| **Author(s)** | Darioush Jalali ([@darioush](https://github.com/darioush)), Ceyhun Onur ([@ceyonur](https://github.com/ceyonur)) |
| **Status** | Activated ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/139)) |
| **Track** | Standards, Subnet |

## Abstract

Enable new EVM opcodes and opcode changes in accordance with the following EIPs on the Avalanche C-Chain and Subnet-EVM chains:

- [EIP-4844: BLOBHASH opcode](https://eips.ethereum.org/EIPS/eip-4844)
- [EIP-7516: BLOBBASEFEE opcode](https://eips.ethereum.org/EIPS/eip-7516)
- [EIP-1153: Transient storage](https://eips.ethereum.org/EIPS/eip-1153)
- [EIP-5656: MCOPY opcode](https://eips.ethereum.org/EIPS/eip-5656)
- [EIP-6780: SELFDESTRUCT only in same transaction](https://eips.ethereum.org/EIPS/eip-6780)

Note that blob transactions from EIP-4844 are excluded and blocks containing them will still be considered invalid.

## Motivation

The listed EIPs were activated on Ethereum mainnet as part of the [Cancun upgrade](https://github.com/ethereum/execution-specs/blob/master/network-upgrades/mainnet-upgrades/cancun.md#included-eips).
This proposal is to activate them on the Avalanche C-Chain in the next network upgrade, to maintain compatibility with upstream EVM tooling, infrastructure, and developer experience (e.g., Solidity compiler defaults >= [0.8.25](https://github.com/ethereum/solidity/releases/tag/v0.8.25)). Additionally, it recommends the activation of the same EIPs on Subnet-EVM chains.

## Specification & Reference Implementation

The opcodes (EVM execution modifications) and block header modifications should be adopted as specified in the EIPs themselves. Other changes such as enabling new transaction types or mempool modifications are not in scope (specifically, blob transactions from EIP-4844 are excluded and blocks containing them are considered invalid). ANCs (Avalanche Network Clients) can adopt the implementation as specified in the [coreth](https://github.com/ava-labs/coreth) repository, which was adopted from the [go-ethereum v1.13.8](https://github.com/ethereum/go-ethereum/releases/tag/v1.13.8) release in this [PR](https://github.com/ava-labs/coreth/pull/550). In particular, note the following code:

- [Activation of new opcodes](https://github.com/ava-labs/coreth/blob/7b875dc21772c1bb9e9de5bc2b31e88c53055e26/core/vm/jump_table.go#L93)
- Activation of Cancun in the next Avalanche upgrade:
  - [C-Chain](https://github.com/ava-labs/coreth/pull/610)
  - [Subnet-EVM chains](https://github.com/ava-labs/subnet-evm/blob/fa909031ed148484c5072d949c5ed73d915ce1ed/params/config_extra.go#L186)
- `ParentBeaconRoot` is enforced to be included and the zero value [here](https://github.com/ava-labs/coreth/blob/7b875dc21772c1bb9e9de5bc2b31e88c53055e26/plugin/evm/block_verification.go#L287-L288). This field is retained for future use and compatibility with upstream tooling.
- Forbids blob transactions by enforcing `BlobGasUsed` to be 0 [here](https://github.com/ava-labs/coreth/pull/611/files#diff-532a2c6a5365d863807de5b435d8d6475552904679fd611b1b4b10d3bf4f5010R267).
_Note:_ Subnets are sovereign with regard to their validator set and state transition rules, and can choose to opt out of this proposal by making a code change in their respective Subnet-EVM client.

## Backwards Compatibility

The original EIP authors highlighted the following considerations. For full details, refer to the original EIPs:

- [EIP-4844](https://eips.ethereum.org/EIPS/eip-4844#backwards-compatibility): Blob transactions are not proposed to be enabled on Avalanche, so concerns related to mempool or transaction data availability are not applicable.
- [EIP-6780](https://eips.ethereum.org/EIPS/eip-6780#backwards-compatibility): "Contracts that depended on re-deploying contracts at the same address using CREATE2 (after a SELFDESTRUCT) will no longer function properly if the created contract does not call SELFDESTRUCT within the same transaction."

Adoption of this ACP modifies consensus rules for the C-Chain, therefore it requires a network upgrade. It is recommended that Subnet-EVM chains also adopt this ACP and follow the same upgrade time as Avalanche's next network upgrade.

## Security Considerations

Refer to the original EIPs for security considerations:

- [EIP-1153](https://eips.ethereum.org/EIPS/eip-1153#security-considerations)
- [EIP-4788](https://eips.ethereum.org/EIPS/eip-4788#security-considerations)
- [EIP-4844](https://eips.ethereum.org/EIPS/eip-4844#security-considerations)
- [EIP-5656](https://eips.ethereum.org/EIPS/eip-5656#security-considerations)
- [EIP-6780](https://eips.ethereum.org/EIPS/eip-6780#security-considerations)
- [EIP-7516](https://eips.ethereum.org/EIPS/eip-7516#security-considerations)

## Open Questions

No open questions.

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ACP-151: Use Current Block Pchain Height As Context (/docs/acps/151-use-current-block-pchain-height-as-context)

---
title: "ACP-151: Use Current Block Pchain Height As Context"
description: "Details for Avalanche Community Proposal 151: Use Current Block Pchain Height As Context"
edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/151-use-current-block-pchain-height-as-context/README.md
---

| ACP | 151 |
| :------------ | :----------------------------------------------------------------------------------------- |
| **Title** | Use current block P-Chain height as context for state verification |
| **Author(s)** | Ian Suvak ([@iansuvak](https://github.com/iansuvak)) |
| **Status** | Activated ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/152)) |
| **Track** | Standards |

## Abstract

Proposes that the ProposerVM pass inner VMs the P-Chain block height of the current block being built rather than the P-Chain block height of the parent block. Inner VMs use this P-Chain height for verifying aggregated signatures of Avalanche Interchain Messages (ICM). This will allow for a more reliable way to determine which validators should participate in signing the message, and remove unnecessary waiting periods.

## Motivation

Currently the ProposerVM passes the P-Chain height of the parent block to inner VMs, which use the value to verify ICM messages in the current block. Using the parent block's P-Chain height is necessary for verifying the proposer and reaching consensus on the current block, but it is not necessary for verifying ICM messages within the block. Using the P-Chain height of the current block being built would make operations that use ICM messages to modify the validator set, such as the ones specified in [ACP-77](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/77-reinventing-subnets/README.md), verifiable sooner and more reliably.
Currently, at least two new P-Chain blocks need to be produced after the relevant state change for it to be reflected for purposes of ICM aggregate signature verification.

## Specification

The [block context](https://github.com/ava-labs/avalanchego/blob/d2e9d12ed2a1b6581b8fd414cbfb89a6cfa64551/snow/engine/snowman/block/block_context_vm.go#L14) contains a `PChainHeight` field that is passed from the ProposerVM to the inner VMs building the block. It is later used by the inner VMs to fetch the canonical validator set for verification of ICM aggregated signatures. The `PChainHeight` currently passed in by the ProposerVM is the P-Chain height of the parent block. The proposed change is to instead have the ProposerVM pass in the P-Chain height of the current block.

## Backwards Compatibility

This change requires an upgrade to make sure that all validators verifying the validity of ICM messages use the same P-Chain height and therefore the same validator set. Prior to activation, nodes should continue to use the P-Chain height of the parent block.

## Reference Implementation

An implementation of this ACP for avalanchego can be found [here](https://github.com/ava-labs/avalanchego/pull/3459).

## Security Considerations

The ProposerVM needs to use the parent block's P-Chain height to verify proposers for security reasons, but no such restriction applies to verifying ICM message validity in the current block being built. Therefore, this should be a safe change.

## Acknowledgments

Thanks to [@StephenButtolph](https://github.com/StephenButtolph) and [@michaelkaplan13](https://github.com/michaelkaplan13) for discussion and feedback on this ACP.

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ACP-176: Dynamic Evm Gas Limit And Price Discovery Updates (/docs/acps/176-dynamic-evm-gas-limit-and-price-discovery-updates)

---
title: "ACP-176: Dynamic Evm Gas Limit And Price Discovery Updates"
description: "Details for Avalanche Community Proposal 176: Dynamic Evm Gas Limit And Price Discovery Updates"
edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/176-dynamic-evm-gas-limit-and-price-discovery-updates/README.md
---

| ACP | 176 |
| :- | :- |
| **Title** | Dynamic EVM Gas Limits and Price Discovery Updates |
| **Author(s)** | Stephen Buttolph ([@StephenButtolph](https://github.com/StephenButtolph)), Michael Kaplan ([@michaelkaplan13](https://github.com/michaelkaplan13)) |
| **Status** | Activated ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/178)) |
| **Track** | Standards |

## Abstract

Proposes that the C-Chain and Subnet-EVM chains adopt a dynamic fee mechanism similar to the one [introduced on the P-Chain as part of ACP-103](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/103-dynamic-fees/README.md), with modifications to allow block proposers (i.e. validators) to dynamically adjust the target gas consumption per unit time.

## Motivation

Currently, the C-Chain has a static gas target of [15,000,000 gas](https://github.com/ava-labs/coreth/blob/39ec874505b42a44e452b8809a2cc6d09098e84e/params/avalanche_params.go#L32) per [10 second rolling window](https://github.com/ava-labs/coreth/blob/39ec874505b42a44e452b8809a2cc6d09098e84e/params/avalanche_params.go#L36), and uses a modified version of the [EIP-1559](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-1559.md) dynamic fee mechanism to adjust the base fee of blocks based on the gas consumed in the previous 10 second window. This has two notable drawbacks:

1. The windowing mechanism used to determine the base fee of blocks can lead to outsized spikes in the gas price when there is a large block.
This is because after a large block that uses all of its gas limit, blocks that follow in the same window continue to result in increased gas prices even if they are relatively small blocks that are under the target gas consumption.

2. The static gas target can only be changed via a required network upgrade. This is cumbersome and makes it difficult for the network to adjust its capacity in response to performance optimizations or hardware requirement increases.

To better position Avalanche EVM chains, including the C-Chain, to be able to handle future increases in load, we propose replacing the above mechanism with one that better handles blocks that consume a large amount of gas, and that allows validators to dynamically adjust the target rate of consumption.

## Specification

### Gas Price Determination

The mechanism to determine the base fee of a block is the same as the one used in [ACP-103](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/103-dynamic-fees/README.md) to determine the gas price of a block on the P-Chain. This mechanism calculates the gas price for a given block $b$ based on the following parameters:
| Parameter | Description |
| --- | --- |
| $T$ | the target gas consumed per second |
| $M$ | minimum gas price |
| $K$ | gas price update constant |
| $C$ | maximum gas capacity |
| $R$ | gas capacity added per second |
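For intuition, the ACP-103 mechanism referenced above prices gas exponentially in the accumulated excess gas consumption above the target. The sketch below is an illustrative floating-point approximation under that assumption (real ANC implementations use an integer approximation of the exponential, and the exact formula is specified in ACP-103, not here):

```python
import math

def gas_price(m: float, k: float, excess: float) -> float:
    # Illustrative exponential pricing rule: the price starts at the
    # minimum M and grows exponentially with the excess gas consumed
    # above the target, scaled by the update constant K.
    return m * math.exp(excess / k)

# With zero excess the price sits at the minimum M; each time the
# excess grows by K * ln(2), the price doubles.
```

This is why, later in the ACP, scaling $K$ proportionally to $T$ keeps the price-doubling time roughly constant as the target changes.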
### Making $T$ Dynamic

As noted above, the gas price determination mechanism relies on a target gas consumption per second, $T$, in order to calculate the gas price for a given block. $T$ will be adjusted dynamically according to the following specification.

Let $q$ be a non-negative integer that is initialized to 0 upon activation of this mechanism. Let the target gas consumption per second be expressed as:

$$T = P \cdot e^{\frac{q}{D}}$$

where $P$ is the global minimum allowed target gas consumption rate for the network, and $D$ is a constant that helps control the rate of change of the target gas consumption.

After the execution of transactions in block $b$, the value of $q$ can be increased or decreased by up to $Q$. It must be the case that $\left|\Delta q\right| \leq Q$, or block $b$ is considered invalid. The amount by which $q$ changes after executing block $b$ is specified by the block builder.

Block builders (i.e. validators) may set their desired value for $T$ (i.e. their desired gas consumption rate) in their configuration, and their desired value for $q$ can then be calculated as:

$$q_{desired} = D \cdot \ln\left(\frac{T_{desired}}{P}\right)$$

Note that since $q_{desired}$ is only used locally and can be different for each node, it is safe for implementations to approximate the value of $\ln\left(\frac{T_{desired}}{P}\right)$ and round the resulting value to the nearest integer.
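The two formulas above can be sketched in floating point as follows (`P` and `D` are set to the C-Chain configuration values listed later in this ACP; production implementations would use integer approximations):

```python
import math

P = 1_000_000  # minimum target gas/second (C-Chain value from this ACP)
D = 2**25      # rate-of-change constant (C-Chain value from this ACP)

def q_for_target(t_desired: float) -> int:
    # q_desired = D * ln(T_desired / P), rounded to the nearest integer
    return round(D * math.log(t_desired / P))

def target_for_q(q: int) -> float:
    # T = P * e^(q / D)
    return P * math.exp(q / D)
```

Round-tripping a desired target through `q_for_target` and `target_for_q` recovers it to within a tiny fraction of a gas unit, which illustrates why local rounding of $q_{desired}$ is safe.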
When building a block, builders can calculate their next preferred value for $q$ based on the network's current value (`q_current`) according to:

```python
# Calculates a node's new desired value for q for a given block
def calc_next_q(q_current: int, q_desired: int, max_change: int) -> int:
    if q_desired > q_current:
        return q_current + min(q_desired - q_current, max_change)
    else:
        return q_current - min(q_current - q_desired, max_change)
```

As $q$ is updated after the execution of transactions within the block, $T$ is also updated such that $T = P \cdot e^{\frac{q}{D}}$ at all times. As the value of $T$ adjusts, the value of $R$ (capacity added per second) is also updated such that:

$$R = 2 \cdot T$$

This ensures that the gas price can increase and decrease at the same rate. The value of $C$ must also adjust proportionately, so we set:

$$C = 10 \cdot T$$

This means that the maximum stored gas capacity would be reached after 5 seconds in which no blocks have been accepted. In order to keep roughly constant the time it takes for the gas price to double at sustained maximum network capacity usage, the value of $K$ used in the gas price determination mechanism must be updated proportionally to $T$ such that:

$$K = 87 \cdot T$$

In order to have the gas price not be directly impacted by the change in $K$, we also update $x$ (excess gas consumption) proportionally. When updating $x$ after executing a block, instead of setting $x = x + G$ as specified in ACP-103, we set:

$$x_{n+1} = (x + G) \cdot \frac{K_{n+1}}{K_{n}}$$

Note that the values of $q$ (and thus also $T$, $R$, $C$, $K$, and $x$) are updated **after** the execution of block $b$, which means they only take effect in determining the gas price of block $b+1$. The change to each of these values in block $b$ does not affect the gas price for transactions included in block $b$ itself.
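Putting the post-execution updates above together, the bookkeeping step might look like the following (a floating-point illustration only; an ANC would use fixed-point integer arithmetic, and the function name is hypothetical):

```python
import math

def update_after_block(q: int, dq: int, x: float, gas_used: int,
                       p: int, d: int, q_limit: int) -> dict:
    """Apply the post-execution updates: bound |dq| by Q, recompute T,
    derive R, C, and K from it, and rescale the excess x so the price
    is not directly impacted by the change in K."""
    assert abs(dq) <= q_limit, "invalid block: |dq| exceeds Q"
    k_old = 87 * p * math.exp(q / d)      # K before this block's update
    q += dq
    t = p * math.exp(q / d)               # T = P * e^(q / D)
    k = 87 * t                            # K = 87 * T
    return {
        "q": q,
        "T": t,
        "R": 2 * t,                       # capacity added per second
        "C": 10 * t,                      # maximum stored gas capacity
        "K": k,
        "x": (x + gas_used) * k / k_old,  # x_{n+1} = (x + G) * K_{n+1}/K_n
    }
```

All of the returned values apply only to the *next* block's gas price, matching the note above.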
Allowing block builders to adjust the target gas consumption rate in blocks that they produce means that the effective target gas consumption rate should converge over time to the point where 50% of the voting stake weight wants it increased and 50% of the voting stake weight wants it decreased. This is because the number of blocks each validator produces is proportional to their stake weight.

As noted in ACP-103, the maximum gas consumed in a given period of time $\Delta{t}$ is $r + R \cdot \Delta{t}$, where $r$ is the remaining gas capacity at the end of the previous block execution. The upper bound across all $\Delta{t}$ is $C + R \cdot \Delta{t}$. Phrased differently, the maximum amount of gas that can be consumed by any given block $b$ is:

$$gasLimit_{b} = min(r + R \cdot \Delta{t}, C)$$

### Configuration Parameters

As noted above, the gas price determination mechanism depends on the values of $T$, $M$, $K$, $C$, and $R$ being set as parameters. $T$ is adjusted dynamically from its initial value based on $D$ and $P$, and the values of $R$ and $C$ are derived from $T$. Parameters at activation on the C-Chain are:
| Parameter | Description | C-Chain Configuration |
| - | - | - |
| $P$ | minimum target gas consumption per second | $1,000,000$ |
| $D$ | target gas consumption rate update constant | $2^{25}$ |
| $Q$ | target gas consumption rate update factor change limit | $2^{15}$ |
| $M$ | minimum gas price | $1 \cdot 10^{-18}$ AVAX |
| $K$ | initial gas price update factor | $87,000,000$ |
$P$ was chosen as a safe bound on the minimum target gas usage on the C-Chain. The current gas target of the C-Chain is $1,500,000$ per second. The target gas consumption rate will only stay at $P$ if the majority of the stake weight of the network specifies $P$ as their desired gas consumption rate target.

$D$ and $Q$ were chosen to give each block builder the ability to adjust the value of $T$ by roughly $\frac{1}{1024}$ of its current value, which matches the [gas limit bound divisor that Ethereum currently uses](https://github.com/ethereum/go-ethereum/blob/52766bedb9316cd6cddacbb282809e3bdfba143e/params/protocol_params.go#L26) to limit the amount that validators can change the execution layer gas limit in a single block. $D$ and $Q$ were scaled up by a factor of $2^{15}$ to provide block builders more granularity in the adjustments to $T$ that they can make.

$M$ was chosen as the minimum possible denomination of the native EVM asset, such that the gas price will be more likely to consistently be in a range of price discovery. The price discovery mechanism has already been battle tested on the P-Chain (and prior to that on Ethereum for blob gas prices as defined by EIP-4844), giving confidence that it will correctly react to any increase in network usage in order to prevent a DOS attack.

$K$ was chosen such that at sustained maximum capacity ($2 \cdot T$ gas/second), the fee rate will double every ~60.3 seconds. For comparison, EIP-1559 can double about every ~70 seconds, and the C-Chain's current implementation can double about every ~50 seconds, depending on the time between blocks.
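The ~60.3 second figure can be sanity-checked from the relationships above. Assuming the exponential pricing rule, the price doubles once the excess gas consumption grows by $K \cdot \ln(2)$, and at sustained maximum capacity ($2 \cdot T$ gas/second against a target of $T$) the excess grows at roughly $T$ per second, so the factors of $T$ cancel:

```python
import math

K_PER_T = 87             # K = 87 * T
EXCESS_GROWTH_PER_T = 1  # excess grows at ~T gas/second at sustained max usage

# Seconds for the gas price to double, independent of the current T:
doubling_time = K_PER_T * math.log(2) / EXCESS_GROWTH_PER_T  # ~60.3 seconds
```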
The maximum instantaneous price multiplier is:

$$e^\frac{C}{K} = e^\frac{10 \cdot T}{87 \cdot T} = e^\frac{10}{87} \simeq 1.12$$

### Choosing $T_{desired}$

As mentioned above, this new mechanism allows validators to specify their desired target gas consumption rate ($T_{desired}$) in their configuration, and the value that they set impacts the effective target gas consumption rate of the network over time. The higher the value of $T$, the more resources (storage, compute, etc.) that can be used by the network. When choosing what value makes sense for them, validators should consider the resources required to properly support that level of gas consumption, the utility the network provides by having higher transaction-per-second throughput, and the stability of the network should it reach that level of utilization. While Avalanche Network Clients can set default configuration values for the desired target gas consumption rate, each validator can choose to set this value independently based on their own considerations.

## Backwards Compatibility

The changes proposed in this ACP take effect only through a required network upgrade. Prior to its activation, the current gas limit and price discovery mechanisms will continue to be used. Its activation should have relatively minor compatibility effects on developer tooling. Notably, transaction formats, and thus wallets, are not impacted.

After its activation, given that the value of $C$ is dynamically adjusted, the maximum possible gas consumed by an individual block, and thus the maximum possible gas consumed by an individual transaction, will also dynamically adjust. The upper bound on the amount of gas consumed by a single transaction fluctuating means that transactions that are considered invalid at one time may be considered valid at a different point in time, and vice versa.
While potentially unintuitive, as long as the minimum gas consumption rate is set sufficiently high this should not have significant practical impact, and it is also currently the case on Ethereum mainnet.

> [!NOTE]
> After the activation of this ACP, concerns were raised around the latency of inclusion for large transactions when the fee is increasing. To address these concerns, block producers SHOULD only produce blocks when there is sufficient capacity to include large transactions. Prior to this ACP, the maximum size of a transaction was $15$ million gas. Therefore, the recommended heuristic is to only produce blocks when there is at least $\min(8 \cdot T, 15 \text{ million})$ capacity. _At the time of writing, this ensures transactions with up to 12.8 million gas will be able to bid for block space._

## Reference Implementation

This ACP was implemented and merged into Coreth behind the `Fortuna` upgrade flag. The full implementation can be found in [coreth@v0.14.1-acp-176.1](https://github.com/ava-labs/coreth/releases/tag/v0.14.1-acp-176.1).

## Security Considerations

This ACP changes the mechanism for determining the gas price on Avalanche EVM chains. The gas price is meant to adapt dynamically to changes in demand for using the chain. If it does not react as expected, the chain could be at risk of a DOS attack (if the usage price is too low), or could overcharge users during periods of low activity. This price discovery mechanism has already been employed on the P-Chain, but should again be thoroughly tested for use on the C-Chain prior to activation on the Avalanche Mainnet.

Further, this ACP also introduces a mechanism for validators to change the gas limit of the C-Chain. If this limit is set too high, it is possible that validator nodes will not be able to keep up with the processing of blocks.
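The recommended heuristic from the note above is straightforward to state in code (the 12.8 million gas figure quoted there corresponds to $8 \cdot T$ with $T = 1.6$ million gas/second at the time of writing; the function name is illustrative):

```python
def min_capacity_to_build(t: int) -> int:
    """Recommended heuristic: only produce a block once at least
    min(8 * T, 15 million) gas capacity has accumulated, so that
    large transactions can still bid for block space."""
    return min(8 * t, 15_000_000)
```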
An upper bound on the maximum possible gas limit could be considered to try to mitigate this risk, though further network upgrades would then be required to scale the network past that limit. ## Acknowledgments Thanks to the following non-exhaustive list of individuals for input, discussion, and feedback on this ACP. - [Emin Gün Sirer](https://x.com/el33th4xor) - [Luigi D'Onorio DeMeo](https://x.com/luigidemeo) - [Darioush Jalali](https://github.com/darioush) - [Aaron Buchwald](https://github.com/aaronbuchwald) - [Geoff Stuart](https://github.com/geoff-vball) - [Meag FitzGerald](https://github.com/meaghanfitzgerald) - [Austin Larson](https://github.com/alarso16) ## Copyright Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/). # ACP-181: P Chain Epoched Views (/docs/acps/181-p-chain-epoched-views) --- title: "ACP-181: P Chain Epoched Views" description: "Details for Avalanche Community Proposal 181: P Chain Epoched Views" edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/181-p-chain-epoched-views/README.md --- | ACP | 181 | | :------------ | :----------------------------------------------------------------------------------------- | | **Title** | P-Chain Epoched Views | | **Author(s)** | Cam Schultz [@cam-schultz](https://github.com/cam-schultz) | | **Status** | Activated ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/211)) | | **Track** | Standards | ## Abstract Proposes a standard P-Chain epoching scheme such that any VM that implements it uses a P-Chain block height known prior to the generation of its next block. This would enable VMs to optimize validator set retrievals, which currently must be done during block execution. This standard does *not* introduce epochs to the P-Chain's VM directly. Instead, it provides a standard that may be implemented by layers that inject P-Chain state into VMs, such as the ProposerVM.
## Motivation The P-Chain maintains a registry of L1 and Subnet validators (including Primary Network validators). Validators are added, removed, or their weights changed by issuing P-Chain transactions that are included in P-Chain blocks. When describing an L1 or Subnet's validator set, what is really being described are the weights, BLS keys, and Node IDs of the active validators at a particular P-Chain height. Use cases that require on-demand views of L1 or Subnet validator sets need to fetch validator sets at arbitrary P-Chain heights, while use cases that require up-to-date views need to fetch them as often as every P-Chain block. Epochs during which the P-Chain height is fixed would widen this window to a predictable epoch duration, allowing these use cases to implement optimizations such as pre-fetching validator sets once per epoch, or allowing more efficient backwards traversal of the P-Chain to fetch historical validator sets. ## Specification ### Assumptions In the following specification, we assume that a block $b_m$ has timestamp $t_m$ and P-Chain height $p_m$. ### Epoch Definition An epoch is defined as a contiguous range of blocks that share the same three values: - An Epoch Number - An Epoch P-Chain Height - An Epoch Start Time Let $E_N$ denote an epoch with epoch number $N$. $E_N$'s start time is denoted as $T_{start}^N$, and its P-Chain height as $P_N$. Let block $b_a$ be the block that activates this ACP. The first epoch ($E_0$) has $T_{start}^0 = t_{a-1}$, and $P_0 = p_{a-1}$. In other words, the first epoch start time is the timestamp of the last block prior to the activation of this ACP, and similarly, the first epoch P-Chain height is the P-Chain height of the last block prior to the activation of this ACP. ### Epoch Sealing An epoch $E_N$ is *sealed* by the first block with a timestamp greater than or equal to $T_{start}^N + D$, where $D$ is a constant defined in the network upgrade that activates this ACP.
Let $B_{S_N}$ denote the block that sealed $E_N$. The sealing block is defined to be a member of the epoch it seals. This guarantees that every epoch will contain at least one block. ### Advancing an Epoch We advance from the current epoch $E_N$ to the next epoch $E_{N+1}$ when the next block after $B_{S_N}$ is produced. This block will be a member of $E_{N+1}$, and will have the values: - $P_{N+1}$ equal to the P-Chain height of $B_{S_N}$ - $T_{start}^{N+1}$ equal to $B_{S_N}$'s timestamp - An epoch number of $N+1$, incrementing the previous epoch's epoch number by exactly $1$ ## Properties ### Epoch Duration Bounds Since an epoch's start time is set to the [timestamp of the sealing block of the previous epoch](#advancing-an-epoch), all epochs are guaranteed to have a duration of at least $D$, as measured from the epoch's starting time to the timestamp of the epoch's sealing block. However, since a sealing block is [defined](#epoch-sealing) to be a member of the epoch it seals, there is no upper bound on an epoch's duration, since that sealing block may be produced at any point in the future beyond $T_{start}^N + D$. ### Fixing the P-Chain Height When building a block, Avalanche blockchains use the P-Chain height [embedded in the block](#assumptions) to determine the validator set. If instead the epoch P-Chain height is used, then we can ensure that when a block is built, the validator set to be used for the next block is known. To see this, suppose block $b_m$ seals epoch $E_N$. Then the next block, $b_{m+1}$, will begin a new epoch, $E_{N+1}$, with $P_{N+1}$ equal to $b_m$'s P-Chain height, $p_m$. If instead $b_m$ does not seal $E_N$, then $b_{m+1}$ will continue to use $P_{N}$. Both candidates for $b_{m+1}$'s P-Chain height ($p_m$ and $P_N$) are known at $b_m$ build time.
## Use Cases ### ICM Verification Optimization For a validator to verify an ICM message, the signing L1/Subnet's validator set must be retrieved during block verification by traversing backward from the current P-Chain height to the P-Chain height provided by the ProposerVM. The traversal depth is highly variable, so to account for the worst case, VM implementations charge a large amount of gas to perform this verification. With epochs, validator set retrieval occurs at fixed P-Chain heights that increment at regular intervals, which provides opportunities to optimize this retrieval. For instance, validator set retrieval may be done asynchronously from block verification as soon as an epoch has been sealed. Further, validator sets at a given height can be more effectively cached or otherwise kept in memory, because the same height will be used to verify all ICM messages for the remainder of an epoch. Each of these VM optimizations allows ICM verification costs to be safely reduced by a significant amount within VM implementations. ### Improved Relayer Reliability Current ICM VM implementations verify ICM messages against the local P-Chain state, as determined by the P-Chain height set by the ProposerVM. Off-chain relayers perform the following steps to deliver ICM messages: 1. Fetch the sending chain's validator set at the verifying chain's current proposed height 1. Collect BLS signatures from that validator set to construct the signed ICM message 1. Submit the transaction containing the signed message to the verifying chain If the validator set changes between steps 1 and 3, the ICM message will fail verification. Epochs improve upon this by fixing the P-Chain height used to verify ICM messages for a duration of time that is predictable to off-chain relayers. A relayer should be able to derive the epoch boundaries based on the specification above, or it could retrieve that information via a node API.
Relayers could use that information to decide the validator set to query, knowing that it will be stable for the duration of the epoch. Further, VMs could relax the verification rules to allow ICM messages to be verified against the previous epoch as a fallback, eliminating edge cases around the epoch boundary. ## EVM ICM Verification Gas Cost Updates Since the activation of [ACP-30](https://github.com/avalanche-foundation/ACPs/tree/60cbfc32e7ee2cffed33d8daee980d7a85dded48/ACPs/30-avalanche-warp-x-evm#gas-costs), the cost to verify ICM messages in the Avalanche EVM implementations (i.e. `coreth` and `subnet-evm`) using the `WarpPrecompile` has been based on the worst-case verification flow, including the relatively expensive lookup of the source chain's validator set at an arbitrary P-Chain height used by each new block. This ACP allows for optimizing this verification, as described above. Prior to this ACP, the gas costs of relevant `WarpPrecompile` functions were:

```
const (
	GetVerifiedWarpMessageBaseCost  = 2
	GetBlockchainIDGasCost          = 2
	GasCostPerWarpSigner            = 500
	GasCostPerWarpMessageChunk      = 3_200
	GasCostPerSignatureVerification = 200_000
)
```

With optimizations implemented, based on the results of [new benchmarks](https://github.com/ava-labs/coreth/pull/1331) of the `WarpPrecompile` and roughly targeting processing 150 million gas per second, Avalanche EVM chains with this ACP activated use the following gas costs for the `WarpPrecompile`:

```
const (
	GetVerifiedWarpMessageBaseCost  = 750
	GetBlockchainIDGasCost          = 200
	GasCostPerWarpSigner            = 250
	GasCostPerWarpMessageChunk      = 512
	GasCostPerSignatureVerification = 125_000
)
```

While the performance of the operations behind `GetVerifiedWarpMessageBaseCost`, `GetBlockchainIDGasCost`, and `GasCostPerWarpMessageChunk` is not directly impacted by this ACP, updated benchmark numbers show the new gas costs to be better aligned with the relative time that the operations take to perform.
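To illustrate the magnitude of the change, the sketch below compares the old and new constants for a hypothetical message. The additive model in `verifyCost` (base charge plus per-signer, per-chunk, and one signature verification charge) and the function name itself are assumptions for illustration; the actual charging logic lives in the `coreth`/`subnet-evm` precompile and may differ in detail.

```go
package main

import "fmt"

// Gas cost constants before and after this ACP, taken from the spec above.
const (
	oldBase, newBase                 = 2, 750
	oldPerSigner, newPerSigner       = 500, 250
	oldPerChunk, newPerChunk         = 3_200, 512
	oldPerSigVerify, newPerSigVerify = 200_000, 125_000
)

// verifyCost is an illustrative additive model of the cost to fetch and
// verify one warp message: a base charge, a charge per signer, a charge
// per 32-byte message chunk, and one aggregate signature verification.
func verifyCost(base, perSigner, perChunk, perSigVerify, signers, chunks uint64) uint64 {
	return base + perSigner*signers + perChunk*chunks + perSigVerify
}

func main() {
	// A hypothetical message with 10 signers and 4 chunks.
	const signers, chunks = 10, 4
	before := verifyCost(oldBase, oldPerSigner, oldPerChunk, oldPerSigVerify, signers, chunks)
	after := verifyCost(newBase, newPerSigner, newPerChunk, newPerSigVerify, signers, chunks)
	fmt.Println(before, after) // 217802 130298
}
```

Under this model, verification of the example message drops from 217,802 to 130,298 gas, with the bulk of the saving coming from the cheaper signature verification.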
## Backwards Compatibility This change requires a network upgrade and is therefore not backwards compatible. Any downstream entities that depend on a VM's view of the P-Chain will also need to account for epoched P-Chain views. For instance, ICM messages are signed by an L1's validator set at a specific P-Chain height. Currently, the constructor of the signed message can in practice use the validator set at the P-Chain tip, since all deployed Avalanche VMs are at most behind the P-Chain by a fixed number of blocks. With epoching, however, the ICM message constructor must take into account the epoch P-Chain height of the verifying chain, which may be arbitrarily far behind the P-Chain tip. ## Reference Implementation The following pseudocode illustrates how an epoch may be calculated for a block:

```go
// D is the epoch duration.
var D time.Duration

type Epoch struct {
	PChainHeight uint64
	Number       uint64
	StartTime    time.Time
}

type Block interface {
	Timestamp() time.Time
	PChainHeight() uint64
	Epoch() Epoch
}

func GetPChainEpoch(parent Block) Epoch {
	parentTimestamp := parent.Timestamp()
	parentEpoch := parent.Epoch()
	epochEndTime := parentEpoch.StartTime.Add(D)
	if parentTimestamp.Before(epochEndTime) {
		// If the parent was issued before the end of its epoch, then it did not
		// seal the epoch.
		return parentEpoch
	}
	// The parent sealed the epoch, so the child is the first block of the new
	// epoch.
	return Epoch{
		PChainHeight: parent.PChainHeight(),
		Number:       parentEpoch.Number + 1,
		StartTime:    parentTimestamp,
	}
}
```

- If the parent sealed its epoch, the current block [advances the epoch](#advancing-an-epoch), refreshing the epoch height, incrementing the epoch number, and setting the epoch starting time. - Otherwise, the current block uses the current epoch height, number, and starting time, regardless of whether it seals the epoch. A full reference implementation of this ACP for avalanchego can be found [here](https://github.com/ava-labs/avalanchego/pull/4238).
### Setting the Epoch Duration The epoch duration $D$ is set on a network-wide level. For both Fuji (network ID 5) and Mainnet (network ID 1), $D$ will be set to 5 minutes upon activation of this ACP. Any changes to $D$ in the future would require another network upgrade. #### Changing the Epoch Duration Future network upgrades may change the value of $D$ to some new duration $D'$. $D'$ should take effect at the end of the current epoch, rather than at the activation time of the network upgrade that defines $D'$. This ensures that an in-progress epoch at the upgrade activation time cannot have a realized duration less than both $D$ and $D'$. ## Security Considerations ### Epoch P-Chain Height Skew Because epochs may have [unbounded duration](#epoch-duration-bounds), it is possible for a block's `PChainEpochHeight` to be arbitrarily far behind the tip of the P-Chain. This does not affect the *validity* of ICM verification within a VM that implements P-Chain epoched views, since the validator set at `PChainEpochHeight` is always known. However, the following considerations should be made under this scenario: 1. As validators exit the validator set, their physical nodes may be unavailable to serve BLS signature requests, making it more difficult to construct a valid ICM message 1. A valid ICM message may represent an attestation by a stale validator set. Signatures from validators that have exited the validator set between `PChainEpochHeight` and the current P-Chain tip will not represent active stake. Both of these scenarios may be mitigated by having shorter epoch lengths, which limit the delay in time between when the P-Chain is updated and when those updates are taken into account for ICM verification on a given L1, and by ensuring consistent block production, so that epochs always advance soon after $D$ time has passed.
### Excessive Validator Churn If an epoched view of the P-Chain is used by the consensus engine, then validator set changes over an epoch's duration will be concentrated into a single block at the epoch's boundary. Excessive validator churn can cause consensus failures and other dangerous behavior, so it is imperative that the amount of validator weight change at the epoch boundary is limited. One strategy to accomplish this is to queue validator set changes and spread them out over multiple epochs. Another strategy is to batch updates to the same validator together such that increases and decreases to that validator's weight cancel each other out. Given the primary use case of ICM verification improvements, which occur at the VM level, mechanisms to mitigate against this are omitted from this ACP. ## Open Questions - What should the epoch duration $D$ be set to? - Is it safe for `PChainEpochHeight` and `PChainHeight` to differ significantly within a block, due to [unbounded epoch duration](#epoch-duration-bounds)? ## Acknowledgements Thanks to [@iansuvak](https://github.com/iansuvak), [@geoff-vball](https://github.com/geoff-vball), [@yacovm](https://github.com/yacovm), [@michaelkaplan13](https://github.com/michaelkaplan13), [@StephenButtolph](https://github.com/StephenButtolph), and [@aaronbuchwald](https://github.com/aaronbuchwald) for discussion and feedback on this ACP. ## Copyright Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/). 
# ACP-191: Seamless L1 Creation (/docs/acps/191-seamless-l1-creation) --- title: "ACP-191: Seamless L1 Creation" description: "Details for Avalanche Community Proposal 191: Seamless L1 Creation" edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/191-seamless-l1-creation/README.md --- | ACP | 191 | | :- | :- | | **Title** | Seamless L1 Creations (CreateL1Tx) | | **Author(s)** | Martin Eckardt ([@martineckardt](https://github.com/martineckardt)), Aaron Buchwald ([@aaronbuchwald](https://github.com/aaronbuchwald)), Michael Kaplan ([@michaelkaplan13](https://github.com/michaelkaplan13)), Meaghan FitzGerald ([@meaghanfitzgerald](https://github.com/meaghanfitzgerald)) | | **Status** | Proposed ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/197))| | **Track** | Standards | ## Abstract This ACP introduces a new P-Chain transaction type called `CreateL1Tx` that simplifies the creation of Avalanche L1s. It consolidates three existing transaction types (`CreateSubnetTx`, `CreateChainTx`, and `ConvertSubnetToL1Tx`) into a single atomic operation. This streamlines the L1 creation process, removes the need for the intermediary Subnet creation step, and eliminates the management of temporary `SubnetAuth` credentials. ## Motivation [ACP-77](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/77-reinventing-subnets/README.md) introduced Avalanche L1s, providing greater sovereignty and flexibility compared to Subnets. However, creating an L1 currently requires a three-step process: 1. `CreateSubnetTx`: Create the Subnet record on the P-Chain and specify the `SubnetAuth` 2. `CreateChainTx`: Add a blockchain to the Subnet (can be called multiple times) 3. `ConvertSubnetToL1Tx`: Convert the Subnet to an L1, specifying the initial validator set and the validator manager location This process has several drawbacks: * It requires orchestrating three separate transactions that could be handled in one. 
* The `SubnetAuth` must be managed during creation but becomes irrelevant after conversion. * The multi-step process increases complexity and potential for errors. * It introduces unnecessary state transitions and storage overhead on the P-Chain. By introducing a single `CreateL1Tx` transaction, we can simplify the process, reduce overhead, and improve the developer experience for creating L1s. ## Specification ### New Transaction Type The following new transaction type is introduced:

```go
// ChainConfig represents the configuration for a chain to be created
type ChainConfig struct {
	// A human readable name for the chain; need not be unique
	ChainName string `serialize:"true" json:"chainName"`
	// ID of the VM running on the chain
	VMID ids.ID `serialize:"true" json:"vmID"`
	// IDs of the feature extensions running on the chain
	FxIDs []ids.ID `serialize:"true" json:"fxIDs"`
	// Byte representation of genesis state of the chain
	GenesisData []byte `serialize:"true" json:"genesisData"`
}

// CreateL1Tx is an unsigned transaction to create a new L1 with one or more chains
type CreateL1Tx struct {
	// Metadata, inputs and outputs
	BaseTx `serialize:"true"`

	// Chain configurations for the L1 (can be multiple)
	Chains []ChainConfig `serialize:"true" json:"chains"`

	// Chain where the L1 validator manager lives
	ManagerChainID ids.ID `serialize:"true" json:"managerChainID"`

	// Address of the L1 validator manager
	ManagerAddress types.JSONByteSlice `serialize:"true" json:"managerAddress"`

	// Initial pay-as-you-go validators for the L1
	Validators []*L1Validator `serialize:"true" json:"validators"`
}
```

The `L1Validator` structure follows the same definition as in [ACP-77](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/77-reinventing-subnets/README.md#convertsubnettol1tx). ### Transaction Processing When a `CreateL1Tx` transaction is processed, the P-Chain performs the following operations atomically: 1. Create a new L1. 2.
Create chain records for each chain configuration in the `Chains` array. 3. Set up the L1 validator manager with the specified `ManagerChainID` and `ManagerAddress`. 4. Register the initial validators specified in the `Validators` array. ### IDs * `subnetID`: The `subnetID` of the L1 is the transaction hash. * `blockchainID`: The `blockchainID` for each blockchain is defined as the SHA256 hash of the 37 bytes resulting from concatenating the 32 byte `subnetID` with the `0x00` byte and the 4 byte `chainIndex` (index in the `Chains` array within the transaction). * `validationID`: The `validationID` for the initial validators added through `CreateL1Tx` is defined as the SHA256 hash of the 36 bytes resulting from concatenating the 32 byte `subnetID` with the 4 byte `validatorIndex` (index in the `Validators` array within the transaction). Note: Even with this updated definition of the `blockchainID`s for chains created using this new flow, the `validationID`s of the L1's initial set of validators remain compatible with the existing reference validator manager contracts as defined [here](https://github.com/ava-labs/icm-contracts/blob/4a897ba913958def3f09504338a1b9cd48fe5b2d/contracts/validator-manager/ValidatorManager.sol#L247). ### Restrictions and Validation The `CreateL1Tx` transaction has the following restrictions and validation criteria: 1. The `Chains` array must contain at least one chain configuration 2. The `ManagerChainID` must be a valid blockchain ID, but cannot be the P-Chain blockchain ID 3. Validator nodes must have unique NodeIDs within the transaction 4. Each validator must have a non-zero weight and a non-zero balance 5.
The transaction inputs must provide sufficient AVAX to cover the transaction fee and all validator balances ### Warp Message After the transaction is accepted, the P-Chain must be willing to sign a `SubnetToL1ConversionMessage` with a `conversionID` corresponding to the new L1, similar to what would happen after a `ConvertSubnetToL1Tx`. This ensures compatibility with existing systems that expect this message, such as the validator manager contracts. ## Backwards Compatibility This ACP introduces a new transaction type and does not modify the behavior of existing transaction types. Existing Subnets and L1s created through the three-step process will continue to function as before. This change is purely additive and does not require any changes to existing L1s or Subnets. The existing transactions `CreateSubnetTx`, `CreateChainTx` and `ConvertSubnetToL1Tx` remain unchanged for now, but may be removed in a future ACP to ensure systems have sufficient time to update to the new process. ## Reference Implementation A reference implementation must be provided in order for this ACP to be considered implementable. ## Security Considerations The `CreateL1Tx` transaction follows the same security model as the existing three-step process. By making the L1 creation atomic, it reduces the risk of partial state transitions that could occur if one of the transactions in the three-step process fails. The same continuous fee mechanism introduced in ACP-77 applies to L1s created through this new transaction type, ensuring proper metering of validator resources. The transaction verification process must ensure that all validator properties are properly validated, including unique NodeIDs, valid BLS signatures, and sufficient balances. ## Rationale and Alternatives The primary alternative is to maintain the status quo: requiring three separate transactions to create an L1.
However, this approach has clear disadvantages in terms of complexity, transaction overhead, and user experience. Another alternative would be to modify the existing `ConvertSubnetToL1Tx` to allow specifying chain configurations directly. However, this would complicate the conversion process for existing Subnets and would not fully address the desire to eliminate the Subnet intermediary step for new L1 creation. The chosen approach of introducing a new transaction type provides a clean solution that addresses all identified issues while maintaining backward compatibility. ## Acknowledgements The idea for this PR was originally formulated by Aaron Buchwald in our discussion about the creation of L1s. Special thanks to the authors of ACP-77 for their groundbreaking work on Avalanche L1s, and to the projects that have shared their experiences and challenges with the current validator manager framework. ## Copyright Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/). # ACP-194: Streaming Asynchronous Execution (/docs/acps/194-streaming-asynchronous-execution) --- title: "ACP-194: Streaming Asynchronous Execution" description: "Details for Avalanche Community Proposal 194: Streaming Asynchronous Execution" edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/194-streaming-asynchronous-execution/README.md --- | ACP | 194 | | :--- | :--- | | **Title** | Streaming Asynchronous Execution | | **Author(s)** | Arran Schlosberg ([@ARR4N](https://github.com/ARR4N)), Stephen Buttolph ([@StephenButtolph](https://github.com/StephenButtolph)) | | **Status** | Proposed ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/196)) | | **Track** | Standards | ## Abstract Streaming Asynchronous Execution (SAE) decouples consensus and execution by introducing a queue upon which consensus is performed. 
A concurrent execution stream is responsible for clearing the queue and reporting a delayed state root for recording by later rounds of consensus. Validation of transactions to be pushed to the queue is lightweight but guarantees eventual execution. ## Motivation ### Performance improvements 1. Concurrent consensus and execution streams eliminate node context switching, reducing latency caused by each waiting on the other. In particular, "VM time" (akin to CPU time) more closely aligns with wall time since it is no longer eroded by consensus. This increases gas per wall-second even without an increase in gas per VM-second. 2. Lean, execution-only clients can rapidly execute the queue agreed upon by consensus, providing accelerated receipt issuance and state computation. Without the need to compute state _roots_, such clients can eschew expensive Merkle data structures. End users see expedited but identical transaction results. 3. Irregular stop-the-world events like database compaction are amortised over multiple blocks. 4. Introduces additional bursty throughput by eagerly accepting transactions, without a reduction in security guarantees. 5. Third-party accounting of non-data-dependent transactions, such as EOA-to-EOA transfers of value, can be performed prior to execution. ### Future features Performing transaction execution after consensus sequencing allows the usage of consensus artifacts in execution. This unblocks some additional future improvements: 1. Exposing a real-time VRF during transaction execution. 2. Using an encrypted mempool to reduce front-running. This ACP does not introduce these, but some form of asynchronous execution is required to correctly implement them. ### User stories 1. A sophisticated DeFi trader runs a highly optimised execution client, locally clearing the transaction queue well in advance of the network—setting the stage for HFT DeFi. 2. 
A custodial platform filters the queue for only those transactions sent to one of their EOAs, immediately crediting user balances. ## Description In all execution models, a block is _proposed_ and then verified by validators before being _accepted_. To assess a block's validity in _synchronous_ execution, its transactions are first _executed_ and only then _accepted_ by consensus. This immediately and implicitly _settles_ all of the block's transactions by including their execution results at the time of _acceptance_.

```mermaid
graph LR
  E[Executed] --> A[Accepted/Settled]
```

Under SAE, a block is considered valid if all of its transactions can be paid for when eventually _executed_, after which the block is _accepted_ by consensus. The act of _acceptance_ enqueues the block to be _executed_ asynchronously. In the future, some as-yet-unknown later block will reference the execution results and _settle_ all transactions from the _executed_ block.

```mermaid
graph LR
  A[Accepted] -->|variable delay| E[Executed]
  E -->|τ seconds| S[Settled]
  A -. guarantees .-> S
```

### Block lifecycle #### Proposing blocks The validator selection mechanism for block production is unchanged. However, block builders are no longer expected to execute transactions during block building. The block builder is expected to include transactions by building upon the most recently settled state and to apply worst-case bounds on the execution of the ancestor blocks since the most recently settled block. The worst-case bounds enforce minimum balances of sender accounts and the maximum required base fee. The worst-case bounds are described [below](#block-validity-and-building). Prior to adding a proposed block to consensus, all validators MUST verify that the block builder correctly enforced the worst-case bounds while building the block. This guarantees that the block can be executed successfully if it is accepted.
> [!NOTE] > The worst-case bounds guarantee does not provide assurance about whether or not a transaction will revert nor whether its computation will run out of gas by reaching the specified limit. The verification only ensures the transaction is capable of paying for the accrued fees. #### Accepting blocks Once a block is marked as accepted by consensus, the block is put in a FIFO execution queue. #### Executing blocks Each client runs a block executor in parallel, which constantly executes the blocks from the FIFO queue. In addition to executing the blocks, the executor provides deterministic timestamps for the beginning and end of each block's execution. Time is measured in two ways by the block executor: 1. The timestamp included in the block header. 2. The amount of gas charged during the execution of blocks. > [!NOTE] > Execution timestamps are more granular than block header timestamps to allow sub-second block execution times. As soon as there is a block available in the execution queue, the block executor starts processing the block. If the executor's current timestamp is prior to the current block's timestamp, the executor's timestamp is advanced to match the block's. Advancing the timestamp in this scenario results in unused gas capacity, reducing the gas _excess_ from which the price is determined. The block is then executed on top of the last executed (not settled) state. After executing the block, the executor advances its timestamp based on the gas usage of the block, also increasing the gas _excess_ for the pricing algorithm. The block's execution time is now timestamped and the block is available to be settled. #### Settling blocks Already-executed blocks are settled once a following block that includes the results of the executed block is accepted. The results are included by setting the state root to that of the last executed block and the receipt root to that of an MPT of all receipts since the last settlement, possibly from more than one block.
The following block's timestamp is used to determine which blocks to settle—blocks are settled if said timestamp is greater than or equal to the execution time of the executed block plus a constant delay. The additional delay amortises any sporadic slowdowns the block executor may have encountered. ## Specification ### Background ACP-103 introduced the following variables for calculating the gas price:
| Variable | Description |
|---|---|
| $T$ | the target gas consumed per second |
| $M$ | the minimum gas price |
| $K$ | the gas price update constant |
| $R$ | the gas capacity added per second |
ACP-176 provided a mechanism to make $T$ dynamic and set: $$ \begin{align} R &= 2 \cdot T \\ K &= 87 \cdot T \end{align} $$ The _excess_ actual consumption $x \ge 0$ beyond the target $T$ is tracked via numerical integration and used to calculate the gas price as: $$M \cdot \exp\left(\frac{x}{K}\right)$$ ### Gas charged We introduce $g_L$, $g_U$, and $g_C$ as the gas _limit_, _used_, and _charged_ per transaction, respectively. We define $$ g_C := \max\left(g_U, \frac{g_L}{\lambda}\right) $$ where $\lambda$ enforces a lower bound on the gas charged based on the gas limit. > [!NOTE] > $\dfrac{g_L}{\lambda}$ is rounded up by actually calculating $\dfrac{g_L + \lambda - 1}{\lambda}$ In all previous instances where execution referenced gas used, we now reference gas charged. For example, the gas excess $x$ will be modified by $g_C$ rather than $g_U$. ### Block size The constant time delay between block execution and settlement is defined as $\tau$ seconds. The maximum allowed size of a block is defined as: $$ \omega_B ~:= R \cdot \tau \cdot \lambda $$ Any block whose total sum of transaction gas limits exceeds $\omega_B$ MUST be considered invalid. ### Queue size The maximum allowed size of the execution queue _prior_ to adding a new block is defined as: $$ \omega_Q ~:= 2 \cdot \omega_B $$ Any block that attempts to be enqueued while the current size of the queue is larger than $\omega_Q$ MUST be considered invalid. > [!NOTE] > By restricting the size of the queue _prior_ to enqueueing the new block, $\omega_B$ is guaranteed to be the only limitation on block size. ### Block executor Upon the activation of SAE, the block executor's timestamp $t_e$ is initialised to the timestamp of the last accepted block.
Prior to executing a block with timestamp $t_b$, the executor's timestamp and excess are updated: $$ \begin{align} \Delta{t} &~:= \max\left(0, t_b - t_e\right) \\ t_e &~:= t_e + \Delta{t} \\ x &~:= \max\left(x - T \cdot \Delta{t}, 0\right) \\ \end{align} $$ The block is then executed with the gas price calculated from the current value of $x$. After executing a block that charged $g_C$ gas in total, the executor's timestamp and excess are updated: $$ \begin{align} \Delta{t} &~:= \frac{g_C}{R} \\ t_e &~:= t_e + \Delta{t} \\ x &~:= x + \Delta{t} \cdot (R - T) \\ \end{align} $$ > [!NOTE] > The update rule here assumes that $t_e$ is a timestamp that tracks the passage of time both by gas and by wall-clock time. $\frac{g_C}{R}$ MUST NOT be simply rounded. Rather, the gas accumulation MUST be left as a fraction. $t_e$ is now this block's execution timestamp. ### Handling gas target changes When a block is produced that modifies $T$, both the consensus thread and the execution thread will update to the modified $T$ after their own handling of the block. For example, restrictions of the queue size MUST be calculated based on the parent block's $T$. Similarly, the time spent executing a block MUST be calculated based on the parent block's $T$. ### Block settlement For a _proposed_ block that includes timestamp $t_b$, all ancestors whose execution timestamp satisfies $t_e \leq t_b - \tau$ are considered settled. Note that $t_e$ is not an integer as it tracks fractional seconds with gas consumption, which is not the case for $t_b$. The _proposed_ block MUST include the `stateRoot` produced by the execution of the most recently settled block. For any _newly_ settled blocks, the _proposed_ block MUST include all execution artifacts: - `receiptsRoot` - `logsBloom` - `gasUsed` The receipts root MUST be computed as defined in [EIP-2718](https://eips.ethereum.org/EIPS/eip-2718) except that the tree MUST be built from the concatenation of receipts from all blocks being settled.
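The two executor update rules can be sketched in Python using exact fractions, since $\frac{g_C}{R}$ MUST NOT be rounded. The parameter values below are assumptions for illustration ($T$ is dynamic under ACP-176, and $M$ is a nominal unit price), not specified constants:

```python
from fractions import Fraction
from math import exp

# Assumed values for illustration only.
T = 1_000_000      # target gas per second (dynamic under ACP-176)
R = 2 * T          # gas capacity added per second
K = 87 * T         # gas price update constant
M = 1              # minimum gas price (nominal units)

t_e = Fraction(0)  # executor timestamp; kept as an exact fraction
x = Fraction(0)    # gas excess

def before_execute(t_b: int) -> None:
    """Advance the executor clock to the block timestamp, decaying the excess."""
    global t_e, x
    dt = max(Fraction(0), t_b - t_e)
    t_e += dt
    x = max(x - T * dt, Fraction(0))

def gas_price() -> float:
    """M * exp(x / K), evaluated at the current excess."""
    return M * exp(x / K)

def after_execute(g_c: int) -> None:
    """Account for the gas charged by the executed block; g_c / R stays exact."""
    global t_e, x
    dt = Fraction(g_c, R)
    t_e += dt
    x += dt * (R - T)

before_execute(10)           # block timestamp 10s; executor starts at 0s
after_execute(3_000_000)     # the block charged 3M gas
assert t_e == Fraction(23, 2)   # 10 + 3,000,000 / 2,000,000 = 11.5s
assert x == 1_500_000           # (3/2) * (R - T)
```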
> [!NOTE] > If the block executor has fallen behind, the node may not be able to determine precisely which ancestors should be considered settled. If this occurs, validators MUST allow the block executor to catch up prior to deciding the block's validity. ### Block validity and building After determining which blocks to settle, all remaining ancestors of the new block must be inspected to determine the worst-case bounds on $x$ and account balances. Account nonces, by contrast, are known immediately. The worst-case bound on $x$ can be calculated by following the block executor update rules using $g_L$ rather than $g_C$. The worst-case bound on account balances can be calculated by charging the worst-case gas cost to the sender of a transaction along with deducting the value of the transaction from the sender's account balance. The `baseFeePerGas` field MUST be populated with the gas price based on the worst-case bound on $x$ at the start of block execution. ### Configuration Parameters As noted above, SAE requires the values of $\tau$ and $\lambda$ to be set as parameters, while the values of $\omega_B$ and $\omega_Q$ are derived from $T$. Parameters to specify for the C-Chain are:
| Parameter | Description | C-Chain Configuration |
| - | - | - |
| $\tau$ | duration between execution and settlement | $5s$ |
| $\lambda$ | minimum conversion from gas limit to gas charged | $2$ |
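For concreteness, the derived limits $\omega_B$ and $\omega_Q$ can be computed from these parameters. The target $T$ below is an assumed value, since $T$ is dynamic under ACP-176:

```python
# Assumed gas target; T is dynamic under ACP-176, so this is illustrative only.
T = 1_500_000            # target gas per second
R = 2 * T                # gas capacity added per second
TAU = 5                  # seconds between execution and settlement
LAMBDA = 2               # gas limit to gas charged divisor

omega_b = R * TAU * LAMBDA   # maximum sum of transaction gas limits per block
omega_q = 2 * omega_b        # maximum queue size prior to enqueueing a block

assert omega_b == 20 * T     # 30,000,000 gas with the assumed target
assert omega_q == 40 * T
```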
## Backwards Compatibility This ACP modifies the meaning of multiple fields in the block. A comprehensive list of changes will be produced once a reference implementation is available. Likely fields to change include: - `stateRoot` - `receiptsRoot` - `logsBloom` - `gasUsed` - `extraData` ## Reference Implementation A reference implementation is still a work-in-progress. This ACP will be updated to include a reference implementation once one is available. ## Security Considerations ### Worst-case transaction validity To avoid a DoS vulnerability on execution, we require an upper bound on transaction gas cost (i.e. amount $\times$ price) beyond the regular requirements for transaction validity (e.g. nonce, signature, etc.). We therefore introduced "worst-case cost" validity. We can prove that if every transaction were to use its full gas limit this would result in the greatest possible: 1. Consumption of gas units (by definition of the gas limit); and 2. Gas excess $x$ (and therefore gas price) at the time of execution. For a queue of blocks $Q = \\{i\\}_{i \ge 0}$ the gas excess $x_j$ immediately prior to execution of block $j \in Q$ is a monotonic, non-decreasing function of the gas usage of all preceding blocks in the queue; i.e. $x_j ~:=~ f(\\{g_i\\}_{i<j})$. Lowering the gas usage $g_k$ of any earlier block $k < j$ both reduces the excess added while that block executes ($\Delta^+x \propto g_k$) and increases the excess removed around it ($\Delta^-x \propto R - g_k$). Hence any decrease in $g_k$ yields a decrease of $x$ at least as large as predicted. The excess, and hence gas price, for every later block $x_{i>k}$ is therefore reduced: $$ \downarrow g_k \implies \begin{cases} \downarrow \Delta^+x \propto g_k \\ \uparrow \Delta^-x \propto R-g_k \end{cases} \implies \downarrow \Delta x_k \implies \downarrow M \cdot \exp\left(\frac{x_{i>k}}{K}\right) $$ Given maximal gas consumption under (1), the monotonicity of $f$ implies (2). Since we are working with non-negative integers, it follows that multiplying a transaction's gas limit by the hypothetical gas price of (2) results in its worst-case gas cost.
Any sender able to pay for this upper bound (in addition to value transfers) is guaranteed to be able to pay for the actual execution cost. Transaction _acceptance_ under worst-case cost validity is therefore a guarantee of _settlement_. ### Queue DoS protection Worst-case cost validity only protects against DoS at the point of execution but leaves the queue vulnerable to high-limit, low-usage transactions. For example, a malicious user could send a transfer-only transaction (21k gas) with a limit set to consume the block's full gas limit. Although they would have to hold sufficient funds to theoretically pay for all the reserved gas, they would never actually be charged this amount. Pushing a sufficient number of such transactions to the queue would artificially inflate the worst-case cost of other users. Therefore, the gas charged was modified from being equal to the gas usage to the definition above: $g_C := \max\left(g_U, \frac{g_L}{\lambda}\right)$. The gas limit is typically set higher than the predicted gas consumption to allow for a buffer should the prediction be imprecise. This precludes setting $\lambda := 1$. Conversely, setting $\lambda := \infty$ would allow users to attack the queue with high-limit, low-consumption transactions. Setting $\lambda ~:= 2$ allows for a 100% buffer on gas-usage estimates without penalising the sender, while still disincentivising falsely high limits. #### Upper bound on queue DoS Recall $R$ (gas capacity per second) for rate and $g_C$ (gas charged) as already defined. The actual gas excess $x_A$ is bounded above by the worst-case excess $x_W$, both of which can be used to calculate respective base fees $f_A$ and $f_W$ (the variable element of gas prices) from the existing exponential function: $$ f := M \cdot \exp\left( \frac{x}{K} \right). $$ Mallory is attempting to maximize the DoS ratio $$ D := \frac{f_W}{f_A} $$ by maximizing $\sum_i (g_L - g_U)_i$ to maximize $x_W - x_A$.
> [!TIP] > Although $D$ shadows a variable in ACP-176, the two are different enough that no confusion should arise. Recall that the excess increases according to $$ x := x + g \cdot \frac{(R - T)}{R} $$ Since the largest allowed size of the queue when enqueuing a new block is $\omega_Q$, we can derive an upper bound on the difference in the changes to worst-case and actual gas excess caused by the transactions in the queue before the new block is added: $$ \begin{align} \Delta x_A &\ge \frac{\omega_Q}{\lambda} \cdot \frac{(R - T)}{R} \\ \Delta x_W &= \omega_Q \cdot \frac{(R - T)}{R} \\ \Delta x_W - \Delta x_A &\le \omega_Q \cdot \frac{(R - T)}{R} - \frac{\omega_Q}{\lambda} \cdot \frac{(R - T)}{R} \\ &= \omega_Q \cdot \frac{(R - T)}{R} \cdot \left(1-\frac{1}{\lambda}\right) \\ &= \omega_Q \cdot \frac{(2 \cdot T - T)}{2 \cdot T} \cdot \left(1-\frac{1}{\lambda}\right) \\ &= \omega_Q \cdot \frac{T}{2 \cdot T} \cdot \left(1-\frac{1}{\lambda}\right) \\ &= \frac{\omega_Q}{2} \cdot \left(1-\frac{1}{\lambda}\right) \\ &= \frac{2 \cdot \omega_B}{2} \cdot \left(1-\frac{1}{\lambda}\right) \\ &= \omega_B \cdot \left(1-\frac{1}{\lambda}\right) \\ &= R \cdot \tau \cdot \lambda \cdot \left(1-\frac{1}{\lambda}\right) \\ &= R \cdot \tau \cdot (\lambda-1) \\ &= 2 \cdot T \cdot \tau \cdot (\lambda-1) \end{align} $$ Note that we can express Mallory's DoS quotient as: $$ \begin{align} D &= \frac{f_W}{f_A} \\ &= \frac{ M \cdot \exp \left( \frac{x_W}{K} \right)}{ M \cdot \exp \left( \frac{x_A}{K} \right)} \\ & = \exp \left( \frac{x_W - x_A}{K} \right). \end{align} $$ When the queue is empty (i.e. the execution stream has caught up with accepted transactions), the worst-case fee estimate $f_W$ is known to be the actual base fee $f_A$; i.e. $Q = \emptyset \implies D=1$.
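The algebra above can be sanity-checked numerically. The sketch below (illustrative, not normative) recomputes the gap between the worst-case and actual excess changes directly from $\omega_Q$ and the excess update rule, confirming the closed form $2 \cdot T \cdot \tau \cdot (\lambda - 1)$ for any assumed target $T$:

```python
from fractions import Fraction

def excess_gap_bound(T: int, tau: int, lam: int) -> Fraction:
    """Upper bound on Delta x_W - Delta x_A, following the derivation above."""
    R = 2 * T
    omega_b = R * tau * lam                 # maximum block size
    omega_q = 2 * omega_b                   # maximum queue size before enqueueing
    dx_w = Fraction(omega_q * (R - T), R)   # every gas limit fully consumed
    dx_a = Fraction(omega_q, lam) * Fraction(R - T, R)  # minimum gas charged
    return dx_w - dx_a

# The closed form 2 * T * tau * (lambda - 1) holds independently of T.
for T in (1_000_000, 1_500_000, 5_000_000):
    assert excess_gap_bound(T, tau=5, lam=2) == 2 * T * 5 * (2 - 1)
```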
The previous bound on $\Delta x_W - \Delta x_A$ also bounds Mallory's ability such that: $$ \begin{align} D &\le \exp \left( \frac{2 \cdot T \cdot \tau \cdot (\lambda-1)}{K} \right)\\ &= \exp \left( \frac{2 \cdot T \cdot \tau \cdot (\lambda-1)}{87 \cdot T} \right)\\ &= \exp \left( \frac{2 \cdot \tau \cdot (\lambda-1)}{87} \right)\\ \end{align} $$ Therefore, for the values suggested by this ACP: $$ \begin{align} D &\le \exp \left( \frac{2 \cdot 5 \cdot (2 - 1)}{87} \right)\\ &= \exp \left( \frac{10}{87} \right)\\ &\simeq 1.12\\ \end{align} $$ In summary, Mallory can require users to increase their gas price by at most ~12%. In practice, the gas price often fluctuates more than 12% on a regular basis. Therefore, this does not appear to be a significant attack vector. However, any deviation that dislodges the gas price bidding mechanism from a true bidding mechanism is of note. ## Appendix ### JSON RPC methods Although asynchronous execution decouples the transactions and receipts recorded by a specific block, APIs MUST NOT alter their behavior to mirror this. In particular, the API method `eth_getBlockReceipts` MUST return the receipts corresponding to the block's transactions, not the receipts settled in the block. #### Named blocks The Ethereum Mainnet APIs allow for retrieving blocks by named parameters that the API server resolves based on their consensus mechanism. Other than the _earliest_ (genesis) named block, which MUST be interpreted in the same manner, all other named blocks are mapped to SAE in terms of the _execution_ status of blocks and MUST be interpreted as follows: * _pending_: the most recently _accepted_ block; * _latest_: the block that was most recently _executed_; * _safe_ and _finalized_: the block that was most recently _settled_. > [!NOTE] > The finality guarantees of Snowman consensus remove any distinction between _safe_ and _finalized_. 
> Furthermore, the _latest_ block is not at risk of re-org, only of a negligible risk of data corruption local to the API node. ### Observations around transaction prioritisation As EOA-to-EOA transfers of value are entirely guaranteed upon _acceptance_, block builders MAY choose to prioritise other transactions for earlier execution. A reliable marker of such transactions is a gas limit of 21,000 as this is an indication from the sender that they do not intend to execute bytecode. However, this could delay the ability to issue transactions that depend on these EOA-to-EOA transfers. Block builders are free to make their own decisions around which transactions to include. ## Acknowledgments Thank you to the following non-exhaustive list of individuals for input, discussion, and feedback on this ACP. * [Aaron Buchwald](https://github.com/aaronbuchwald) * [Angharad Thomas](https://x.com/divergenceharri) * [Martin Eckardt](https://github.com/martineckardt) * [Meaghan FitzGerald](https://github.com/meaghanfitzgerald) * [Michael Kaplan](https://github.com/michaelkaplan13) * [Yacov Manevich](https://github.com/yacovm) ## Copyright Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/). # ACP-20: Ed25519 P2p (/docs/acps/20-ed25519-p2p) --- title: "ACP-20: Ed25519 P2p" description: "Details for Avalanche Community Proposal 20: Ed25519 P2p" edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/20-ed25519-p2p/README.md --- | ACP | 20 | | :--- | :--- | | **Title** | Ed25519 p2p | | **Author(s)** | Dhruba Basu ([@dhrubabasu](https://github.com/dhrubabasu)) | | **Status** | Proposed ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/21))| | **Track** | Standards | ## Abstract Support Ed25519 TLS certificates for p2p communications on the Avalanche network. Permit usage of Ed25519 public keys for Avalanche Network Client (ANC) NodeIDs. Support Ed25519 signatures in the ProposerVM. 
## Motivation Avalanche Network Clients (ANCs) rely on TLS handshakes to facilitate p2p communications. AvalancheGo (and by extension, the Avalanche Network) only supports TLS certificates that use RSA or ECDSA as the signing algorithm and explicitly prohibits any other signing algorithms. If a TLS certificate is not present, AvalancheGo will generate and persist to disk a 4096 bit RSA private key on start-up. This key is subsequently used to generate the TLS certificate which is also persisted to disk. Finally, the TLS certificate is hashed to generate a 20 byte NodeID. Authenticated p2p messaging was required when the network started and it was sufficient to simply use a hash of the TLS certificate. With the introduction of Snowman++, validators were then required to produce shareable message signatures. The Snowman++ block headers (specified [here](https://github.com/ava-labs/avalanchego/blob/v1.10.15/vms/proposervm/README.md#snowman-block-extension)) were then required to include the full TLS `Certificate` along with the `Signature`. However, TLS certificates support Ed25519 as their signing algorithm. Ed25519 is an IETF recommendation ([RFC8032](https://datatracker.ietf.org/doc/html/rfc8032)) with some very nice properties, a notable one being its small sizes: - 32 byte public key - 64 byte private key - 64 byte signature Because of the small size of the public key, it can be used for the NodeID directly with a marginal hit to size (an additional 12 bytes). Additionally, the brittle reliance on static TLS certificates can be removed. Using the Ed25519 private key, a TLS certificate can be generated in-memory on node startup and used for p2p communications. This reduces the maintenance burden on node operators as they will only need to back up the Ed25519 private key instead of the TLS certificate and the RSA private key. Ed25519 has wide adoption, including in the crypto industry.
A non-exhaustive list of things that use Ed25519 can be found [here](https://ianix.com/pub/ed25519-deployment.html). More information about the Ed25519 protocol itself can be found [here](https://ed25519.cr.yp.to). ## Specification ### Required Changes 1. Support registration of 32-byte NodeIDs on the P-chain 2. Generate an Ed25519 key by default (`staker.key`) on node startup 3. Use the Ed25519 key to generate a TLS certificate on node startup 4. Add support for Ed25519 keys + signatures to the proposervm 5. Remove the TLS certificate embedding in proposervm blocks when an Ed25519 NodeID is the proposer 6. Add support for Ed25519 in `PeerList` messages Changes to the p2p layer will be minimal as TLS handshakes are used to do p2p communication. Ed25519 will need to be added as a supported algorithm. The P-chain will also need to be modified to support registration of 32-byte NodeIDs. During serialization, the length of the NodeID is not serialized and was assumed to always be 20 bytes. Implementers of this ACP must take care to continue parsing old transactions correctly. This ACP could be implemented by adding a new tx type that requires Ed25519 NodeIDs only. If the implementer chooses to do this, a separate follow-up ACP must be submitted detailing the format of that transaction. ### Future Work In the future, usage of non-Ed25519 TLS certificates should be prohibited to remove any dependency on them. This will further secure the Avalanche network by reducing complexity. The path to doing so is not outlined in this ACP. ## Backwards Compatibility An implementation of this proposal should not introduce any backwards compatibility issues. NodeIDs that are 20 bytes should continue to be treated as hashes of TLS certificates. NodeIDs of 32 bytes (size of Ed25519 public key) should be supported following implementation of this proposal. ## Reference Implementation TLS certificate generation using an Ed25519 private key is standard. 
The golang standard library has a reference [implementation](https://github.com/golang/go/blob/go1.20.10/src/crypto/tls/generate_cert.go). Parsing TLS certificates and extracting the public key is also standard. AvalancheGo already contains [code](https://github.com/ava-labs/avalanchego/blob/638000c42e5361e656ffbc27024026f6d8f67810/staking/verify.go#L55-L65) to verify the public key from a TLS certificate. ## Security Considerations ### Validation Criteria Although Ed25519 is standardized in [RFC8032](https://datatracker.ietf.org/doc/html/rfc8032), the RFC does not define strict validation criteria. This has led to inconsistencies in the validation criteria across implementations of the signature scheme. This is unacceptable for any protocol that requires participants to reach consensus on signature validity. Henry de Valence highlights the complexity of this issue [here](https://hdevalence.ca/blog/2020-10-04-its-25519am). From [Chalkias et al. 2020](https://eprint.iacr.org/2020/1244.pdf): * The RFC 8032 and the NIST FIPS186-5 draft both require rejecting non-canonically encoded points, but not all of the implementations follow those guidelines. * The RFC 8032 allows optionality between using a permissive verification equation and a more strict verification equation. Different implementations use different equations, meaning that validation results can vary even across implementations that follow RFC 8032. Zcash adopted [ZIP-215](https://zips.z.cash/zip-0215) (proposed by Henry de Valence) to explicitly define the Ed25519 validation criteria. Implementers of this ACP _*must*_ use the ZIP-215 validation criteria. The [`ed25519consensus`](https://github.com/hdevalence/ed25519consensus) golang library is a minimal fork of golang's `crypto/ed25519` package with support for ZIP-215 verification. It is maintained by [Filippo Valsorda](https://github.com/FiloSottile) who also maintains many golang stdlib cryptography packages.
It is strongly recommended to use this library for golang implementations. ## Open Questions _Can this Ed25519 key be used in alternative communication protocols?_ Yes. Ed25519 can be used for alternative communication protocols like [QUIC](https://datatracker.ietf.org/group/quic/about) or [NOISE](http://www.noiseprotocol.org/noise.html). This ACP removes the reliance on TLS certificates and associates an Ed25519 public key with NodeIDs. This allows for experimentation with different communication protocols that may be better suited for a high throughput blockchain like Avalanche. _Can this Ed25519 key be used for Verifiable Random Functions?_ Yes. VRFs, as specified in [RFC9381](https://datatracker.ietf.org/doc/html/rfc9381), can be constructed using elliptic curves that are secure in the cryptographic random oracle model. Ed25519 test vectors are provided in the RFC for implementers of an Elliptic Curve VRF (ECVRF). This allows Avalanche validators to generate a VRF per block using their associated Ed25519 keys, including for Subnets. ## Acknowledgements Thanks to [@StephenButtolph](https://github.com/StephenButtolph) and [@patrick-ogrady](https://github.com/patrick-ogrady) for their feedback on these ideas. ## Copyright Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ACP-204: Precompile Secp256r1 (/docs/acps/204-precompile-secp256r1) --- title: "ACP-204: Precompile Secp256r1" description: "Details for Avalanche Community Proposal 204: Precompile Secp256r1" edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/204-precompile-secp256r1/README.md --- # ACP-204: Precompile for secp256r1 Curve Support | ACP | 204 | | :--- | :--- | | **Title** | Precompile for secp256r1 Curve Support | | **Author(s)** | [Santiago Cammi](https://github.com/scammi), [Arran Schlosberg](https://github.com/ARR4N) | | **Status** | Activated ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/212)) | | **Track** | Standards | ## Abstract This proposal introduces a precompiled contract that performs signature verifications for the secp256r1 elliptic curve on Avalanche's C-Chain. The precompile will be implemented at address `0x0000000000000000000000000000000000000100` and will enable native verification of P-256 signatures, significantly improving gas efficiency for biometric authentication systems, WebAuthn, and modern device-based signing mechanisms. ## Motivation The secp256r1 (P-256) elliptic curve is the standard cryptographic curve used by modern device security systems, including Apple's Secure Enclave, Android Keystore, WebAuthn, and Passkeys. However, Avalanche currently only supports secp256k1 natively, forcing developers to use expensive Solidity-based verification that costs [200k-330k gas per signature verification](https://hackmd.io/@1ofB8klpQky-YoR5pmPXFQ/SJ0nuzD1T#Smart-Contract-Based-Verifiers). 
This ACP proposes implementing EIP-7951's secp256r1 precompiled contract to unlock significant ecosystem benefits: ### Enterprise & Institutional Adoption - Reduced onboarding friction: Enterprises can leverage existing biometric authentication infrastructure instead of managing seed phrases or hardware wallets - Regulatory compliance: Institutions can utilize their approved device security standards and identity management systems - Cost optimization: ~50x gas reduction (from 200k-330k to 6,900 gas) makes enterprise-scale applications economically viable This ~50x gas cost reduction makes these use cases economically viable while maintaining the security properties institutions and users expect from their existing devices. Adding the precompiled contract at the same address as used in [RIP-7212](https://github.com/ethereum/RIPs/blob/master/RIPS/rip-7212.md) provides consistency across ecosystems and allows any libraries that have been developed to interact with the precompile to be used unmodified. ## Specification This ACP implements [EIP-7951](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-7951.md) for secp256r1 signature verification on Avalanche. The specification follows EIP-7951 exactly, with the precompiled contract deployed at address `0x0000000000000000000000000000000000000100`. ### Core Functionality - Input: 160 bytes (message hash + signature components r,s + public key coordinates x,y) - Output: success: 32 bytes `0x...01`; failure: no data returned - Gas Cost: 6,900 gas (based on EIP-7951 benchmarking) - Validation: Full compliance with NIST FIPS 186-3 specification ### Activation This precompile may be activated as part of Avalanche's next network upgrade. Individual Avalanche L1s and subnets could adopt this enhancement independently through their respective client software updates.
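The 160-byte input layout can be sketched as follows. This is an illustrative packing helper following the EIP-7951 field order; the numeric values are placeholders, not a valid P-256 signature:

```python
# EIP-7951 calldata layout: five 32-byte big-endian fields, no function
# selector. The values below are placeholders, not a valid signature.
PRECOMPILE_ADDRESS = "0x0000000000000000000000000000000000000100"

def p256_verify_input(msg_hash: int, r: int, s: int, x: int, y: int) -> bytes:
    """Pack message hash, signature (r, s), and public key (x, y)."""
    return b"".join(v.to_bytes(32, "big") for v in (msg_hash, r, s, x, y))

calldata = p256_verify_input(1, 2, 3, 4, 5)
assert len(calldata) == 160  # the precompile expects exactly 160 bytes

# On success the precompile returns 32 bytes ending in 0x01; on failure it
# returns no data (empty return data, not a revert).
SUCCESS = (1).to_bytes(32, "big")
assert len(SUCCESS) == 32 and SUCCESS[-1] == 1
```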
For complete technical specifications, validation requirements, and implementation details, refer to [EIP-7951](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-7951.md). ## Backwards Compatibility This ACP introduces a new precompiled contract and does not modify existing functionality. No backwards compatibility issues are expected since: 1. The precompile uses a previously unused address 2. No existing opcodes or consensus rules are modified 3. The change is additive and opt-in for applications Adoption requires a coordinated network upgrade for the C-Chain. Other EVM L1s can adopt this enhancement independently by upgrading their client software. ## Security Considerations ### Cryptographic Security - The secp256r1 curve is standardized by NIST and widely vetted - Security properties are comparable to secp256k1 (used by ECRECOVER) - Implementation follows NIST FIPS 186-3 specification exactly ### Implementation Security - Signature verification (vs public-key recovery) approach maximizes compatibility with existing P-256 ecosystem - No malleability check included to match NIST specification, but wrapper libraries may choose to add this - Input validation prevents invalid curve points and out-of-range signature components ### Network Security - Gas cost prevents potential DoS attacks through expensive computation - No consensus-level security implications beyond standard precompile considerations ## Reference Implementation The implementation builds upon existing work: 1. EIP-7951 Reference: The [Go-Ethereum implementation](https://github.com/ethereum/go-ethereum/pull/31991) of EIP-7951 provides the foundation 2. Coreth Implementation: Integration with Avalanche's C-Chain (Avalanche's fork of go-ethereum) 3.
Cryptographic Library: Implementation utilizes Go's standard library `crypto/ecdsa` and `crypto/elliptic` packages, which implement NIST P-256 per FIPS 186-3 ([Go documentation](https://pkg.go.dev/crypto/elliptic#P256)) The implementation follows established patterns for precompile integration, adding the contract to the precompile registry and implementing the verification logic using established cryptographic libraries. This ACP was implemented and merged into Coreth and Subnet-EVM behind the `Granite` upgrade flag. The full implementation can be found in [coreth@v0.15.4-rc.4](https://github.com/ava-labs/coreth/releases/tag/v0.15.4-rc.4), [subnet-evm@v0.8.0-fuji-rc.2](https://github.com/ava-labs/subnet-evm/releases/tag/v0.8.0-fuji-rc.2) and [libevm@v1.13.14-0.3.0.release](https://github.com/ava-labs/libevm/releases/tag/v1.13.14-0.3.0.release). ## Copyright Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/). # ACP-209: Eip7702 Style Account Abstraction (/docs/acps/209-eip7702-style-account-abstraction) --- title: "ACP-209: Eip7702 Style Account Abstraction" description: "Details for Avalanche Community Proposal 209: Eip7702 Style Account Abstraction" edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/209-eip7702-style-account-abstraction/README.md --- | ACP | 209 | | :--- | :--- | | **Title** | EIP-7702-style Set Code for EOAs | | **Author(s)** | Stephen Buttolph ([@StephenButtolph](https://github.com/StephenButtolph)), Arran Schlosberg ([@ARR4N](https://github.com/ARR4N)), Aaron Buchwald ([@aaronbuchwald](https://github.com/aaronbuchwald)), Michael Kaplan ([@michaelkaplan13](https://github.com/michaelkaplan13)) | | **Status** | Proposed ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/216)) | | **Track** | Standards | ## Abstract [EIP-7702](https://github.com/ethereum/EIPs/blob/e17d216b4e8b359703ddfbc84499d592d65281fb/EIPS/eip-7702.md) was activated on the Ethereum mainnet in 
May 2025 as part of the Pectra upgrade, and introduced a new "set code transaction" type that allows Externally Owned Accounts (EOAs) to set the code in their account. This enabled several UX improvements, including batching multiple operations into a single atomic transaction, sponsoring transactions on behalf of another account, and privilege de-escalation for EOAs. This ACP proposes adding a similar transaction type and functionality to Avalanche EVM implementations in order to have them support the same style of UX available on Ethereum. Modifications to the handling of account nonce and balances are required in order for it to be safe when used in conjunction with the streaming asynchronous execution (SAE) mechanism proposed in [ACP-194](https://github.com/avalanche-foundation/ACPs/tree/4a9408346ee408d0ab81050f42b9ac5ccae328bb/ACPs/194-streaming-asynchronous-execution). ## Motivation The motivation for this ACP is the same as the motivation described in [EIP-7702](https://github.com/ethereum/EIPs/blob/e17d216b4e8b359703ddfbc84499d592d65281fb/EIPS/eip-7702.md#motivation). However, EIP-7702 as implemented for Ethereum breaks invariants required for EVM chains that use the ACP-194 SAE mechanism. There has been strong community feedback in support of ACP-194 for its potential to: - Allow for increasing the target gas rate of Avalanche EVM chains, including the C-Chain - Enable the use of an encrypted mempool to prevent front-running - Enable the use of real time VRF during transaction execution Given the strong support for ACP-194, bringing EIP-7702-style functionality to Avalanche EVMs requires modifications to preserve its necessary invariants, described below. ### Invariants needed for ACP-194 There are [two invariants explicitly broken by EIP-7702](https://github.com/ethereum/EIPs/blob/e17d216b4e8b359703ddfbc84499d592d65281fb/EIPS/eip-7702.md#backwards-compatibility) that are required for SAE. They are: 1. 
An account balance can only decrease as a result of a transaction originating from that account. 1. An EOA nonce may not increase after transaction execution has begun. These invariants are required for SAE in order to be able to statically analyze (i.e. determine without executing the transaction) that a transaction: - Has the proper nonce - Will have sufficient balance to pay for its worst case transaction fee plus the balance it sends As described in the ACP-194, this lightweight analysis of transactions in blocks allows blocks to be accepted by consensus with the guarantee that they can be executed successfully. Only after block acceptance are the transactions within the block then put into a queue to be executed asynchronously. If the execution of transactions in the queue can decrease an EOA's account balance or change an EOA's current nonce, then block verification is unable to ensure that transactions in the block will be valid when executed. If transactions accepted into blocks can be invalidated prior to their execution, this poses DOS vulnerabilities because the invalidated transactions use up space in the pending execution queue according to their gas limits, but they do not pay any fees. Notably, EIP-7702's violation of these invariants already presents challenges for mempool verification on Ethereum. As [noted in the security considerations section](https://github.com/ethereum/EIPs/blob/e17d216b4e8b359703ddfbc84499d592d65281fb/EIPS/eip-7702.md#transaction-propagation), EIP-7702 makes it "possible to cause transactions from other accounts to become stale" and this "poses some challenges for transaction propagation" because nodes now cannot "statically determine the validity of transactions for that account". In synchronous execution environments such as Ethereum, these issues only pose potential DOS risks to the public transaction mempool. 
Under an asynchronous execution scheme, the issues pose DoS risks to the chain itself, since the invalidated transactions can be included in blocks prior to their execution.

## Specification

The same [set code transaction as specified in EIP-7702](https://github.com/ethereum/EIPs/blob/e17d216b4e8b359703ddfbc84499d592d65281fb/EIPS/eip-7702.md#set-code-transaction) will be added to Avalanche EVM implementations. The behavior of the transaction is the same as specified in EIP-7702. However, in order to keep the guarantee of transaction validity upon inclusion in an accepted block, two modifications are made to the transaction verification and execution rules:

1. Delegated accounts must maintain a "reserved balance" to ensure they can always pay for the transaction fees and transferred balance of transactions sent from the account. Reserved balances are managed via a new `ReservedBalanceManager` precompile, as specified below.
2. The handling of account nonces during execution is separated from the verification of nonces during block verification, as specified below.

### Reserved balances

To ensure that all transactions can cover their worst case transaction fees and transferred balances upon inclusion in an accepted block, a "reserved balance" mechanism is introduced for accounts. Reserved balances are required for delegated accounts to guarantee that subsequent transactions they send after setting code for their account can still cover their fees and transfer amounts, even if transactions from other accounts reduce the account's balance prior to their execution.

To allow for managing reserved balances, a new `ReservedBalanceManager` stateful precompile will be added at address `0x0200000000000000000000000000000000000006`. The `ReservedBalanceManager` precompile will have the following interface:

```solidity
interface IReservedBalanceManager {
    /// @dev Emitted whenever an account's reserved balance is modified.
    event ReservedBalanceUpdated(address indexed account, uint256 newBalance);

    /// @dev Called to deposit the native token balance provided into the account's
    /// reserved balance.
    function depositReservedBalance(address account) external payable;

    /// @dev Returns the current reserved balance for the given account.
    function getReservedBalance(address account) external view returns (uint256 balance);
}
```

The precompile will maintain a mapping of accounts to their current reserved balances. The precompile itself intentionally only allows for _increasing_ an account's reserved balance. Reducing an account's reserved balance is only ever done by the EVM when a transaction is sent from the account, as specified below.

During transaction verification, the following rules are applied:

- If the sender EOA account has not set code via an EIP-7702 transaction, no reserved balance is required.
  - The transaction is confirmed to be able to pay for its worst case transaction fee and transferred balance by looking at the sender account's regular balance and accounting for prior transactions it has sent that are still in the pending execution queue, as specified in ACP-194.
- Otherwise, if the sender EOA account has previously been delegated via an EIP-7702 transaction (even if that transaction is still in the pending execution queue), then the account's current "[settled](https://github.com/avalanche-foundation/ACPs/tree/4a9408346ee408d0ab81050f42b9ac5ccae328bb/ACPs/194-streaming-asynchronous-execution#settling-blocks)" reserved balance must be sufficient to cover the sum of the worst case transaction fees and balances sent for all of the transactions in the pending execution queue after the set code transaction.

During transaction execution, the following rules are applied:

- When initially deducting balance from the sender EOA account for the maximum transaction fee and balance sent with the transaction, the account's regular balance is used first.
The account's reserved balance is only reduced if the regular balance is insufficient.
- In the execution of code as part of a transaction, only regular account balances are available. The only possible modification to reserved balances during code execution is an increase via calls to the `ReservedBalanceManager` precompile's `depositReservedBalance` function.
- If there is a gas refund at the end of the transaction execution, the refund is first credited to the sender account's reserved balance, up to a maximum of the account's reserved balance prior to the transaction. Any remaining refund is credited to the account's regular balance.

### Handling of nonces

To account for EOA account nonces being incremented during contract execution, potentially invalidating transactions from that EOA that have already been accepted, we separate the rules for how nonces are verified during block verification from how they are handled during execution.

During block verification, all transactions must be verified to have a correct nonce value based on the latest "settled" state root, as defined in ACP-194, and the number of transactions from the sender account in the pending execution queue. Specifically, the required nonce is derived from the settled state root and incremented by one for each of the sender's transactions already accepted into the pending execution queue or current block.

During execution, the nonce used must be one greater than the latest nonce used by the account, accounting for both all transactions from the account and all contracts created by the account. This means that the actual nonce used by a transaction may differ from the nonce assigned in the raw transaction itself and used in verification.

Separating the nonce values used for block verification and execution ensures that transactions accepted in blocks cannot be invalidated by the execution of transactions before them in the pending execution queue.
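A minimal Python model of the balance deduction, refund, and verification-nonce rules above (illustrative only; the names and structure are hypothetical, not the actual EVM implementation):

```python
from dataclasses import dataclass

@dataclass
class Account:
    regular: int   # regular (spendable) balance, in wei
    reserved: int  # reserved balance held via the ReservedBalanceManager precompile

def deduct(acct: Account, amount: int) -> int:
    """Deduct the worst-case fee plus transferred value: the regular balance is
    used first; the reserved balance is only reduced if it is insufficient.
    Returns the reserved balance prior to the deduction (needed for refunds)."""
    reserved_before = acct.reserved
    from_regular = min(acct.regular, amount)
    acct.regular -= from_regular
    acct.reserved -= amount - from_regular  # verification guarantees this stays >= 0
    return reserved_before

def refund(acct: Account, amount: int, reserved_before: int) -> None:
    """Credit a gas refund: first to the reserved balance, capped at its
    pre-transaction value; any remainder goes to the regular balance."""
    to_reserved = max(0, min(amount, reserved_before - acct.reserved))
    acct.reserved += to_reserved
    acct.regular += amount - to_reserved

def required_nonce(settled_nonce: int, pending_from_sender: int) -> int:
    """Verification nonce: derived from the settled state root, incremented once
    for each of the sender's transactions already in the pending queue."""
    return settled_nonce + pending_from_sender

acct = Account(regular=5, reserved=10)
before = deduct(acct, 8)   # 5 taken from regular, 3 from reserved
refund(acct, 4, before)    # 3 restores reserved to 10, 1 goes to regular
assert (acct.regular, acct.reserved) == (1, 10)
assert required_nonce(7, 2) == 9
```

Note that the execution-time nonce is tracked separately and is not shown here; the sketch only captures that the verification nonce depends on settled state plus queued transactions.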
It still provides the same level of replay protection to transactions, as a transaction with a given nonce from an EOA can be accepted at most once. However, this separation has a subtle potential impact on contract creation. Previously, the resulting address of a contract could be deterministically derived from a contract creation transaction based on its sender address and the nonce set in the transaction. Now, since the nonce used in execution is separate from the one set in the transaction, this is no longer guaranteed.

## Backwards Compatibility

The introduction of EIP-7702 transactions will require a network upgrade to be scheduled. Upon activation, a few invariants will be broken:

- (From EIP-7702) `tx.origin == msg.sender` can only be true in the topmost frame of execution.
  - Once an account has been delegated, it can invoke multiple calls per transaction.
- (From EIP-7702) An EOA nonce may not increase after transaction execution has begun.
  - Once an account has been delegated, the account may call a create operation during execution, causing the nonce to increase.
- The contract address of a contract deployed by an EOA (via a transaction with an empty "to" address) can be derived from the sender address and the transaction's nonce.
  - If earlier transactions cause the nonce to increase before execution, the actual nonce used in a contract creation transaction may differ from the one in the transaction payload, altering the resulting contract address.
  - Note that this can only occur for accounts that have been delegated, and whose delegated code involves contract creation.

Additionally, at all points after the acceptance of a set code transaction, an EOA must have sufficient reserved balance to cover the sum of the worst case transaction fees and balances sent for all transactions in the pending execution queue after the set code transaction.
Notably, this means that:

- If a delegated account has zero reserved balance at any point, it will be unable to send any further transactions until a different account provides it with reserved balance via the `ReservedBalanceManager` precompile.
- In order to initially "self-fund" its own reserved balance, an account must deposit reserved balance via the `ReservedBalanceManager` precompile prior to sending a set code transaction.
- In order to transfer its full (regular + reserved) account balance, a delegated account must first deposit all of its regular balance into reserved balance.

In order to support wallets as seamlessly as possible, `eth_getBalance` RPC implementations should be updated to return the sum of an account's regular and reserved balances. Additionally, clients should provide a new `eth_getReservedBalance` RPC method to allow for querying the reserved balance of a given account.

## Reference Implementation

A reference implementation is not yet available and must be provided for this ACP to be considered implementable.

## Security Considerations

All of the [security considerations from the EIP-7702 specification](https://github.com/ethereum/EIPs/blob/e17d216b4e8b359703ddfbc84499d592d65281fb/EIPS/eip-7702.md#security-considerations) apply here as well, except for the considerations regarding "sponsored transaction relayers" and "transaction propagation". Those two considerations do not apply here, as they are accounted for by the modifications made to introduce reserved balances and to separate the handling of nonces in execution from verification.

Additionally, given that an account's reserved balance may need to be updated in state when a transfer is sent from the account, it must be confirmed that 21,000 gas is still a sufficiently high cost for the potentially more expensive operation.
Charging more gas for basic transfer transactions in this case could otherwise be an option, but would likely cause further backwards compatibility issues for smart contracts and off-chain services.

## Open Questions

1. Are the implementation and UX complexities regarding the `ReservedBalanceManager` precompile worth the UX improvements introduced by the new set code transaction type?
   - Except for having a contract spend an account's native token balance, most, if not all, of the UX improvements associated with the new transaction type could theoretically be implemented at the contract layer rather than the protocol layer. However, not all contracts provide support for account abstraction functionality via standards such as [ERC-2771](https://eips.ethereum.org/EIPS/eip-2771).
2. Are the implementation and UX complexities regarding the `ReservedBalanceManager` precompile worth giving delegate contracts the ability to spend native token balances?
   - An alternative may be to disallow delegate contracts from spending native token balances at all, and revert if they attempt to. They could use "wrapped native token" ERC-20 implementations (i.e., WAVAX) to achieve the same effect. However, this may be equally or more complex at the implementation level, and would cause incompatibilities in delegate contract implementations for Ethereum.

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ACP-224: Dynamic Gas Limit In Subnet-EVM (/docs/acps/224-dynamic-gas-limit-in-subnet-evm)

---
title: "ACP-224: Dynamic Gas Limit In Subnet-EVM"
description: "Details for Avalanche Community Proposal 224: Dynamic Gas Limit In Subnet-EVM"
edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/224-dynamic-gas-limit-in-subnet-evm/README.md
---

| ACP | 224 |
| :--- | :--- |
| **Title** | Introduce ACP-176-Based Dynamic Gas Limits and Fee Manager Precompile in Subnet-EVM |
| **Author(s)** | Ceyhun Onur ([@ceyonur](https://github.com/ceyonur)), Michael Kaplan ([@michaelkaplan13](https://github.com/michaelkaplan13)) |
| **Status** | Proposed ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/230)) |
| **Track** | Standards |

## Abstract

Proposes implementing [ACP-176](https://github.com/avalanche-foundation/ACPs/blob/aa3bea24431b2fdf1c79f35a3fd7cc57eeb33108/ACPs/176-dynamic-evm-gas-limit-and-price-discovery-updates/README.md) in Subnet-EVM, along with the addition of a new optional `ACP224FeeManagerPrecompile` that can be used to configure fee parameters on-chain dynamically after activation, in the same way that the existing `FeeManagerPrecompile` can be used today prior to ACP-176.

## Motivation

ACP-176 updated the EVM dynamic fee mechanism to more accurately achieve the target gas consumption on-chain. It also added a mechanism for the target gas consumption rate to be dynamically updated. Until now, ACP-176 has only been added to Coreth (C-Chain), primarily because most L1s prefer to control their fees and gas targets through the `FeeManagerPrecompile` and `FeeConfig` in the genesis chain configuration, and the existing `FeeManagerPrecompile` is not compatible with the ACP-176 fee mechanism.

[ACP-194](https://github.com/avalanche-foundation/ACPs/blob/aa3bea24431b2fdf1c79f35a3fd7cc57eeb33108/ACPs/194-streaming-asynchronous-execution/README.md) (SAE) depends on having a gas target and capacity mechanism aligned with ACP-176.
Specifically, there must be a known gas capacity added per second and a maximum gas capacity. The existing window-based fee mechanism employed by Subnet-EVM does not provide these properties because it does not have a fixed capacity rate, making it difficult to calculate worst-case bounds for gas prices. As such, adding ACP-176 to Subnet-EVM is a functional requirement for L1s to be able to use SAE in the future. Adding ACP-176 fee dynamics to Subnet-EVM also has the added benefit of aligning with Coreth, such that only a single mechanism needs to be maintained going forward.

While both ACP-176 and ACP-194 will be required upgrades for L1s, this ACP aims to provide similar controls for chains with a new precompile. A new dynamic fee configuration and fee manager precompile that maps well onto the ACP-176 mechanism will be added, optionally allowing admins to adjust fee parameters dynamically.

## Specification

### ACP-176 Parameters

This ACP uses the same parameters as the [ACP-176 specification](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/176-dynamic-evm-gas-limit-and-price-discovery-updates/README.md#configuration-parameters), and allows their values to be configured on a chain-by-chain basis.
The parameters and their current values used by the C-Chain are as follows:

| Parameter | Description | C-Chain Configuration |
| :--- | :--- | :--- |
| $T$ | target gas consumed per second | dynamic |
| $R$ | gas capacity added per second | 2*T |
| $C$ | maximum gas capacity | 10*T |
| $P$ | minimum target gas consumption per second | 1,000,000 |
| $D$ | target gas consumption rate update constant | 2^25 |
| $Q$ | target gas consumption rate update factor change limit | 2^15 |
| $M$ | minimum gas price | 1 Wei (10^-18 AVAX) |
| $K$ | gas price update constant ($KMult * T$) | 87*T |

### Prior Subnet-EVM Fee Configuration Parameters

Prior to this ACP, the Subnet-EVM fee configuration and fee manager precompile used the following parameters to control the fee mechanism:

**GasLimit**: Sets the max amount of gas consumed per block.

**TargetBlockRate**: Sets the target rate of block production in seconds used for fee adjustments. If the actual block rate is faster than this target, block gas cost will be increased, and vice versa.

**MinBaseFee**: The minimum base fee sets a lower bound on the EIP-1559 base fee of a block. Since the block's base fee sets the minimum gas price for any transaction included in that block, this effectively sets a minimum gas price for any transaction.

**TargetGas**: Specifies the targeted amount of gas (including block gas cost) to consume within a rolling 10s window. When the dynamic fee algorithm observes that network activity is above/below the `TargetGas`, it increases/decreases the base fee proportionally to how far above/below the target actual network activity is.

**BaseFeeChangeDenominator**: Divides the difference between actual and target utilization to determine how much to increase/decrease the base fee. A larger denominator indicates a slower-changing, stickier base fee, while a lower denominator allows the base fee to adjust more quickly.
**MinBlockGasCost**: Sets the minimum amount of gas to charge for the production of a block.

**MaxBlockGasCost**: Sets the maximum amount of gas to charge for the production of a block.

**BlockGasCostStep**: Determines how much to increase/decrease the block gas cost depending on the amount of time elapsed since the previous block. If the block is produced at the target rate, the block gas cost will stay the same as the block gas cost for the parent block. If it is produced faster/slower, the block gas cost will be increased/decreased by the step value for each second faster/slower than the target block rate accordingly. Note: if the `BlockGasCostStep` is set to a very large number, it effectively requires block production to go no faster than the `TargetBlockRate`. For example, if a block is produced two seconds faster than the target block rate, the block gas cost will increase by `2 * BlockGasCostStep`.

### ACP-176 Parameters in Subnet-EVM

ACP-176 will make the `GasLimit` and `BaseFeeChangeDenominator` configurations obsolete in Subnet-EVM. `TargetBlockRate`, `MinBlockGasCost`, `MaxBlockGasCost`, and `BlockGasCostStep` will also be removed by [ACP-226](https://github.com/avalanche-foundation/ACPs/tree/ce51dfab/ACPs/226-dynamic-minimum-block-times).

`MinGasPrice` is equivalent to `M` in ACP-176 and will be used to set the minimum gas price. This is similar to `MinBaseFee` in the old Subnet-EVM fee configuration, and gives roughly the same effect. Currently, the default value is `25 * 10^-9 AVAX` (25 nAVAX / 25 Gwei). This default will be changed to the minimum possible denomination of the native EVM asset (1 Wei), which is aligned with the C-Chain.

`TargetGas` is equivalent to `T` (target gas consumed per second) in ACP-176 and will be used to set the target gas consumed per second.

`TimeToDouble` will be used to control the speed of the fee adjustment.
In ACP-176, the gas price update constant $K$ is defined as $K = KMult \cdot T$, where $T$ is the target gas per second and $KMult$ is a multiplier. The `TimeToDouble` parameter configures $KMult$ directly via the relationship $KMult = \frac{TimeToDouble}{ln(2)}$. The default value for `TimeToDouble` is 60 seconds, yielding $KMult = \frac{60}{ln(2)} \approx 87$, which is aligned with the C-Chain (where $K = 87 \cdot T$). At sustained maximum capacity ($2T$ gas/second), this results in the gas price doubling approximately every 60 seconds.

As a result, the parameters will be set as follows:

| Parameter | Description | Default Value | Is Configurable |
| :--- | :--- | :--- | :--- |
| $T$ | target gas consumed per second | 1,000,000 | :white_check_mark: |
| $R$ | gas capacity added per second | 2*T | :x: |
| $C$ | maximum gas capacity | 10*T | :x: |
| $P$ | minimum target gas consumption per second | 1,000,000 | :x: |
| $D$ | target gas consumption rate update constant | 2^25 | :x: |
| $Q$ | target gas consumption rate update factor change limit | 2^15 | :x: |
| $M$ | minimum gas price | 1 Wei | :white_check_mark: |
| $K$ | gas price update constant ($KMult \cdot T$) | ~87*T | :white_check_mark: via `TimeToDouble` (default 60s, equivalent to $KMult \approx 87$) |

The gas capacity added per second (`R`) always being equal to `2*T` ensures that the gas price is capable of increasing and decreasing at the same rate. The values of `Q` and `D` affect the magnitude of change to `T` that each block can have, and the granularity at which the target gas consumption rate can be updated. The proposed values match the C-Chain, allowing each block to modify the current gas target by roughly $\frac{1}{1024}$ of its current value. This has provided sufficient responsiveness and granularity as is, removing the need to make `D` and `Q` dynamic or configurable. Similarly, 1,000,000 gas/second should be a low enough minimum target gas consumption for any EVM L1.
The target gas for a given L1 will be able to be increased from this value dynamically and has no maximum.

#### Max Capacity Factor (C) Design Rationale

The maximum gas capacity (`C`) is intentionally not configurable for L1s. [ACP-194 (SAE)](https://github.com/avalanche-foundation/ACPs/tree/main/ACPs/194-streaming-asynchronous-execution#block-size) defines the max gas capacity (i.e., max block size/block gas limit) as $2 \cdot T \cdot \tau \cdot \lambda$, where $\tau$ is the constant delay and $\lambda$ is the inverse of the minimum percentage of the gas limit charged. This definition ensures that the transaction queue can always be fully saturated. It also means that the max capacity of the C-Chain will actually double upon ACP-194 activation, since it is currently $2 \cdot T \cdot 5$ and it will become $2 \cdot T \cdot 5 \cdot 2$.

The original motivation to make this configurable was to allow for very high maximum gas usage by a single block, primarily to support large contract deployments. Given that SAE will be activated at the same time as ACP-224, the doubled max capacity further reduces the need for configurability. Additionally, Ethereum's Fusaka upgrade introduces a maximum transaction gas limit of $2^{24}$ (~16.7M), which makes this concern largely moot.

Given these considerations, `C` was changed to not be configurable for L1s because:

1. SAE provides clear rationale for the max capacity value (ensuring the transaction queue can always be fully saturated).
2. The future maximum transaction gas limit of 16.7M makes large contract deployments less of a concern.
3. There are very limited benefits to allowing `C` to be higher than the SAE-defined value in the future.
4. It is still a function of `T`, so it can be adjusted dynamically via the `ACP224FeeManagerPrecompile` if needed.
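As a quick numerical check of the capacity change described above, using $\tau = 5$ and $\lambda = 2$ from the text (the value of $T$ below is arbitrary and purely illustrative, since $T$ is dynamic in practice):

```python
def max_capacity(T: int, tau: int, lam: int) -> int:
    # ACP-194 max gas capacity: 2 * T * tau * lam
    return 2 * T * tau * lam

T = 1_500_000               # illustrative target gas/second
current_c = 10 * T          # pre-SAE C-Chain max capacity: C = 10 * T = 2 * T * 5
post_sae_c = max_capacity(T, tau=5, lam=2)  # SAE definition: 2 * T * 5 * 2

assert current_c == 2 * T * 5
assert post_sae_c == 2 * current_c  # max capacity doubles upon ACP-194 activation
```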
### Dynamic Gas Target Via Validator Preference

For L1s that want their gas target to be dynamically adjusted based on the preferences of their validator sets, the same mechanism introduced on the C-Chain in ACP-176 will be employed. Validators will be able to set their `gas-target` preference in their node's configuration, and block builders can then adjust the target excess in blocks that they propose based on their preference.

Validator target gas preferences are active in the following cases:

1. **Precompile not activated**: When the `ACP224FeeManagerPrecompile` is not activated at all, validators can control `targetGas` by using the `gas-target` preference in their node's configuration. If a validator does not set a `gas-target` preference, the parent block's gas target is maintained.
2. **`validatorTargetGas` enabled**: When the precompile is activated and `validatorTargetGas` is set to `true` (either via `initialFeeConfig` or by an admin calling `setFeeConfig` with `validatorTargetGas: true`), the precompile's stored `targetGas` is **not used for block building**. Validators adjust `targetExcess` from the parent block's current value within ACP-176's bounded update limits. If a validator does not set a `gas-target` preference, the parent block's gas target is maintained.

When `validatorTargetGas` is `false` (the default), the precompile's stored `targetGas` value is authoritative and validator preferences for the gas target are ignored.

### ACP224FeeManagerPrecompile

#### Solidity Interface

The `ACP224FeeManagerPrecompile` provides an on-chain interface for managing fee parameters dynamically. The `FeeConfig` struct contains all configuration parameters, including both numeric values and mode flags (`staticPricing`, `validatorTargetGas`). All changes are made through a single `setFeeConfig` call, keeping the interface minimal and straightforward. This design ensures configurations are applied atomically and consistently.
The precompile offers similar controls to the existing `FeeManagerPrecompile` implemented in Subnet-EVM [here](https://github.com/ava-labs/subnet-evm/tree/53f5305/precompile/contracts/feemanager). The Solidity interface is as follows:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.24;

import "./IAllowList.sol";

/// @title ACP-224 Fee Manager Interface
/// @notice Interface for managing dynamic gas limit and fee parameters
/// @dev Inherits from IAllowList for access control
interface IACP224FeeManager is IAllowList {
    /// @notice Configuration parameters for the dynamic fee mechanism
    /// @dev Fields are ordered so each mode flag precedes the parameter(s) it governs,
    /// reducing the risk of mis-ordering arguments.
    struct FeeConfig {
        bool validatorTargetGas; // When true, validators control targetGas via node preferences
        uint256 targetGas;       // Target gas consumption per second (T)
        bool staticPricing;      // When true, gas price is always minGasPrice
        uint256 minGasPrice;     // Minimum gas price in wei (M)
        uint256 timeToDouble;    // Seconds for gas price to double at max capacity
    }

    /// @notice Emitted when the fee configuration is updated
    /// @param sender Address that triggered the update
    /// @param oldFeeConfig Previous configuration
    /// @param newFeeConfig New configuration
    event FeeConfigUpdated(address indexed sender, FeeConfig oldFeeConfig, FeeConfig newFeeConfig);

    /// @notice Set the fee configuration
    /// @param config New fee configuration parameters
    function setFeeConfig(FeeConfig calldata config) external;

    /// @notice Get the current fee configuration
    /// @return config Current fee configuration
    function getFeeConfig() external view returns (FeeConfig memory config);

    /// @notice Get the block number when the fee config was last changed
    /// @return blockNumber Block number of the last configuration change
    function getFeeConfigLastChangedAt() external view returns (uint256 blockNumber);
}
```

For chains with the precompile activated, `setFeeConfig` can be used to
dynamically change all fee configuration parameters, including both numeric values and mode flags. Importantly, any updates made via calls to `setFeeConfig` in a transaction will take effect only as of _settlement_ of the transaction, not as of _acceptance_ or _execution_ (for transaction life cycles/statuses, refer to ACP-194 [here](https://github.com/avalanche-foundation/ACPs/tree/61d2a2a/ACPs/194-streaming-asynchronous-execution#description)). This ensures that all nodes apply the same worst-case bounds validation on transactions being accepted into the queue, since the worst-case bounds are affected by changes to the fee configuration.

#### FeeConfig Fields

The `FeeConfig` struct fields are shared by both `setFeeConfig` and `initialFeeConfig`:

- `validatorTargetGas` (bool): When `true`, validators control `targetGas` dynamically via their node preferences, using the same mechanism as the C-Chain (see [Dynamic Gas Target Via Validator Preference](#dynamic-gas-target-via-validator-preference)). The stored `targetGas` value is not used for block building; validators always adjust from the parent block's `targetExcess`. When `true`, `targetGas` **must** be `0`.
- `targetGas` (uint64): Target gas consumption per second ($T$). When `validatorTargetGas` is `false`, must be at least 1,000,000 (the minimum target gas consumption $P$). When `validatorTargetGas` is `true`, must be `0`, meaning validators determine the gas target from the parent block's state.
- `staticPricing` (bool): When `true`, the gas price is always `minGasPrice`, bypassing the ACP-176 dynamic fee mechanism entirely. When `staticPricing` is `true`, `timeToDouble` **must** be `0`.
- `minGasPrice` (uint64): Minimum gas price in wei ($M$). Must be greater than `0`. When `staticPricing` is `true`, this is the fixed gas price.
- `timeToDouble` (uint64): Seconds for the gas price to double when the chain is at maximum capacity.
Determines $K$ as described in [ACP-176 Parameters in Subnet-EVM](#acp-176-parameters-in-subnet-evm). Must be greater than `0` when `staticPricing` is `false`. When `staticPricing` is `true`, must be `0`.

**Toggling behavior:**

- **Enabling** (setting `validatorTargetGas` to `true` via `setFeeConfig`): `targetGas` must be `0`. Validators begin adjusting `targetExcess` from the parent block's current value. Since ACP-176's bounded update mechanism limits changes to roughly $\frac{1}{1024}$ of the current value per block, the transition is always gradual. The parent block's `targetExcess` (which was set by the precompile) provides an unambiguous starting point.
- **Disabling** (setting `validatorTargetGas` to `false` via `setFeeConfig`): The `targetGas` value provided in the same `setFeeConfig` call (which must be >= 1,000,000) is immediately authoritative for block building upon settlement. The caller should check the current effective gas target (from block headers or RPC) and provide an appropriate `targetGas` value, since validators may have drifted the effective gas target far from the previously stored value, and using a stale value could cause a dangerous sudden jump in gas capacity.

#### Initial Fee Configuration

All fee configuration for ACP-224 is done through the `ACP224FeeManagerPrecompile`. There is no separate genesis chain configuration for the new fee parameters. If the precompile is not activated at all, or no `initialFeeConfig` is provided, default values aligned with the C-Chain are used. The precompile can be activated with an `initialFeeConfig` to set the initial fee parameters at the activation timestamp. This follows the established Subnet-EVM pattern for [initial precompile configurations](https://build.avax.network/docs/avalanche-l1s/evm-configuration/customize-avalanche-l1#initial-precompile-configurations).
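The `FeeConfig` field constraints described above can be captured in a small validation sketch (Python used for illustration; the function and its behavior on violation are hypothetical, not part of the spec):

```python
P_MIN_TARGET_GAS = 1_000_000  # minimum target gas consumption per second (P)

def validate_fee_config(validator_target_gas: bool, target_gas: int,
                        static_pricing: bool, min_gas_price: int,
                        time_to_double: int) -> None:
    """Raise ValueError if a FeeConfig violates the ACP-224 field rules."""
    if validator_target_gas:
        # Validators control the gas target; the stored value must be 0.
        if target_gas != 0:
            raise ValueError("targetGas must be 0 when validatorTargetGas is true")
    elif target_gas < P_MIN_TARGET_GAS:
        raise ValueError("targetGas must be at least 1,000,000 when validatorTargetGas is false")
    if min_gas_price == 0:
        raise ValueError("minGasPrice must be greater than 0")
    if static_pricing:
        # Static pricing bypasses the dynamic mechanism; timeToDouble is unused.
        if time_to_double != 0:
            raise ValueError("timeToDouble must be 0 when staticPricing is true")
    elif time_to_double == 0:
        raise ValueError("timeToDouble must be greater than 0 when staticPricing is false")

# An admin-controlled, dynamically priced config passes silently:
validate_fee_config(False, 5_000_000, False, 25_000_000_000, 60)
# A validator-controlled, statically priced config also passes:
validate_fee_config(True, 0, True, 1_000_000_000, 0)
```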
If no admin, manager, or enabled addresses are provided, the precompile becomes **read-only**: `getFeeConfig` and `getFeeConfigLastChangedAt` remain callable, but `setFeeConfig` reverts. In this case the precompile has to be disabled and re-enabled with the desired admin, manager, or enabled addresses to change the configuration.

When `initialFeeConfig` is present, all three uint fields (`targetGas`, `minGasPrice`, `timeToDouble`) are **required**. This prevents silent misconfiguration from typos (e.g., a misspelled `"minGasprice"` would otherwise silently fall back to a default, potentially making the chain near-free to spam). Boolean mode flags (`staticPricing`, `validatorTargetGas`) default to `false` when omitted.

The precompile configuration will look like the following:

```json
{
  "acp224FeeManagerConfig": {
    "blockTimestamp": ,
    "adminAddresses": ["0x..."],
    "managerAddresses": ["0x..."],
    "enabledAddresses": ["0x..."],
    "initialFeeConfig": {
      "validatorTargetGas": true,
      "targetGas": 0,
      "staticPricing": false,
      "minGasPrice": 25000000000,
      "timeToDouble": 60
    }
  }
}
```

#### Example Configurations

**Custom fees, fully locked (read-only precompile, dynamic pricing):**

```json
{
  "acp224FeeManagerConfig": {
    "blockTimestamp": ,
    "initialFeeConfig": {
      "targetGas": 5000000,
      "minGasPrice": 25000000000,
      "timeToDouble": 60
    }
  }
}
```

No admin addresses means the precompile is read-only. All boolean flags default to `false`: dynamic pricing is active, and the precompile controls `targetGas`.

**Validators control target gas, fee parameters locked:**

```json
{
  "acp224FeeManagerConfig": {
    "blockTimestamp": ,
    "initialFeeConfig": {
      "validatorTargetGas": true,
      "targetGas": 0,
      "minGasPrice": 25000000000,
      "timeToDouble": 60
    }
  }
}
```

Read-only precompile. Validators adjust the gas target dynamically via their node preferences. `targetGas` is `0` because `validatorTargetGas` is `true`. `minGasPrice` and `timeToDouble` are locked.
**Static pricing, fully locked:**

```json
{
  "acp224FeeManagerConfig": {
    "blockTimestamp": ,
    "initialFeeConfig": {
      "targetGas": 15000000,
      "staticPricing": true,
      "minGasPrice": 25000000000,
      "timeToDouble": 0
    }
  }
}
```

Gas price is always 25 gwei regardless of demand. `timeToDouble` is `0` because `staticPricing` is `true`. `targetGas` still governs block gas capacity (15,000,000 gas/second).

**Static pricing with validator-controlled target gas:**

```json
{
  "acp224FeeManagerConfig": {
    "blockTimestamp": ,
    "initialFeeConfig": {
      "validatorTargetGas": true,
      "targetGas": 0,
      "staticPricing": true,
      "minGasPrice": 1000000000,
      "timeToDouble": 0
    }
  }
}
```

Flat 1 gwei gas price. Validators adjust throughput dynamically. `targetGas` and `timeToDouble` are both `0` because `validatorTargetGas` and `staticPricing` are `true`, respectively. Read-only precompile.

**Admin controls pricing, validators control target gas:**

```json
{
  "acp224FeeManagerConfig": {
    "blockTimestamp": ,
    "adminAddresses": ["0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC"],
    "initialFeeConfig": {
      "validatorTargetGas": true,
      "targetGas": 0,
      "minGasPrice": 25000000000,
      "timeToDouble": 60
    }
  }
}
```

Admin controls all fee parameters via `setFeeConfig`. Validators control the gas target (`targetGas` is `0` because `validatorTargetGas` is `true`). The admin can set `validatorTargetGas` to `false` via `setFeeConfig` to take back control of `targetGas` (and must then provide a `targetGas` >= 1,000,000).

#### Internal State

In addition to storing the latest fee configuration to be returned by `getFeeConfig`, the precompile will also maintain state storing the latest values of $q$ (the target excess) and $KMult$. These values can be derived from the `targetGas` and `timeToDouble` values given to the precompile, respectively.
The value of $q$ can be deterministically calculated using the same method Coreth currently employs to calculate a node's desired target excess [here](https://github.com/ava-labs/coreth/blob/b4c8300490afb7f234df704fdcc446f227e4ec2f/plugin/evm/upgrade/acp176/acp176.go#L170). Similarly, $KMult$ is approximated directly from `timeToDouble` according to:

$$KMult = \frac{TimeToDouble}{ln(2)}$$

where $ln(2) \approx 0.69$. The resulting ACP-176 gas price update constant is then $K = KMult \cdot T$. Note that `timeToDouble` is the user-facing configuration parameter, while $KMult$ is the internally derived multiplier. When $T$ changes (via `setFeeConfig` or validator preferences), $K$ adjusts proportionally since $KMult$ remains fixed.

Similar to the [desired target excess calculation in Coreth](https://github.com/ava-labs/coreth/blob/0255516f25964cf4a15668946f28b12935a50e0c/plugin/evm/upgrade/acp176/acp176.go#L170), which takes a node's desired gas target and calculates its desired target excess value, the `ACP224FeeManagerPrecompile` will use binary search to determine the resulting dynamic target excess value given the `targetGas` value passed to `setFeeConfig`. All blocks accepted after the settlement of such a call must have the correct target excess value as derived from the binary search result.
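Assuming the ACP-176 relation between the target excess and the effective gas target, $T = P \cdot e^{q/D}$, the binary search and the $KMult$ derivation can be sketched as follows (an illustrative Python sketch, not the Coreth implementation; helper names are hypothetical):

```python
import math

P = 1_000_000   # minimum target gas consumption per second
D = 2 ** 25     # target gas consumption rate update constant

def gas_target(q: int) -> float:
    # Effective gas target for a given target excess q (ACP-176: T = P * e^(q/D))
    return P * math.exp(q / D)

def desired_target_excess(target_gas: int) -> int:
    # Binary search for the smallest q such that gas_target(q) >= target_gas
    lo, hi = 0, 64 * D  # upper bound is ample for any practical target
    while lo < hi:
        mid = (lo + hi) // 2
        if gas_target(mid) >= target_gas:
            hi = mid
        else:
            lo = mid + 1
    return lo

def kmult(time_to_double: float) -> float:
    # KMult derived from the user-facing timeToDouble parameter
    return time_to_double / math.log(2)

q = desired_target_excess(2_000_000)
assert gas_target(q) >= 2_000_000 and gas_target(q - 1) < 2_000_000
assert round(kmult(60)) == 87  # default 60s => KMult ~ 87, aligned with the C-Chain
```

The search is exact on integers, which avoids depending on floating point logarithms for consensus-relevant values; the production code performs an analogous integer-domain computation.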
#### Configuration Precedence

Configuration precedence is as follows:

```mermaid
flowchart TD
    B{Is ACP224FeeManager precompile active?}
    B -- Yes --> C{Is validatorTargetGas enabled?}
    C -- Yes --> VP[Delegate targetGas to validator preferences]
    C -- No --> PC[Use targetExcess from precompile storage]
    B -- No --> VP2[Use default targetGas with validator preferences]
    VP --> F{Is gas-target set in node config?}
    F -- Yes --> G[Calculate targetExcess from preference and allowed bounds]
    F -- No --> I[Use parent block ACP176 gas target]
    VP2 --> F
    subgraph pricing [Gas Price Determination]
        K{Is staticPricing enabled?}
        K -- Yes --> L[Gas price = minGasPrice]
        K -- No --> M["Gas price = minGasPrice * e^(excess/K)"]
    end
    PC --> pricing
    G --> pricing
    I --> pricing
```

#### Adjustment to ACP-176 calculations for price discovery

ACP-176 defines the gas price for a block as:

$$p = M \cdot e^{\frac{x}{K}}$$

Now, whenever $M$ (`minGasPrice`) or $K$ (derived from `timeToDouble`) is changed via the `ACP224FeeManagerPrecompile`, $x$ must also be updated. Specifically, when $M$ is updated from $M_0$ to $M_1$, $x$ must also be updated from $x_0$ (the current excess) to $x_1$. In theory, $x_1$ could be calculated directly as:

$$x_1 = \ln\left(\frac{M_0}{M_1}\right) \cdot K + x_0$$

However, this would introduce floating point inaccuracies. Instead, $x_1$ can be approximated using binary search to find the minimum non-negative integer such that the resulting gas price calculated using $M_1$ is greater than or equal to the current gas price prior to the change in $M$. In effect, this means that both reducing the minimum gas price and increasing the minimum gas price to a value less than the current gas price have no immediate effect on the current gas price. However, increasing the minimum gas price to a value greater than the current gas price will cause the gas price to immediately step up to the new minimum value.
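A sketch of that binary search follows. This is illustrative only: names and the search bound are assumptions, and a production implementation would use fixed-point arithmetic rather than floats.

```python
import math

def excess_after_min_price_change(m0: int, m1: int, x0: int, k: int) -> int:
    """Find the minimum non-negative integer x1 such that the gas price
    under the new minimum, m1 * e^(x1/k), is >= the price prior to the
    change, m0 * e^(x0/k)."""
    old_price = m0 * math.exp(x0 / k)
    lo, hi = 0, x0 + 64 * k  # wide enough upper bound for any m1 >= m0 / e^64
    while lo < hi:
        mid = (lo + hi) // 2
        if m1 * math.exp(mid / k) >= old_price:
            hi = mid
        else:
            lo = mid + 1
    return lo
```

Raising the minimum above the current price yields $x_1 = 0$ (the price steps up to the new minimum), while lowering it leaves the effective price unchanged, matching the behavior described above.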
Similarly, when $K$ is updated from $K_0$ to $K_1$, $x$ must also be updated from $x_0$ (the current excess) to $x_1$, where $x_1$ is calculated as:

$$x_1 = x_0 \cdot \frac{K_1}{K_0}$$

This ensures that the current gas price stays the same when $K$ is changed. Changes to $K$ only impact how quickly or slowly the gas price can change going forward based on usage.

## Backwards Compatibility

ACP-224 will require a network upgrade in order to activate the new fee mechanism. The `ACP224FeeManagerPrecompile` requires a separate activation and can be activated before or after the ACP-224 fee mechanism. If activated before, the precompile operates in a pending state where configuration can be set but does not take effect until ACP-224 activates (see [Early Activation of `ACP224FeeManagerPrecompile`](#early-activation-of-acp224feemanagerprecompile)).

Activation of the ACP-224 mechanism will deactivate the prior fee mechanism and the prior `FeeManagerPrecompile`. This ensures that there is no ambiguity or overlap between legacy and new pricing logic. The `ACP224FeeManagerPrecompile` with `initialFeeConfig` replaces the prior genesis chain configuration for fee parameters. For existing networks, a network upgrade that activates the `ACP224FeeManagerPrecompile` with `initialFeeConfig` should be used to configure the new fee parameters.

ACP-224 will be activated at the same time as ACP-194 (SAE) in the same network upgrade (Helicon). This coordinated activation is required because ACP-194 depends on the gas target and capacity mechanism defined by ACP-176, which this ACP implements for Subnet-EVM. Networks that do not activate ACP-224 will not be able to use ACP-194.

### Early Activation of `ACP224FeeManagerPrecompile`

For continuity purposes, the `ACP224FeeManagerPrecompile` can be activated before the Helicon network upgrade (which activates ACP-224 and ACP-194). This allows L1 admins to prepare their fee configuration ahead of time.
When the precompile is activated before Helicon:

1. **Configuration calls are accepted**: The precompile's `setFeeConfig` can be called to set desired fee parameters and modes. The values are stored in the precompile's state. If `initialFeeConfig` is provided at activation, its values are also stored.
2. **Values are pending**: The stored fee configuration does not affect the current fee mechanism. The existing `FeeManagerPrecompile` and legacy fee mechanism remain active and in control.
3. **Activation applies stored values**: When Helicon activates, the stored fee configuration immediately takes effect. The legacy fee mechanism and `FeeManagerPrecompile` are deactivated at this point.

This approach ensures a smooth migration path where admins can test and verify their configuration before it becomes active, avoiding any race conditions at the moment of activation.

## Reference Implementation

A reference implementation is not yet available and must be provided for this ACP to be considered implementable.

## Security Considerations

Generally, this has the same security considerations as ACP-176. However, due to the dynamic nature of parameters exposed in the `ACP224FeeManagerPrecompile` there is an additional risk of misconfiguration. Misconfiguration of parameters could leave the network vulnerable to a DoS attack or result in higher transaction fees than necessary.

## Acknowledgements

* [Stephen Buttolph](https://github.com/StephenButtolph)
* [Arran Schlosberg](https://github.com/ARR4N)
* [Austin Larson](https://github.com/alarso16)

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ACP-226: Dynamic Minimum Block Times (/docs/acps/226-dynamic-minimum-block-times)

---
title: "ACP-226: Dynamic Minimum Block Times"
description: "Details for Avalanche Community Proposal 226: Dynamic Minimum Block Times"
edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/226-dynamic-minimum-block-times/README.md
---

| ACP | 226 |
| :- | :- |
| **Title** | Dynamic Minimum Block Times |
| **Author(s)** | Stephen Buttolph ([@StephenButtolph](https://github.com/StephenButtolph)), Michael Kaplan ([@michaelkaplan13](https://github.com/michaelkaplan13)) |
| **Status** | Activated ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/228)) |
| **Track** | Standards |

## Abstract

Proposes replacing the current block production rate limiting mechanism on Avalanche EVM chains with a new mechanism where validators collectively and dynamically determine the minimum time between blocks.

## Motivation

Currently, Avalanche EVM chains employ a mechanism to limit the rate of block production by increasing the "block gas cost" that must be burned if blocks are produced more frequently than the target block rate specified for the chain. The block gas cost is paid by summing the "priority fee" amounts that all transactions included in the block collectively burn. This mechanism has a few notable suboptimal aspects:

1. There is no explicit minimum block delay time. Validators are capable of producing blocks as frequently as they would like by paying the additional fee, and too rapid block production could cause network stability issues.
2. The target block rate can only be changed in a required network upgrade, which makes updates difficult to coordinate and operationalize.
3. The target block rate can only be specified with 1-second granularity, which does not allow for configuring sub-second block times as performance improvements are made to make them feasible.
With the prospect of ACP-194 removing block execution from consensus and allowing for increases to the gas target through the dynamic ACP-176 mechanism, Avalanche EVM chains would be better suited by having a dynamic minimum block delay time denominated in milliseconds. This allows networks to ensure that blocks are never produced more frequently than the minimum block delay, and allows validators to dynamically influence the minimum block delay value by setting their preference.

## Specification

### Block Header Changes

Upon activation of this ACP, the `blockGasCost` field in block headers will be required to be set to 0. This means that no validation of the cumulative priority fee amounts of transactions within the block exceeding the block gas cost is required. Additionally, two new fields are added to EVM block headers: `timestampMilliseconds` and `minimumBlockDelayExcess`.

#### `timestampMilliseconds`

The canonical serialization and interpretation of EVM blocks already contains a block timestamp specified in seconds. Altering this would require deep changes to the EVM codebase, as well as cause breaking changes to tooling such as indexers and block explorers. Instead, a new field is added representing the unix timestamp in milliseconds. Header verification should verify that `block.timestamp` (in seconds) is aligned with `block.timestampMilliseconds`; more precisely: `block.timestampMilliseconds / 1000 == block.timestamp`. Existing tools that do not need millisecond granularity do not need to parse the new field, which limits the number of breaking changes. The `timestampMilliseconds` field is represented in block headers as a `uint64`.

#### `minimumBlockDelayExcess`

The new `minimumBlockDelayExcess` field in the block header is used to derive the minimum number of milliseconds that must pass before the next block is allowed to be accepted.
Specifically, if block $B$ has a `minimumBlockDelayExcess` of $q$, then the effective timestamp of block $B+1$ in milliseconds must be at least $M \cdot e^{\frac{q}{D}}$ greater than the effective timestamp of block $B$ in milliseconds. $M$, $q$, and $D$ are defined below in the mechanism specification. The `minimumBlockDelayExcess` field is represented in block headers as a `uint64`. The value of `minimumBlockDelayExcess` can be updated in each block, similar to the gas target excess field introduced in ACP-176. The mechanism is specified below.

### Dynamic `minimumBlockDelay` mechanism

The `minimumBlockDelay` can be defined as:

$$m = M \cdot e^{\frac{q}{D}}$$

Where:

- $M$ is the global minimum `minimumBlockDelay` value in milliseconds
- $q$ is a non-negative integer that is initialized upon the activation of this mechanism, referred to as the `minimumBlockDelayExcess`
- $D$ is a constant that helps control the rate of change of `minimumBlockDelay`

After the execution of transactions in block $b$, the value of $q$ can be increased or decreased by up to $Q$. It must be the case that $\left|\Delta q\right| \leq Q$, or block $b$ is considered invalid. The amount by which $q$ changes after executing block $b$ is specified by the block builder.

Block builders (i.e., validators) may set their desired `minimumBlockDelay` value, $M_{desired}$, in their configuration, and their desired value for $q$ can then be calculated as:

$$q_{desired} = D \cdot \ln\left(\frac{M_{desired}}{M}\right)$$

Note that since $q_{desired}$ is only used locally and can be different for each node, it is safe for implementations to approximate the value of $\ln\left(\frac{M_{desired}}{M}\right)$ and round the resulting value to the nearest integer.
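A node's preference handling under those definitions might be sketched as follows. This is illustrative only: the constants are the C-Chain activation values given later in this ACP, and the rounding behavior is an implementation choice.

```python
import math

D = 2**20   # minimumBlockDelay update constant (C-Chain value)
M = 1       # global minimum minimumBlockDelay, in milliseconds (C-Chain value)

def desired_excess(desired_min_block_delay_ms: float) -> int:
    # q_desired = D * ln(M_desired / M); this is a local, per-node
    # preference, so rounding to the nearest integer is acceptable.
    return round(D * math.log(desired_min_block_delay_ms / M))

def minimum_block_delay_ms(q: int) -> float:
    # m = M * e^(q / D)
    return M * math.exp(q / D)
```

For a desired 2-second block delay, `desired_excess(2000)` recovers the C-Chain's activation value of $q$ listed in the parameters below.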
Alternatively, client implementations can choose to use binary search to find the closest integer solution, as `coreth` [does to calculate a node's desired target excess](https://github.com/ava-labs/coreth/blob/ebaa8e028a3a8747d11e6822088b4af7863451d8/plugin/evm/upgrade/acp176/acp176.go#L170). When building a block, builders can calculate their next preferred value for $q$ based on the network's current value (`q_current`) according to:

```python
# Calculates a node's new desired value for q for a given block
def calc_next_q(q_current: int, q_desired: int, max_change: int) -> int:
    if q_desired > q_current:
        return q_current + min(q_desired - q_current, max_change)
    else:
        return q_current - min(q_current - q_desired, max_change)
```

As $q$ is updated after the execution of transactions within the block, $m$ is also updated such that $m = M \cdot e^{\frac{q}{D}}$ at all times. As noted above, the change to $m$ only takes effect for subsequent block production, and cannot change the time at which block $b$ itself can be produced.

### Gas Accounting Updates

Currently, the amount of gas capacity available is only incremented on a per-second basis, as defined by ACP-176. With this ACP, chains are expected to be able to have sub-second block times. However, when a chain's gas capacity is fully consumed (i.e., during periods of heavy transaction load), blocks would not be able to be produced at sub-second intervals because at least one second would need to elapse for new gas capacity to be added. To correct this, upon activation of this ACP, gas capacity is added on a per-millisecond basis. The ACP-176 mechanism for determining the target gas consumption per second remains unchanged, but its result is now used to derive the target gas consumption per millisecond by dividing by 1000, and gas capacity is added at that rate as each block advances time by some number of milliseconds.
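The accounting change above can be sketched as follows; the function name and the integer rounding shown are assumptions for illustration, not the normative rule.

```python
def capacity_added(target_gas_per_second: int, elapsed_ms: int) -> int:
    # The ACP-176 per-second gas target is divided by 1000 to obtain a
    # per-millisecond rate, and capacity accrues as block time advances.
    return target_gas_per_second * elapsed_ms // 1000
```

For example, a block arriving 250 ms after its parent on a chain targeting 15,000,000 gas/second replenishes 3,750,000 gas of capacity, so heavily loaded chains can still produce blocks at sub-second intervals.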
### Activation Parameters for the C-Chain Parameters at activation on the C-Chain are:
| Parameter | Description | C-Chain Configuration |
| - | - | - |
| $M$ | minimum `minimumBlockDelay` value | 1 millisecond |
| $q$ | initial `minimumBlockDelayExcess` | 7,970,124 |
| $D$ | `minimumBlockDelay` update constant | $2^{20}$ |
| $Q$ | `minimumBlockDelay` update factor change limit | 200 |
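These parameter choices can be checked numerically (a quick illustration, not part of the specification):

```python
import math

M = 1           # ms
q = 7_970_124   # initial minimumBlockDelayExcess
D = 2**20       # update constant
Q = 200         # maximum |change in q| per block

# Effective minimumBlockDelay at activation: ~2000 ms, matching the
# C-Chain's prior 2-second target block rate.
m = M * math.exp(q / D)

# Consecutive blocks of maximal change needed to double (or halve) m:
# doubling requires a total change in q of D * ln(2).
blocks_to_double = math.ceil(D * math.log(2) / Q)
```

This yields `m` within a fraction of a millisecond of 2,000 ms and roughly 3,600 blocks of maximal change to halve or double the delay.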
$M$ was chosen as a lower bound for `minimumBlockDelay` values to allow high-performance Avalanche L1s to be able to realize maximum performance and minimal transaction latency. Based on the 1 millisecond value for $M$, $q$ was chosen such that the effective `minimumBlockDelay` value at time of activation is as close as possible to the current target block rate of the C-Chain, which is 2 seconds. $D$ and $Q$ were chosen such that it takes approximately 3,600 consecutive blocks of the maximum allowed change in $q$ for the effective `minimumBlockDelay` value to either halve or double.

### ProposerVM `MinBlkDelay`

The ProposerVM currently enforces a static, configurable `MinBlkDelay` (in seconds) between consecutive blocks. With this ACP enforcing a dynamic minimum block delay time, any EVM instance adopting this ACP that also leverages the ProposerVM should ensure that the ProposerVM `MinBlkDelay` is set to 0.

### Note on Block Building

While there is no longer a requirement for blocks to burn a minimum block gas cost after the activation of this ACP, block builders should still take priority fees into account when building blocks to allow for transaction prioritization and to maximize the amount of native token (AVAX) burned in the block. From a user (transaction issuer) perspective, this means that a non-zero priority fee would only ever need to be set to ensure inclusion during periods of maximum gas utilization.

## Backwards Compatibility

While this proposal requires a network upgrade and updates the EVM block header format, it does so in a way that tries to maintain as much backwards compatibility as possible. Specifically, applications that currently parse and use the existing timestamp field that is denominated in seconds can continue to do so. The `timestampMilliseconds` header value only needs to be used in cases where more granular timestamps are required.
## Reference Implementation This ACP was implemented and merged into Coreth and Subnet-EVM behind the `Granite` upgrade flag. The full implementation can be found in [coreth@v0.15.4-rc.4](https://github.com/ava-labs/coreth/releases/tag/v0.15.4-rc.4) and [subnet-evm@v0.8.0-fuji-rc.0](https://github.com/ava-labs/subnet-evm/releases/tag/v0.8.0-fuji-rc.0). ## Security Considerations Too rapid block production may cause availability issues if validators of the given blockchain are not able to keep up with blocks being proposed to consensus. This new mechanism allows validators to help influence the maximum frequency at which blocks are allowed to be produced, but potential misconfiguration or overly aggressive settings may cause problems for some validators. The mechanism for the minimum block delay time to adapt based on validator preference has already been used previously to allow for dynamic gas targets based on validator preference on the C-Chain, providing more confidence that it is suitable for controlling this network parameter as well. However, because each block is capable of changing the value of the minimum block delay by a certain amount, the lower the minimum block delay is, the more blocks that can be produced in a given time, and the faster the minimum block delay value will be able to change. This creates a dynamic where the mechanism for controlling `minimumBlockDelay` is more reactive at lower values, and less reactive at higher values. The global minimum `minimumBlockDelay` ($M$) provides a lower bound of how quickly blocks can ever be produced, but it is left to validators to ensure that the effective value does not exceed their collective preference. ## Acknowledgments Thanks to [Luigi D'Onorio DeMeo](https://x.com/luigidemeo) for continually bringing up the idea of reducing block times to provide better UX for users of Avalanche blockchains. 
## Copyright Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/). # ACP-23: P Chain Native Transfers (/docs/acps/23-p-chain-native-transfers) --- title: "ACP-23: P Chain Native Transfers" description: "Details for Avalanche Community Proposal 23: P Chain Native Transfers" edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/23-p-chain-native-transfers/README.md --- | ACP | 23 | | :--- | :--- | | **Title** | P-Chain Native Transfers | | **Author(s)** | Dhruba Basu ([@dhrubabasu](https://github.com/dhrubabasu)) | | **Status** | Activated | | **Track** | Standards | ## Abstract Support native transfers on P-chain. This enables users to transfer P-chain assets without leaving the P-chain or using a transaction type that's not meant for native transfers. ## Motivation Currently, the P-chain has no simple transfer transaction type. The X-chain supports this functionality through a `BaseTx`. Although the P-chain contains transaction types that extend `BaseTx`, the `BaseTx` transaction type itself is not a valid transaction. This leads to abnormal implementations of P-chain native transfers like in the AvalancheGo wallet which abuses [`CreateSubnetTx`](https://github.com/ava-labs/avalanchego/blob/v1.10.15/wallet/chain/p/builder.go#L54-L63) to replicate the functionality contained in `BaseTx`. With the growing number of subnets slated for launch on the Avalanche network, simple transfers will be demanded more by users. While there are work-arounds as mentioned before, the network should support it natively to provide a cheaper option for both validators and end-users. ## Specification To support `BaseTx`, Avalanche Network Clients (like AvalancheGo) must register `BaseTx` with the type ID `0x22` in codec version `0x00`. For the specification of the transaction itself, see [here](https://github.com/ava-labs/avalanchego/blob/v1.10.15/vms/platformvm/txs/base_tx.go#L29). 
Note that most other P-chain transactions extend this type; the only change in this ACP is to register it as a valid transaction itself.

## Backwards Compatibility

Adding a new transaction type is an execution change and requires a mandatory upgrade for activation. Implementors must take care to reject this transaction prior to activation. This ACP only details the specification of the added `BaseTx` transaction type.

## Reference Implementation

An implementation of `BaseTx` support was created [here](https://github.com/ava-labs/avalanchego/pull/2232) and subsequently merged into AvalancheGo. Since the "D" Upgrade is not activated, this transaction will be rejected by AvalancheGo. If modifications are made to the specification of the transaction as part of the ACP process, the code must be updated prior to activation.

## Security Considerations

The P-chain has fixed fees, which do not place any limits on chain throughput. A potentially popular transaction type like `BaseTx` may cause periods of high usage. The reference implementation in AvalancheGo sets the transaction fee to 0.001 AVAX as a deterrent (equivalent to `ImportTx` and `ExportTx`). This should be sufficient for the time being, but a dynamic fee mechanism will need to be added to the P-chain in the future to mitigate this security concern. This is not addressed in this ACP as it requires a larger change to the fee dynamics on the P-chain as a whole.

## Open Questions

No open questions.

## Acknowledgements

Thanks to [@StephenButtolph](https://github.com/StephenButtolph) and [@abi87](https://github.com/abi87) for their feedback on the reference implementation.

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ACP-236: Auto Renewed Staking (/docs/acps/236-auto-renewed-staking) --- title: "ACP-236: Auto Renewed Staking" description: "Details for Avalanche Community Proposal 236: Auto Renewed Staking" edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/236-auto-renewed-staking/README.md --- | ACP | 236 | |:--------------|:------------------------------------------------------------| | **Title** | Auto-Renewed Staking | | **Author(s)** | Razvan Angheluta ([@rrazvan1](https://github.com/rrazvan1)) | | **Status** | Proposed ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/244)) | | **Track** | Standards | ## Abstract This proposal introduces auto-renewed staking for validators on the Avalanche P-Chain. Validators can renew their staking position automatically, allowing their stake to compound over time, accruing rewards once per specified cycle. Note that this mechanism applies only to primary network validation. It does not apply to L1 validators or to legacy subnet validators. ## Motivation The current staking system on the Avalanche P-Chain restricts flexibility for stakers by requiring them to specify an explicit end time for their stake and by enforcing minimum and maximum staking durations, limiting their ability to respond to changing market conditions or liquidity needs. Managing a large number of nodes is also challenging, as re-staking at the end of each period is labor-intensive, time-consuming, and poses security risks due to the required transaction signing. Additionally, tokens can remain idle at the end of a staking period until stakers initiate the necessary transactions to stake them again. ## Specification Auto-renewed staking introduces a mechanism that allows validators to remain staked indefinitely, without having to manually renew staking transactions at the end of each period. 
Instead of committing to a fixed end time upfront, validators specify a cycle duration (period) and an `AutoCompoundRewardShares` value when they submit an `AddAutoRenewedValidatorTx`. At the end of each cycle, the validator is automatically restaked for a new cycle. The validator (via `Owner`) may update the auto-renew config at any time during a cycle; such updates take effect only at the end of the current cycle. To stop validating, the validator updates the next cycle's period to `0`; this causes the validator to gracefully exit at the end of the current cycle and unlock their staked funds.

The minimum and maximum cycle lengths follow the same protocol parameters as before (`MinStakeDuration` and `MaxStakeDuration`). Note: On mainnet, the current configuration is `MinStakeDuration = 14 days` and `MaxStakeDuration = 365 days`.

Clarification: In the rewards formula, `StakingPeriod` is the cycle's duration, not the total accumulated time across cycles. Each cycle is treated separately when computing rewards.

Delegator interaction remains unchanged, and the same constraints apply: a delegation period must fit entirely within the validator's cycle. Delegators cannot delegate across multiple cycles, since there is no guarantee that a validator will continue validating after the current cycle. Essentially, it is not possible to delegate with auto-renewal.

Rewards are accrued once per cycle and are managed according to the `AutoCompoundRewardShares` value: the specified portion is restaked and the remainder withdrawn. Auto-renewal only occurs if the validator is eligible for rewards for that cycle. If the validator is not reward-eligible for the cycle, the validator is forced to exit at the end of the cycle, staked funds are unlocked, and accrued rewards are withdrawn.
If the updated stake weight (previous stake + staking rewards + delegation commission rewards) exceeds `MaxStakeLimit`, only the excess above `MaxStakeLimit` is withdrawn and distributed to `ValidatorRewardsOwner` and `DelegatorRewardsOwner`.

Because of the way `RewardValidatorTx` is structured, multiple instances cannot be issued without resulting in identical transaction IDs. To resolve this, a new transaction type has been introduced for both rewarding and stopping auto-renewed validators: `RewardAutoRenewedValidatorTx`. Along with the validator's creation transaction ID, it also includes a timestamp.

Auto-renewed validators follow the existing uptime requirements. The main difference is that uptime is measured separately for each cycle. At the end of every cycle, the validator's uptime during that specific period is evaluated to determine eligibility for rewards. Auto-renewed staking is conditioned on reward eligibility. When a new cycle begins, uptime tracking resets and starts again for the next period.

Note: Submitting an `AddAutoRenewedValidatorTx` immediately followed by a `SetAutoRenewedValidatorConfigTx` that sets the next period to `0` replicates the behavior of the current fixed-period staking system (stake for a single cycle, then gracefully exit).

### Auto-Renew Config

The `Owner` field defines who is authorized to modify the validator's auto-renew config. The auto-renew config defines the validator's end-of-cycle behavior: whether it continues into the next cycle and how rewards are split between restaking and withdrawal. At creation, validators set the auto-renew config: `AutoCompoundRewardShares` and `Period`.

`AutoCompoundRewardShares` specifies, in millionths (percentage * 10,000), what portion of earned rewards should be automatically restaked at the end of each cycle. The remaining portion of the rewards is withdrawn. For example, a value of 300,000 restakes 30% of the rewards and withdraws 70%.
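The reward split can be illustrated as follows (a sketch only; the function name and the integer rounding are assumptions, not specified by this ACP):

```python
def split_rewards(rewards: int, auto_compound_reward_shares: int) -> tuple[int, int]:
    """Split a cycle's rewards into (restaked, withdrawn) amounts,
    where auto_compound_reward_shares is expressed in millionths."""
    restaked = rewards * auto_compound_reward_shares // 1_000_000
    return restaked, rewards - restaked
```

With `AutoCompoundRewardShares = 300_000`, a 100 AVAX reward restakes 30 AVAX and withdraws 70 AVAX.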
`Period` defines the duration of the next validation cycle and can be updated during a cycle, with changes taking effect at cycle end. Stopping is requested by setting the next cycle's `Period` to `0` via `SetAutoRenewedValidatorConfigTx`.

### New P-Chain Transaction Types

The following new transaction types will be introduced to the P-Chain to support this functionality:

#### AddAutoRenewedValidatorTx

```go
type AddAutoRenewedValidatorTx struct {
	// Metadata, inputs and outputs
	BaseTx `serialize:"true"`

	// Node ID of the validator
	ValidatorNodeID ids.NodeID `serialize:"true" json:"nodeID"`

	// [Signer] is the BLS key for this validator.
	Signer signer.Signer `serialize:"true" json:"signer"`

	// Where to send staked tokens when done validating
	StakeOuts []*avax.TransferableOutput `serialize:"true" json:"stake"`

	// Where to send validation rewards when done validating
	ValidatorRewardsOwner fx.Owner `serialize:"true" json:"validationRewardsOwner"`

	// Where to send delegation rewards when done validating
	DelegatorRewardsOwner fx.Owner `serialize:"true" json:"delegationRewardsOwner"`

	// Who is authorized to modify the auto-renew config
	Owner fx.Owner `serialize:"true" json:"owner"`

	// Fee this validator charges delegators as a percentage, times 10,000
	// For example, if this validator has DelegationShares=300,000 then they
	// take 30% of rewards from delegators
	DelegationShares uint32 `serialize:"true" json:"shares"`

	// Weight of this validator used when sampling
	Wght uint64 `serialize:"true" json:"weight"`

	// Percentage of rewards to restake at the end of each cycle, expressed
	// in millionths (percentage * 10,000). Range [0..1_000_000]:
	//   0         = restake principal only; withdraw 100% of rewards
	//   300_000   = restake 30% of rewards; withdraw 70%
	//   1_000_000 = restake 100% of rewards; withdraw 0%
	AutoCompoundRewardShares uint32 `serialize:"true" json:"autoCompoundRewardShares"`

	// Period is the validation cycle duration, in seconds.
	Period uint64 `serialize:"true" json:"period"`
}
```

#### SetAutoRenewedValidatorConfigTx

```go
type SetAutoRenewedValidatorConfigTx struct {
	// Metadata, inputs and outputs
	BaseTx `serialize:"true"`

	// ID of the tx that created the auto-renewed validator.
	TxID ids.ID `serialize:"true" json:"txID"`

	// Authorizes this validator to be updated.
	Auth verify.Verifiable `serialize:"true" json:"auth"`

	// Percentage of rewards to restake at the end of each cycle, expressed
	// in millionths (percentage * 10,000). Range [0..1_000_000]:
	//   0         = restake principal only; withdraw 100% of rewards
	//   300_000   = restake 30% of rewards; withdraw 70%
	//   1_000_000 = restake 100% of rewards; withdraw 0%
	AutoCompoundRewardShares uint32 `serialize:"true" json:"autoCompoundRewardShares"`

	// Period for the next cycle (in seconds). Takes effect at cycle end.
	// If 0, stop at the end of the current cycle and unlock funds.
	Period uint64 `serialize:"true" json:"period"`
}
```

#### RewardAutoRenewedValidatorTx

```go
type RewardAutoRenewedValidatorTx struct {
	// ID of the tx that created the validator being removed/rewarded
	TxID ids.ID `serialize:"true" json:"txID"`

	// End time of the validation cycle.
	Timestamp uint64 `serialize:"true" json:"timestamp"`
}
```

### UTXO Creation

Auto-renewed staking creates UTXOs across different transactions depending on the withdrawal reason:

Attached to `AddAutoRenewedValidatorTx`:

- Initial stake (returned when validator stops)

Attached to `RewardAutoRenewedValidatorTx`:

- Validation/delegatee rewards withdrawn based on `AutoCompoundRewardShares`
- Excess rewards withdrawn when restaking would exceed `MaxValidatorStake`
- Accrued validation/delegatee rewards when the validator stops (gracefully or forced)

## Backwards Compatibility

This change requires a network upgrade to ensure that all validators are able to verify and execute the newly introduced transactions.
## Considerations

Auto-renewed staking makes it easier for users to keep their funds staked longer than with fixed-period staking, since it involves fewer transactions, lower friction, and reduced risks. Greater staking participation leads to stronger overall network security. Validators benefit by not having to manually restart at the end of each cycle, which reduces transaction volume and the risk of network congestion.

However, the uptime risk per cycle slightly increases depending on cycle length and validator performance. For example, missing five days in a one-year cycle will still yield validation rewards, whereas missing five days in a two-week cycle may affect rewards.

## Flow of a Validator with Auto-Renewing

```mermaid
flowchart TD
    B[Validator active]
    B -->|Optional during cycle| C[Issue SetAutoRenewedValidatorConfigTx to update auto-renew config or request stop]
    B --> D[Cycle end reached]
    D --> E[Block builder issues RewardAutoRenewedValidatorTx]
    E --> F[Evaluate uptime and compute cycle rewards]
    F --> G{Stop requested?}
    G -->|Yes| H[Withdraw rewards and unlock principal]
    H --> I[Validator exits]
    G -->|No| J{Eligible for rewards?}
    J -->|No| H
    J -->|Yes| K[Apply auto-renew config and split rewards into restake/withdrawal]
    K --> L{New stake exceeds MaxStakeLimit?}
    L -->|Yes| M[Withdraw excess above MaxStakeLimit]
    L -->|No| N[Start new cycle]
    M --> N
    N --> B
```

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ACP-24: Shanghai Eips (/docs/acps/24-shanghai-eips) --- title: "ACP-24: Shanghai Eips" description: "Details for Avalanche Community Proposal 24: Shanghai Eips" edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/24-shanghai-eips/README.md --- | ACP | 24 | | :--- | :--- | | **Title** | Activate Shanghai EIPs on C-Chain | | **Author(s)** | Darioush Jalali ([@darioush](https://github.com/darioush)) | | **Status** | Activated | | **Track** | Standards | ## Abstract This ACP proposes the adoption of the following EIPs on the Avalanche C-Chain network: - [EIP-3651: Warm COINBASE](https://eips.ethereum.org/EIPS/eip-3651) - [EIP-3855: PUSH0 instruction](https://eips.ethereum.org/EIPS/eip-3855) - [EIP-3860: Limit and meter initcode](https://eips.ethereum.org/EIPS/eip-3860) - [EIP-6049: Deprecate SELFDESTRUCT](https://eips.ethereum.org/EIPS/eip-6049) ## Motivation The listed EIPs were activated on Ethereum mainnet as part of the [Shanghai upgrade](https://github.com/ethereum/execution-specs/blob/master/network-upgrades/mainnet-upgrades/shanghai.md#included-eips). This ACP proposes their activation on the Avalanche C-Chain in the next network upgrade. This maintains compatibility with upstream EVM tooling, infrastructure, and developer experience (e.g., Solidity compiler >= [0.8.20](https://github.com/ethereum/solidity/releases/tag/v0.8.20)). ## Specification & Reference Implementation This ACP proposes the EIPs be adopted as specified in the EIPs themselves. ANCs (Avalanche Network Clients) can adopt the implementation as specified in the [coreth](https://github.com/ava-labs/coreth) repository, which was adopted from the [go-ethereum v1.12.0](https://github.com/ethereum/go-ethereum/releases/tag/v1.12.0) release in this [PR](https://github.com/ava-labs/coreth/pull/277). 
In particular, note the following code:

- [Activation of new opcode and dynamic gas calculations](https://github.com/ava-labs/coreth/blob/bf2051729c7aa0c4ed8848ad3a78e241a791b968/core/vm/jump_table.go#L92)
- [EIP-3860 intrinsic gas calculations](https://github.com/ava-labs/coreth/blob/bf2051729c7aa0c4ed8848ad3a78e241a791b968/core/state_transition.go#L112-L113)
- [EIP-3651 warm coinbase](https://github.com/ava-labs/coreth/blob/bf2051729c7aa0c4ed8848ad3a78e241a791b968/core/state/statedb.go#L1197-L1199)
- Note EIP-6049 marks SELFDESTRUCT as deprecated, but does not remove it. The implementation in coreth is unchanged.

## Backwards Compatibility

The following backward compatibility considerations were highlighted by the original EIP authors:

- [EIP-3855](https://eips.ethereum.org/EIPS/eip-3855#backwards-compatibility): "... introduces a new opcode which did not exist previously. Already deployed contracts using this opcode could change their behaviour after this EIP".
- [EIP-3860](https://eips.ethereum.org/EIPS/eip-3860#backwards-compatibility): "Already deployed contracts should not be effected, but certain transactions (with initcode beyond the proposed limit) would still be includable in a block, but result in an exceptional abort."

Adoption of this ACP modifies consensus rules for the C-Chain, therefore it requires a network upgrade.

## Security Considerations

Refer to the original EIPs for security considerations:

- [EIP 3855](https://eips.ethereum.org/EIPS/eip-3855#security-considerations)
- [EIP 3860](https://eips.ethereum.org/EIPS/eip-3860#security-considerations)

## Open Questions

No open questions.

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ACP-247: Delegation Multiplier Increase (/docs/acps/247-delegation-multiplier-increase)

---
title: "ACP-247: Delegation Multiplier Increase"
description: "Details for Avalanche Community Proposal 247: Delegation Multiplier Increase"
edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/247-delegation-multiplier-increase/README.md
---

| ACP | 247 |
| :--- | :--- |
| **Title** | Delegation Multiplier Increase |
| **Authors** | Giacomo Barbieri ([@ijaack94](https://x.com/ijaack94)), BENQI ([@benqifinance](https://x.com/benqifinance)) |
| **Status** | Implementable ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/248)) |
| **Track** | Standards |

## Abstract

This Avalanche Community Proposal advocates for one targeted adjustment to Primary Network validator staking parameters: increasing the delegation multiplier from 4x to **24x** to enable validators to efficiently serve larger delegated bases within the maximum validator weight constraint. This change maintains the 2,000 AVAX minimum validator stake and focuses on improving capital efficiency for existing, well-capitalized validators rather than broadening participation.

## Motivation

### Current Capital Efficiency Problem

The Avalanche Primary Network currently limits validators to a **4x delegation multiplier**, creating suboptimal infrastructure utilization. A validator with 2,000 AVAX self-stake can accept only 8,000 AVAX in delegations (4x multiplier), yielding a total weight of 10,000 AVAX.
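A minimal sketch of this weight arithmetic in Go, using the 3,000,000 AVAX maximum validator weight that appears in the Specification's calculation formula; the function name is illustrative and not part of any client implementation:

```go
package main

import "fmt"

// maxDelegatorStake mirrors the proposal's formula:
// MaxDelegatorStake = Min(ValidatorStake * multiplier, MaxValidatorWeight - ValidatorStake)
func maxDelegatorStake(validatorStake, multiplier, maxValidatorWeight uint64) uint64 {
	byMultiplier := validatorStake * multiplier
	byWeightCap := maxValidatorWeight - validatorStake
	if byMultiplier < byWeightCap {
		return byMultiplier
	}
	return byWeightCap
}

func main() {
	const maxValidatorWeight = 3_000_000 // AVAX

	// A 2,000 AVAX validator under the current 4x multiplier.
	fmt.Println(maxDelegatorStake(2_000, 4, maxValidatorWeight)) // 8000

	// The same validator under the proposed 24x multiplier.
	fmt.Println(maxDelegatorStake(2_000, 24, maxValidatorWeight)) // 48000

	// Large validators hit the weight cap before the multiplier.
	fmt.Println(maxDelegatorStake(200_000, 24, maxValidatorWeight)) // 2800000
}
```

The last call illustrates why only validators above 120,000 AVAX self-stake cannot use the full 24x multiplier: their total weight reaches the cap first.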
**Economic Reality at Current 4x Multiplier** (at $20 AVAX, 8.25% APY, 5% delegation fee):

| Metric | Value |
|--------|-------|
| Self-stake | 2,000 AVAX |
| Max delegations | 8,000 AVAX |
| Total weight | 10,000 AVAX |
| Monthly rewards | $1,375 |
| Validator self-share (20% of weight) | $275 |
| Delegation fee income (5%) | $55 |
| **Total validator monthly income** | **$330** |
| **Monthly operational costs** | **$150** |
| **Net monthly profit** | **$180** |

This marginal profitability (~55% margin) creates structural problems:

- Many validators operate below capacity (not 8,000 AVAX fully delegated)
- Some validators are forced to run multiple nodes to achieve sufficient scale
- Infrastructure investment is heavily underutilized
- Validator economics are fragile, attracting only well-funded entities
- Sensitive to delegation fee pressure (any reduction below 5% hurts profitability)

### Real Network Evidence: Validator Underutilization

Real validator distribution data (as of October 28, 2025) provides empirical evidence of the problem:

| Metric | Count | Percentage |
|--------|-------|-----------|
| **Total validators** | 854 | 100% |
| **Validators with ZERO delegations** | 451 | **52.8%** |
| **Validators with <1k AVAX delegated** | 501 | **58.7%** |
| **Validators with 100+ delegations** | 105 | 12.3% |
| **Validators with 1-5 delegations** | 138 | 16.2% |

**Critical Finding**: Over half of all validators have zero delegations despite having the technical capacity for up to 8,000 AVAX under the current 4x multiplier. This reveals:

1. **Validators put little effort into attracting delegations** because the current economics do not support a viable business
2. **Delegation is highly concentrated** among only ~100 top-performing validators
3. **Current system severely underutilizes validator infrastructure** across the network
4.
**Small validators (2k-5k AVAX self-stake)** cannot compete for delegations

**Validator Self-Stake Distribution**:

- 447 validators (52.3%) have only 2k-5k AVAX self-stake
- Most of these are probably unprofitable or barely breaking even
- They cannot attract delegations at current economics
- Our profitability calculations ($180/month) are consistent with this behavior

### Proposed Solution

Implement a single targeted change:

**Increase the delegation multiplier to 24x** - This increases the maximum _potential_ rewards that validators can receive from delegations; it does not change the underlying incentives of delegation, only the ceiling on achievable validator rewards on Avalanche.

**Resulting Economics at 24x Multiplier** (at $20 AVAX, 2,000 AVAX self-stake, 8.25% APY, 5% delegation fee):

| Metric | Current (4x) | Proposed (24x) | Change |
|--------|-------------|----------------|--------|
| Max delegations | 8,000 AVAX | 48,000 AVAX | +500% |
| Total weight | 10,000 AVAX | 50,000 AVAX | +400% |
| Monthly rewards total | $1,375 | $6,875 | +400% |
| Validator income | $330 | $605 | +83% |
| Net monthly profit | $180 | $455 | +153% |
| Profit margin | 55% | 75% | +20 pp |

This transformation:

- **Makes validators economically viable** at 50%+ delegation capacity
- **Eliminates need for multi-node operations** for profitability
- **Unlocks dormant validator capacity** (current 52.8% with zero delegations)
- **Enables sustainable single-node operations** at scale

## Specification

### Technical Changes

**Current Parameter**:

```
DelegationMultiplier = 4
```

**Proposed Parameter**:

```
DelegationMultiplier = 24
```

**Calculation Formula**:

```
MaxDelegatorStake(validator) = Min(ValidatorStake * 24, MaxValidatorWeight - ValidatorStake)

Example for 2,000 AVAX validator:
MaxDelegatorStake = Min(2,000 * 24, 3,000,000 - 2,000)
MaxDelegatorStake = Min(48,000, 998,000)
MaxDelegatorStake = 48,000 AVAX
```

**Implementation Details**:

- Applies uniformly to all validators
- Only validators
with 120,000+ AVAX self-stake cannot utilize the full 24x due to the 3M weight cap
- All 2,000 AVAX validators can deploy the full 24x multiplier
- Non-breaking for existing delegation relationships

#### Unchanged Parameters

The following parameters remain **UNCHANGED**:

| Parameter | Value | Notes |
|-----------|-------|-------|
| Minimum validator stake | 2,000 AVAX | No reduction |
| Minimum delegator stake | 25 AVAX | Unchanged |
| Minimum staking duration | 2 weeks | Unchanged |
| Maximum staking duration | 1 year | Unchanged |
| Uptime requirement | 80% | Unchanged |
| Slashing penalty | None | Unchanged |
| Reward formula | Same | APY calculations unchanged |

## Economic Impact Analysis

### Validator Profitability Scenarios

All scenarios calculated at:

- **AVAX Price**: $20 USD
- **Annual APY**: 8.25% (average, varies by network conditions)
- **Delegation Fee**: 5% (fee validator charges on delegator rewards)
- **Monthly operational costs**: $150 (cloud hosting, bandwidth, monitoring)

#### Scenario 1: Solo Validator (No Delegation)

| Metric | Current (4x) | Proposed (24x) |
|--------|-------------|----------------|
| Self-stake | 2,000 AVAX | 2,000 AVAX |
| Delegations | 0 | 0 |
| Total weight | 2,000 AVAX | 2,000 AVAX |
| Monthly rewards | $275 | $275 |
| Monthly costs | $150 | $150 |
| **Net profit** | **$125** | **$125** |

**Conclusion**: Solo operation is equally profitable in both cases; validators must attract delegations for profitability.

#### Scenario 2: 50% Delegation Capacity

| Metric | Current (4x) | Proposed (24x) |
|--------|-------------|----------------|
| Self-stake | 2,000 AVAX | 2,000 AVAX |
| Delegations | 4,000 AVAX | 24,000 AVAX |
| Total weight | 6,000 AVAX | 26,000 AVAX |
| Monthly rewards | $825 | $3,575 |
| Validator income | $302.50 | $440 |
| Monthly costs | $150 | $150 |
| **Net profit** | **$152.50** | **$290** |

**Conclusion**: The 24x multiplier increases profitability by **90.2%** at 50% delegation capacity.
Validators reach break-even and profitability much more easily.

**Real Network Impact**: Current data shows 52.8% of validators (451) have ZERO delegations. At 50% capacity with 24x, these validators become economically viable, incentivizing them to accept delegations.

#### Scenario 3: Full Delegation Capacity (100%)

| Metric | Current (4x) | Proposed (24x) |
|--------|-------------|----------------|
| Self-stake | 2,000 AVAX | 2,000 AVAX |
| Delegations | 8,000 AVAX | 48,000 AVAX |
| Total weight | 10,000 AVAX | 50,000 AVAX |
| Monthly rewards | $1,375 | $6,875 |
| Validator self-share | $275 | $275 |
| Delegation fee income (5%) | $55 | $330 |
| Total validator income | $330 | $605 |
| Monthly costs | $150 | $150 |
| **Net profit** | **$180** | **$455** |

**Conclusion**: The 24x multiplier increases profitability by **+152.8%** at full delegation, transforming validators from barely-profitable to highly profitable operations.

### Sensitivity Analysis: Impact of Different Delegation Fees

**With 50% delegation capacity filled:**

| Delegation Fee | Current (4x) | Proposed (24x) | Multiplier Advantage |
|---|---|---|---|
| 2% (minimum) | $89.38 | $185 | 2.07x |
| 3% | $96.88 | $245 | 2.53x |
| 5% (assumed) | **$152.50** | **$290** | 1.90x |
| 7% | $208.13 | $335 | 1.61x |
| 10% (high) | $271.88 | $425 | 1.56x |

**Key insight**: The 24x multiplier provides 1.5-2.5x better economics across all fee rates. The current model is highly sensitive to fee compression; the 24x multiplier provides a financial buffer.
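The scenario arithmetic above can be reproduced with a short calculation. This is an illustrative sketch under the stated assumptions ($20 AVAX, 8.25% APY accrued linearly, a 5% fee on delegator rewards, $150/month costs); the function name is ours, not part of any client implementation:

```go
package main

import "fmt"

// netMonthlyProfitUSD estimates a validator's net monthly profit: the
// validator keeps all rewards on its self-stake plus a fee cut of the
// rewards earned by its delegators, minus fixed operating costs.
func netMonthlyProfitUSD(selfStake, delegations, priceUSD, apy, delegationFee, monthlyCostsUSD float64) float64 {
	monthlyRate := apy / 12
	selfRewards := selfStake * monthlyRate * priceUSD
	delegatorRewards := delegations * monthlyRate * priceUSD
	income := selfRewards + delegatorRewards*delegationFee
	return income - monthlyCostsUSD
}

func main() {
	// Shared assumptions from the proposal's scenario tables.
	const price, apy, fee, costs = 20.0, 0.0825, 0.05, 150.0

	fmt.Printf("4x,  full: $%.2f\n", netMonthlyProfitUSD(2_000, 8_000, price, apy, fee, costs))  // $180.00
	fmt.Printf("24x, full: $%.2f\n", netMonthlyProfitUSD(2_000, 48_000, price, apy, fee, costs)) // $455.00
	fmt.Printf("4x,  50%%: $%.2f\n", netMonthlyProfitUSD(2_000, 4_000, price, apy, fee, costs))  // $152.50
	fmt.Printf("24x, 50%%: $%.2f\n", netMonthlyProfitUSD(2_000, 24_000, price, apy, fee, costs)) // $290.00
}
```

These figures match the scenario tables above; the fee-sensitivity table uses its own internal model and is not reproduced by this sketch.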
### Fee Market Implications

**Current 4x Model**:

- Validators barely profitable even at 5% fees
- Significant pressure to accept low fees to attract delegations
- Unsustainable economics push toward multi-node operations
- 52.8% of validators choose NOT to accept delegations (uneconomical)

**Proposed 24x Model**:

- Validators achieve healthy profits at standard 5% fees
- Can sustain competitive fee discovery without margin pressure
- Enables single-node operations at scale
- Market-based fee competition works as intended
- Expected outcome: More validators accept delegations, earning fees

## Network Impact Analysis

### P-Chain Load Considerations

The 24x multiplier change has **minimal impact** on P-Chain resources:

| Resource | Current State | Projected (24x) | Change |
|----------|---|---|---|
| Validator registry size | ~854 validators | ~854 validators | No change |
| Delegation transactions | Existing pattern | More delegations expected | Variable |
| BLS signature aggregation | Baseline | Baseline | Minimal |
| Storage requirements | Baseline | Baseline | Negligible |

**Why minimal impact?**

- Validator count unchanged (no new validators)
- Each validator has same duty requirements
- Delegations already tracked; multiplier doesn't add overhead
- Maximum weight cap prevents runaway growth

### Consensus Security

**Sybil Resistance**: Unchanged

- 2,000 AVAX minimum stake maintained
- Economic cost to create malicious validator unchanged

## Consequences Analysis

### Positive Consequences

**1. Validator Capital Efficiency Improves** (Very High probability, High severity)

- Single validators serve 6x larger delegated bases (8K -> 48K AVAX)
- Infrastructure investment better utilized
- Eliminates need for multiple-node operations

**2.
Validator Profitability Increases Substantially** (Very High probability, High severity)

- Fully delegated validators earn 152.8% more ($180 -> $455/month)
- At 50% delegation: +90.2% improvement
- Makes validator operations genuinely profitable
- Attracts professional infrastructure operators

**3. Unlocks Dormant Validator Capacity** (Very High probability, High severity)

- 52.8% of validators (451) currently have zero delegations
- Improved economics incentivize these validators to accept delegations
- Converts uneconomical validators into active, capital-efficient operations
- Significantly increases network capital deployment

**4. Fee Market Stabilizes** (High probability, Medium severity)

- Validators can maintain 5%+ fees without margin pressure
- Sustainable economics support long-term operations
- Delegators benefit from stable, profitable validators
- Competitive fee discovery enabled

**5. Minimal Network Disruption** (Very High probability, Medium severity)

- No new validator cohort entering (no min stake change)
- Validator count unchanged (~854)
- P-Chain load impact minimal
- Easy implementation (single parameter change)

### Negative Consequences

**1. Geographic Diversity Unchanged** (High probability, Low severity)

- Entry barrier remains $40,000 (unchanged)
- Validator count unlikely to increase (~854 baseline)
- Cloud provider concentration persists

**Mitigation**: This proposal optimizes efficiency only; separate initiatives address diversity.

**2. Fee Market Competition May Decrease** (Medium probability, Low severity)

- Fewer new validators entering (no min stake reduction)
- Existing validators less pressured on fees
- Delegators have same validator options

**Mitigation**: 24x efficiency enables competitive fee discovery without external pressure.

**3.
Network Accessibility Unchanged** (Very High probability, Low severity)

- $40,000 entry barrier maintained
- Small token holders cannot become validators
- No improvement for less wealthy participants

**Mitigation**: This proposal optimizes efficiency, not accessibility. Different ACPs can address participation barriers.

## Backwards Compatibility

### Breaking Changes: None

This proposal maintains full backwards compatibility:

1. **Existing Validators**: All continue operating normally
2. **Existing Delegations**: No modifications required
3. **Reward Calculations**: Formula unchanged
4. **Staking Durations**: All parameters unchanged
5. **Uptime Requirements**: 80% threshold unchanged

## Open Questions

**Core Question**: "Should Avalanche prioritize validator capital efficiency through multiplier increase and weight cap reduction?"

**Supporting Questions**:

1. Is 24x multiplier appropriate?
2. Should minimum stake remain at 2,000 AVAX?
   - YES: Maintain current barrier (approved)
   - NO: Lower it (pursue separate ACP)

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

# ACP-25: Vm Application Errors (/docs/acps/25-vm-application-errors)

---
title: "ACP-25: Vm Application Errors"
description: "Details for Avalanche Community Proposal 25: Vm Application Errors"
edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/25-vm-application-errors/README.md
---

| ACP | 25 |
| :--- | :--- |
| **Title** | Virtual Machine Application Errors |
| **Author(s)** | Joshua Kim ([@joshua-kim](https://github.com/joshua-kim)) |
| **Status** | Activated |
| **Track** | Standards |

## Abstract

Support a way for a Virtual Machine (VM) to signal application-defined error conditions to another VM.

## Motivation

VMs are able to build their own peer-to-peer application protocols using the `AppRequest`, `AppResponse`, and `AppGossip` primitives.
`AppRequest` is a message type that requires a corresponding `AppResponse` to indicate a successful response. In the unhappy path where an `AppRequest` cannot be served, there is currently no native way for a peer to signal an error condition. VMs currently resort to timeouts in failure cases, where a client making a request falls back to marking its request as failed after some timeout period has expired.

Having a native application error type would offer a more powerful abstraction where Avalanche nodes would be able to score peers based on perceived errors. This is not currently possible because Avalanche networking isn't aware of the specific implementation details of the messages being delivered to VMs. A native application error type would also guarantee that all clients can expect an `AppError` message to unblock an unsuccessful `AppRequest` and only rely on a timeout when absolutely necessary, significantly decreasing the latency for a client to unblock its request in the unhappy path.

## Specification

### Message

This modifies the p2p specification by introducing a new [protobuf](https://protobuf.dev/) message type:

```
message AppError {
  bytes chain_id = 1;
  uint32 request_id = 2;
  sint32 error_code = 3;
  string error_message = 4;
}
```

1. `chain_id`: Reserves field 1. Senders **must** use the same chain id from the original `AppRequest` this `AppError` message is being sent in response to.
2. `request_id`: Reserves field 2. Senders **must** use the same request id from the original `AppRequest` this `AppError` message is being sent in response to.
3. `error_code`: Reserves field 3. Application-defined error code. Implementations _should_ use the same error codes for the same conditions to allow clients to match on errors. Negative error codes are reserved for protocol-defined errors. VMs may reserve any error code greater than zero.
4. `error_message`: Reserves field 4.
Application-defined, human-readable error message that _should not_ be used for error matching. For error matching, use `error_code`.

### Reserved Errors

The following error codes are currently reserved by the Avalanche protocol:

| Error Code | Description |
| ---------- | --------------- |
| 0 | undefined |
| -1 | network timeout |

### Handling

Clients **must** respond to an inbound `AppRequest` message with either a corresponding `AppResponse` to indicate a successful response, or an `AppError` to indicate an error condition, by the requested `deadline` in the original `AppRequest`.

## Backwards Compatibility

This new message type requires a network activation, after which either an `AppResponse` or an `AppError` is a required response to an `AppRequest`.

## Reference Implementation

- Message definition: https://github.com/ava-labs/avalanchego/pull/2111
- Handling: https://github.com/ava-labs/avalanchego/pull/2248

## Security Considerations

Clients should be aware that peers can arbitrarily send `AppError` messages to invoke error handling logic in a VM.

## Open Questions

No open questions.

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ACP-256: Hardware Recommendations (/docs/acps/256-hardware-recommendations)

---
title: "ACP-256: Hardware Recommendations"
description: "Details for Avalanche Community Proposal 256: Hardware Recommendations"
edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/256-hardware-recommendations/README.md
---

| ACP | 256 |
| :- | :- |
| **Title** | Update Hardware Requirements for Primary Network Nodes |
| **Author(s)** | Aaron Buchwald ([@aaronbuchwald](https://github.com/aaronbuchwald)), Martin Eckardt ([@martineckardt](https://github.com/martineckardt)), Meaghan FitzGerald ([@meaghanfitzgerald](https://github.com/meaghanfitzgerald)) |
| **Status** | Proposed ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/257)) |
| **Track** | Best Practices |

## Abstract

This ACP updates the recommended minimum hardware for Avalanche Primary Network nodes.

**Call to Action**: Move Primary Network node storage to a physically-mounted (local) NVMe SSD.

_The majority of node operators currently fulfill these requirements. All other operators are encouraged to update by January 17th, 2026._

Avalanche L1 nodes: No change required.

## Motivation

EVM throughput on Avalanche is primarily limited by state access latency. Lower-latency storage enables higher C-Chain throughput (gas/sec). This ACP standardizes Primary Network node storage guidance on local NVMe, consistent with common practice across major L1 blockchains.
## Specification

### Updated Minimum Recommended Hardware for Primary Network Nodes

- CPU: Equivalent of 8 AWS vCPU
- RAM: 16 GB
- Storage Size: 1 TB
  - Nodes storing large historical/archival state, or running custom configs, may require up to ~15 TB
- Storage Type: Physically-mounted (local) NVMe SSD (new)
- Network: Reliable IPv4 or IPv6 connectivity, with an open public port

### Migration (Recommended Path)

If a node does not meet the storage requirement, the operator should migrate to a freshly state-synced node (zero downtime; can be done during a validation period): [https://build.avax.network/docs/nodes/node-storage/periodic-state-sync](https://build.avax.network/docs/nodes/node-storage/periodic-state-sync).

Validators that cannot update to low-latency storage can periodically [manually delete state-sync snapshots](https://build.avax.network/docs/nodes/node-storage/state-sync-snapshot-deletion).

## Validator Storage Exception and Required State Management

Validators may use higher-latency storage (e.g., network-attached NVMe / SSD) only if stored state is kept to < 500 GB.

### How to meet the < 500 GB requirement

- Periodically replace the node with a freshly state-synced node: [https://build.avax.network/docs/nodes/node-storage/periodic-state-sync](https://build.avax.network/docs/nodes/node-storage/periodic-state-sync)
- Or manually delete state sync snapshots: [https://build.avax.network/docs/nodes/node-storage/state-sync-snapshot-deletion](https://build.avax.network/docs/nodes/node-storage/state-sync-snapshot-deletion)

Non-validator nodes (RPC, archival, history-heavy): Use local NVMe. If you need historical access, state management alone is not a substitute for low-latency disks.
### Cloud Service Provider Guidance

- AWS: Instance Store NVMe (i3 / i4i / i4g) recommended; EBS gp3 acceptable only for validators with state management
- Azure: Lsv3-series (local NVMe) recommended; Premium SSD Managed Disks (P30+) acceptable only for validators with state management
- Google Cloud: Local SSD (NVMe) recommended; Persistent SSD acceptable only for validators with state management

Note: These offerings are subject to change at the discretion of the CSP. It is imperative that all operators independently confirm that their cloud instance offering is, in fact, locally-mounted NVMe.

## Background

State access latency is driven by:

1. Disk latency: Network-attached disks add latency from network/protocol overhead.
2. Stored state size: Larger state increases storage pressure; needs vary by node type.

Reducing either one enables the node to process higher throughput. The second option is not available to nodes that require historical state.

## Cost Considerations

- Validators: Network-attached storage can be a cost compromise only with disciplined state management.
- Archival / history-heavy: Treat local NVMe as mandatory for functional performance.

## Backwards Compatibility

This ACP introduces no protocol changes. All recommendations are compatible with current and historical versions of AvalancheGo. Existing configurations will continue to function, but higher-latency storage increases the risk of falling behind during load spikes.

## Security Considerations

1. Under-resourced validators may become unresponsive during stress, reducing effective participation.
2. State management procedures require operational care (and sometimes downtime).
3. Cloud storage failure modes exist for both ephemeral local disks and network-attached volumes; use monitoring, protections, and recovery plans.
# ACP-267: Uptime Requirement Increase (/docs/acps/267-uptime-requirement-increase)

---
title: "ACP-267: Uptime Requirement Increase"
description: "Details for Avalanche Community Proposal 267: Uptime Requirement Increase"
edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/267-uptime-requirement-increase/README.md
---

| ACP | 267 |
| :--- | :--- |
| **Title** | Increase Validator Uptime Requirement from 80% to 90% |
| **Author(s)** | Martin Eckardt ([@martineckardt](https://github.com/martineckardt)) |
| **Status** | Proposed ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/268)) |
| **Track** | Best Practices |

## Abstract

This proposal increases the minimum uptime requirement for Avalanche Primary Network validators from 80% to 90%. Validators must maintain at least 90% uptime during their staking period to receive their validation rewards. This change aims to enhance overall network health, reliability, and performance by ensuring validators meet a higher standard of availability.

## Motivation

### Current State

The Avalanche Primary Network currently requires validators to maintain a minimum of 80% uptime during their staking period to be eligible for staking rewards. While this threshold has served the network adequately, there are compelling reasons to raise the bar.

### Need for Enhanced Network Reliability

Higher validator uptime is essential for achieving further performance gains across the Avalanche Primary Network. Even a small number of validators operating at 80% uptime can cause substantial network degradation, including increased latency in API endpoints and delayed block finalization. When the Snowman consensus protocol queries validators about a block, encountering too many non-responsive nodes among the sampled set causes the query to fail. This forces the protocol to issue additional queries, delaying block agreement and reducing overall network throughput.
As a result, sustained validator availability is critical to ensure the network consistently processes transactions at optimal speed.

## Specification

### Technical Changes

Update the validator uptime requirement from 80% to 90% by changing the following in `MainnetParams` (defined in [`genesis/genesis_mainnet.go`](https://github.com/ava-labs/avalanchego/blob/master/genesis/genesis_mainnet.go)):

```go
StakingConfig: StakingConfig{
    UptimeRequirement: .9, // 90%
}
```

The same change would be applied to `genesis/genesis_fuji.go` and `genesis/genesis_local.go` for consistency across all networks.

### Implementation Details

The proposed change only raises the validator uptime requirement to 90%, with the uptime calculation method remaining unchanged. Validators are measured by their observed responsiveness during the staking period. Validators and their delegators will only receive rewards if the validator achieves at least 90% uptime; otherwise, they receive no rewards, maintaining the current all-or-nothing reward model with no partial payouts. For example, over a 2-week staking period the 90% threshold permits roughly 33.6 hours of cumulative downtime, compared with 67.2 hours under the current 80% requirement.

## Backwards Compatibility

Each node continuously tracks its perceived uptime of its peers throughout the peer's validator staking period. At the end of the peer's validator staking period, each node sets its preference of whether or not to reward the peer based on its perceived uptime. If nodes representing sufficient stake adopt the higher uptime requirement in the middle of ongoing staking periods, active validators would have their total accumulated uptime (from their original start time) compared against the new 90% threshold when their staking period ends. To give validators time to improve their infrastructure if needed, this ACP should be advertised broadly in the community.

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ACP-273: Reduce Minimum Staking Duration (/docs/acps/273-reduce-minimum-staking-duration)

---
title: "ACP-273: Reduce Minimum Staking Duration"
description: "Details for Avalanche Community Proposal 273: Reduce Minimum Staking Duration"
edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/273-reduce-minimum-staking-duration/README.md
---

| ACP | 273 |
| :--- | :--- |
| **Title** | Reduce Minimum Validator Staking Duration |
| **Author(s)** | Eric Lu ([ericlu-avax](https://github.com/ericlu-avax)), Martin Eckardt ([@martineckardt](https://github.com/martineckardt)), Meaghan FitzGerald ([@meaghanfitzgerald](https://github.com/meaghanfitzgerald)), Stephen Buttolph ([@StephenButtolph](https://github.com/StephenButtolph)) |
| **Status** | Proposed ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/274)) |
| **Track** | Standards |

## Abstract

This proposal reduces the minimum validator staking period on Avalanche's Primary Network from 2 weeks (336 hours) to 2 days (48 hours). The change lowers barriers to validator participation while maintaining network security, enabling more flexible staking strategies and improving capital efficiency across the ecosystem.

## Motivation

### Current State

The Avalanche Primary Network currently requires validators and delegators to stake for a minimum of 2 weeks (336 hours) to be eligible for staking rewards. While this ensures validator commitment to network security, it creates friction for participants who require more flexible liquidity access.

### Benefits of a Shorter Minimum Period

1. Increased Validator Participation: A 48-hour minimum removes a significant barrier to entry. Validators who cannot commit capital for 2 weeks can now participate, increasing overall network stake and decentralization.
2. Improved Capital Efficiency: Stakers gain reliable access to liquidity after 48 hours rather than 14 days.
This enables more dynamic capital allocation strategies while maintaining commitment to network security during the staking period.

### Reward Impact Analysis

Using Avalanche's reward formula at the time of writing, the APY difference between 48-hour and 2-week staking periods is approximately 0.04 percentage points (6.08% vs 6.12%). This minimal difference preserves reward incentives while enabling liquidity flexibility.

## Specification

### Technical Changes

Update the minimum staking duration by modifying `MinStakeDuration` in the genesis configuration:

Current:

```go
MinStakeDuration = 336 * time.Hour
```

Proposed:

```go
MinStakeDuration = 48 * time.Hour
```

### Implementation Details

The staking mechanism remains unchanged. Validators must still meet all existing requirements:

- Minimum stake: 2,000 AVAX for Primary Network validators
- Uptime requirement: 90% (per ACP-267)
- Hardware requirements: As specified in ACP-256

Only the minimum duration parameter changes, with all other validation rules and reward calculations preserved.

## Backwards Compatibility

This is a non-backwards compatible change to the P-Chain validation and execution requirements. Therefore, it requires a network upgrade in order to be implemented. This change affects only new staking periods initiated after activation. Existing validators with active staking periods remain unaffected and will complete their original durations under the previous rules. The change is otherwise non-breaking: all existing validation infrastructure, reward mechanisms, and consensus operations continue functioning without modification.

## Security Considerations

Shorter staking periods increase the likelihood of significant validator churn over a short period of time. This may impact Primary Network consensus safety and the reliability of Interchain Message delivery, which is required for L1 validator operations. This risk may impact the eventual implementation details of this ACP.
Additionally, if the incentive to stake for longer than 48 hours is insufficient, the vast majority of stake may opt for the minimum 48-hour duration (particularly likely with the addition of [ACP-236](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/236-auto-renewed-staking/README.md), which introduces auto-renewed staking). In this scenario, essentially all entities securing the network could change completely within 48 hours. The entire validator set could disappear, or a low-stake-weight validator could become a high-stake-weight validator within a very short time frame (i.e. a validator could go from 1% network stake to 20% network stake in just a few hours). This instability poses a real cost in terms of network security and should be weighed against the benefits of shorter staking periods. Validators remain subject to the same accountability standards during the 48-hour period. Network consensus sampling assumes the same validator availability model. Historical data demonstrates that validator uptime patterns remain consistent regardless of staking duration length, as infrastructure quality and operational commitment drive uptime, not duration requirements alone. ## Open Questions 1. If the minimum validation period is shortened, should the minimum delegation period also be reduced? At the time of writing, the ratio of `AddPermissionlessDelegatorTx`s to `AddPermissionlessValidatorTx`s over the past 365 days was 100:1, with delegations accounting for 46% of all P-Chain transactions. Assuming all delegations are currently set to the minimum duration (14 days), reducing the minimum to 48 hours would increase the rate of state growth from delegation operations on the P-Chain by approximately 7x (336 hours / 48 hours). Given this potential impact, if the minimum delegation period is reduced, should there be an additional requirement that short-term delegations (e.g., 48 hours) must stake at least 1,000 AVAX? 2. 
Based on the scenario posed in Security Considerations, should the rewards rate differ for shorter vs longer validation periods to incentivize network stability? ## Copyright Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/). # ACP-30: Avalanche Warp X Evm (/docs/acps/30-avalanche-warp-x-evm) --- title: "ACP-30: Avalanche Warp X Evm" description: "Details for Avalanche Community Proposal 30: Avalanche Warp X Evm" edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/30-avalanche-warp-x-evm/README.md --- | ACP | 30 | | :--- | :--- | | **Title** | Integrate Avalanche Warp Messaging into the EVM | | **Author(s)** | Aaron Buchwald ([aaron.buchwald56@gmail.com](mailto:aaron.buchwald56@gmail.com)) | | **Status** | Activated | | **Track** | Standards | ## Abstract Integrate Avalanche Warp Messaging into the C-Chain and Subnet-EVM in order to bring Cross-Subnet Communication to the EVM on Avalanche. ## Motivation Avalanche Subnets enable the creation of independent blockchains within the Avalanche Network. Each Avalanche Subnet registers its validator set on the Avalanche P-Chain, which serves as an effective "membership chain" for the entire Avalanche Ecosystem. By providing read access to the validator set of every Subnet on the Avalanche Network, any Subnet can look up the validator set of any other Subnet within the Avalanche Ecosystem to verify an Avalanche Warp Message, which replaces the need for point-to-point exchange of validator set info between Subnets. This enables a lightweight protocol that allows seamless, on-demand communication between Subnets. 
For more information on the Avalanche Warp Messaging message and payload formats see here: - [AWM Message Format](https://github.com/ava-labs/avalanchego/tree/v1.10.15/vms/platformvm/warp/README.md) - [Payload Format](https://github.com/ava-labs/avalanchego/tree/v1.10.15/vms/platformvm/warp/payload/README.md) This ACP proposes to activate Avalanche Warp Messaging on the C-Chain and offer compatible support in Subnet-EVM to provide the first standard implementation of AWM in production on the Avalanche Network. ## Specification The specification will be broken down into the Solidity interface of the Warp Precompile, a Golang example implementation, the predicate verification, and the proposed gas costs for the Warp Precompile. The Warp Precompile address is `0x0200000000000000000000000000000000000005`. ### Precompile Solidity Interface ```solidity // (c) 2022-2023, Ava Labs, Inc. All rights reserved. // See the file LICENSE for licensing terms. // SPDX-License-Identifier: MIT pragma solidity ^0.8.0; struct WarpMessage { bytes32 sourceChainID; address originSenderAddress; bytes payload; } struct WarpBlockHash { bytes32 sourceChainID; bytes32 blockHash; } interface IWarpMessenger { event SendWarpMessage(address indexed sender, bytes32 indexed messageID, bytes message); // sendWarpMessage emits a request for the subnet to send a warp message from [msg.sender] // with the specified parameters. // This emits a SendWarpMessage log from the precompile. When the corresponding block is accepted // the Accept hook of the Warp precompile is invoked with all accepted logs emitted by the Warp // precompile. // Each validator then adds the UnsignedWarpMessage encoded in the log to the set of messages // it is willing to sign for an off-chain relayer to aggregate Warp signatures. 
function sendWarpMessage(bytes calldata payload) external returns (bytes32 messageID); // getVerifiedWarpMessage parses the pre-verified warp message in the // predicate storage slots as a WarpMessage and returns it to the caller. // If the message exists and passes verification, returns the verified message // and true. // Otherwise, returns false and the empty value for the message. function getVerifiedWarpMessage(uint32 index) external view returns (WarpMessage calldata message, bool valid); // getVerifiedWarpBlockHash parses the pre-verified WarpBlockHash message in the // predicate storage slots as a WarpBlockHash message and returns it to the caller. // If the message exists and passes verification, returns the verified message // and true. // Otherwise, returns false and the empty value for the message. function getVerifiedWarpBlockHash( uint32 index ) external view returns (WarpBlockHash calldata warpBlockHash, bool valid); // getBlockchainID returns the snow.Context BlockchainID of this chain. // This blockchainID is the hash of the transaction that created this blockchain on the P-Chain // and is not related to the Ethereum ChainID. function getBlockchainID() external view returns (bytes32 blockchainID); } ``` ### Warp Predicates and Pre-Verification Signed Avalanche Warp Messages are encoded in the [EIP-2930 Access List](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-2930.md) of a transaction, so that they can be pre-verified before executing the transactions in the block. The access list can specify any number of access tuples: a pair of an address and an array of storage slots in EIP-2930. Warp Predicate verification borrows this functionality to encode signed warp messages according to the serialization format defined [here](https://github.com/ava-labs/subnet-evm/blob/v0.5.9/predicate/Predicate.md). Each Warp specific access tuple included in the access list specifies the Warp Precompile address as the address. 
The first tuple that specifies the Warp Precompile address is considered to be at index 0. Each subsequent access tuple that specifies the Warp Precompile address increases the Warp Message index by 1. Access tuples that specify any other address are not included in calculating the index for a specific warp message. Avalanche Warp Messages are pre-verified (prior to block execution), and pre-verification outputs a bitset for each transaction, in which a set bit indicates that the Avalanche Warp Message at that index failed verification. Throughout the EVM execution, the Warp Precompile checks the status of the resulting bitset to determine whether pre-verified messages are considered valid. This has the additional benefit of encoding the Warp pre-verification results in the block, so that verifying a historical block can use the encoded results instead of needing to access potentially old P-Chain state. The result bitset is encoded in the block according to the predicate result specification [here](https://github.com/ava-labs/subnet-evm/blob/v0.5.9/predicate/Results.md). Each Warp Message in the access list is charged gas to pay for verifying the Warp Message (gas costs are covered below) and is verified with the following steps (see [here](https://github.com/ava-labs/subnet-evm/blob/v0.5.9/x/warp/config.go#L218) for reference implementation): 1. Unpack the predicate bytes 2. Parse the signed Avalanche Warp Message 3. Verify the signature according to the AWM spec in AvalancheGo [here](https://github.com/ava-labs/subnet-evm/blob/v0.5.9/x/warp/config.go#L218) (the quorum numerator/denominator for the C-Chain is 67/100 and is configurable in Subnet-EVM) ### Precompile Implementation All types, events, and function arguments/outputs are encoded using the ABI package according to the official [Solidity ABI Specification](https://docs.soliditylang.org/en/latest/abi-spec.html). 
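As an illustration of the access-list indexing described in the predicate section above, the following standalone Go sketch assigns warp message indices to access tuples. The type and helper names are illustrative, not taken from subnet-evm:

```go
package main

import "fmt"

// warpPrecompileAddress is the reserved address of the Warp Precompile.
const warpPrecompileAddress = "0x0200000000000000000000000000000000000005"

// accessTuple mirrors an EIP-2930 access-list entry: an address plus storage
// slots (for warp, the slots carry the serialized signed message).
type accessTuple struct {
	Address      string
	StorageSlots [][32]byte
}

// warpMessageIndices maps each access-list position that targets the Warp
// Precompile address to its warp message index. The first matching tuple is
// index 0; tuples addressed elsewhere do not advance the index.
func warpMessageIndices(accessList []accessTuple) map[int]int {
	indices := make(map[int]int) // access-list position -> warp index
	next := 0
	for i, tuple := range accessList {
		if tuple.Address == warpPrecompileAddress {
			indices[i] = next
			next++
		}
	}
	return indices
}

func main() {
	list := []accessTuple{
		{Address: warpPrecompileAddress}, // warp index 0
		{Address: "0x00000000000000000000000000000000000000ff"}, // ignored
		{Address: warpPrecompileAddress}, // warp index 1
	}
	fmt.Println(warpMessageIndices(list)) // map[0:0 2:1]
}
```

Note that only tuples addressed to the precompile participate in the numbering, so unrelated access-list entries cannot shift a message's index.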
When the precompile is invoked with a given `calldata` argument, the first four bytes (`calldata[0:4]`) are read as the [function selector](https://docs.soliditylang.org/en/latest/abi-spec.html#function-selector). If it matches the selector of one of the functions defined by the Solidity interface, the contract invokes the corresponding execution function with the remaining calldata, i.e. `calldata[4:]`. For the full specification of the execution functions defined in the Solidity interface, see the reference implementation here: - [sendWarpMessage](https://github.com/ava-labs/subnet-evm/blob/v0.5.9/x/warp/contract.go#L226) - [getVerifiedWarpMessage](https://github.com/ava-labs/subnet-evm/blob/v0.5.9/x/warp/contract.go#L187) - [getVerifiedWarpBlockHash](https://github.com/ava-labs/subnet-evm/blob/v0.5.9/x/warp/contract.go#L145) - [getBlockchainID](https://github.com/ava-labs/subnet-evm/blob/v0.5.9/x/warp/contract.go#L96) ### Gas Costs The Warp Precompile charges gas during the verification of included Avalanche Warp Messages, which is included in the intrinsic gas cost of the transaction, and during the execution of the precompile. #### Verification Gas Costs Pre-verification charges the following costs for each Avalanche Warp Message: - GasCostPerSignatureVerification: 20000 - GasCostPerWarpMessageBytes: 100 - GasCostPerWarpSigner: 500 These numbers were determined experimentally using the benchmarks available [here](https://github.com/ava-labs/subnet-evm/blob/master/x/warp/predicate_test.go#L687) to target approximately the same mgas/s as existing precompile benchmarks in the EVM, which ranges between 50-200 mgas/s. 
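As a rough sketch, the pre-verification charge for a single message combines the three constants above: a flat signature-verification cost, a per-byte cost on the message, and a per-signer cost. The function name is illustrative; see the linked subnet-evm config for the actual implementation:

```go
package main

import "fmt"

// Pre-verification gas constants from the ACP.
const (
	gasCostPerSignatureVerification = 20_000 // flat cost per BLS multi-sig check
	gasCostPerWarpMessageBytes      = 100    // per byte of the signed message
	gasCostPerWarpSigner            = 500    // per public key added to the aggregate
)

// predicateVerificationGas estimates the intrinsic gas charged to pre-verify
// one signed Avalanche Warp Message of the given size and signer count.
func predicateVerificationGas(messageLen, numSigners uint64) uint64 {
	return gasCostPerSignatureVerification +
		gasCostPerWarpMessageBytes*messageLen +
		gasCostPerWarpSigner*numSigners
}

func main() {
	// A 500-byte message signed by 30 validators:
	// 20000 + 100*500 + 500*30 = 85000 gas.
	fmt.Println(predicateVerificationGas(500, 30)) // 85000
}
```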
In addition to the benchmarks, the following assumptions and goals were taken into account: - BLS Public Key Aggregation is extremely fast, resulting in charging more for the base cost of a single BLS Multi-Signature Verification than for adding an additional public key - The cost per byte included in the transaction should be strictly higher for including Avalanche Warp Messages than via transaction calldata, so that the Warp Precompile does not change the worst case maximum block size #### Execution Gas Costs The execution gas costs were determined by summing the cost of the EVM operations that are performed throughout the execution of the precompile with special consideration for added functionality that does not have an existing analog within the EVM. ##### sendWarpMessage `sendWarpMessage` charges a base cost of 41,500 gas + 8 gas / payload byte. This consists of charging for the following components: - 375 gas / log operation - 3 topics * 375 gas / topic - 20k gas to produce and serve a BLS Signature - 20k gas to store the Unsigned Warp Message - 8 gas / payload byte This charges 20k gas for storing an Unsigned Warp Message although the message is stored in an independent key-value database instead of the active state. This makes it less expensive to store, so 20k gas is a conservative estimate. Additionally, the cost of serving valid signatures is significantly cheaper than serving state sync and bootstrapping requests, so the cost to validators of serving signatures over time is not considered a significant concern. `sendWarpMessage` also charges for the log operation it includes commensurate with the gas cost of a standard log operation in the EVM. A single `SendWarpMessage` log is charged: - 375 gas base cost - 375 gas per topic (`eventID`, `sender`, `messageID`) - 8 gas per payload byte encoded in the `message` field Topics are indexed fields encoded as 32-byte values to support querying based on given specified topic values. 
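The `sendWarpMessage` cost breakdown above can be reproduced with a small sketch; the constant names are illustrative and simply restate the components listed in the ACP:

```go
package main

import "fmt"

// sendWarpMessageGas restates the ACP's cost breakdown for sendWarpMessage:
// one log with three topics, flat charges for producing/serving a BLS
// signature and storing the unsigned message, plus a per-byte payload charge.
func sendWarpMessageGas(payloadLen uint64) uint64 {
	const (
		logBase        = 375    // base cost of the SendWarpMessage log
		topicCost      = 375    // per topic; the log has three topics
		numTopics      = 3
		blsSignature   = 20_000 // produce and serve a BLS signature
		storeMessage   = 20_000 // store the Unsigned Warp Message
		perPayloadByte = 8
	)
	// 375 + 3*375 + 20000 + 20000 = 41,500 gas base cost.
	return logBase + numTopics*topicCost + blsSignature + storeMessage +
		perPayloadByte*payloadLen
}

func main() {
	fmt.Println(sendWarpMessageGas(0))   // 41500
	fmt.Println(sendWarpMessageGas(100)) // 42300
}
```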
##### getBlockchainID `getBlockchainID` charges 2 gas to serve an already in-memory 32-byte value, commensurate with existing in-memory operations. ##### getVerifiedWarpBlockHash / getVerifiedWarpMessage `GetVerifiedWarpMessageBaseCost` charges 2 gas for serving a Warp Message (either payload type). Warp messages are already in memory, so access is charged 2 gas. `GasCostPerWarpMessageBytes` charges 100 gas per byte of the Avalanche Warp Message that is unpacked into a Solidity struct. ## Backwards Compatibility Existing EVM opcodes and precompiles are not modified by activating Avalanche Warp Messaging in the EVM. This is an additive change to activate a Warp Precompile on the Avalanche C-Chain and can be scheduled for activation in any VM running on Avalanche Subnets that are capable of sending / verifying the specified payload types. ## Reference Implementation A full reference implementation can be found in Subnet-EVM v0.5.9 [here](https://github.com/ava-labs/subnet-evm/tree/v0.5.9/x/warp). ## Security Considerations Verifying an Avalanche Warp Message requires reading the source subnet's validator set at the P-Chain height specified in the [Snowman++ Block Extension](https://github.com/ava-labs/avalanchego/blob/v1.10.15/vms/proposervm/README.md#snowman-block-extension). The Avalanche PlatformVM provides the current state of the Avalanche P-Chain and maintains reverse diff-layers in order to compute Subnets' validator sets at historical points in time. As a result, verifying a historical Avalanche Warp Message that references an old P-Chain height requires applying diff-layers from the current state back to the referenced P-Chain height. As Subnets and the P-Chain continue to produce and accept new blocks, verifying the Warp Messages in historical blocks becomes increasingly expensive. 
To efficiently handle historical blocks containing Avalanche Warp Messages, the EVM uses the result bitset encoded in the block to determine the validity of Avalanche Warp Messages without requiring a historical P-Chain state lookup. This is considered secure because the network already verified the Avalanche Warp Messages when the block was originally verified and accepted. ## Open Questions _How should validator set lookups in Warp Message verification be effectively charged for gas?_ The verification cost of performing a validator set lookup on the P-Chain is currently excluded from the implementation. The cost of this lookup is variable depending on how old the referenced P-Chain height is from the perspective of each validator. [Ongoing work](https://github.com/ava-labs/avalanchego/pull/1611) can parallelize P-Chain validator set lookups and message verification to reduce the impact on block verification latency to be negligible and reduce costs to reflect the additional bandwidth of encoding Avalanche Warp Messages in the transaction. ## Acknowledgements Integrating Avalanche Warp Messaging into the EVM has been a monumental effort. Thanks to all of the contributors who contributed their ideas, feedback, and development to this effort. @stephenbuttolph @patrick-ogrady @michaelkaplan13 @minghinmatthewlam @cam-schultz @xanderdunn @darioush @ceyonur ## Copyright Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/). 
# ACP-31: Enable Subnet Ownership Transfer (/docs/acps/31-enable-subnet-ownership-transfer) --- title: "ACP-31: Enable Subnet Ownership Transfer" description: "Details for Avalanche Community Proposal 31: Enable Subnet Ownership Transfer" edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/31-enable-subnet-ownership-transfer/README.md --- | ACP | 31 | | :--- | :--- | | **Title** | Enable Subnet Ownership Transfer | | **Author(s)** | Dhruba Basu ([@dhrubabasu](https://github.com/dhrubabasu)) | | **Status** | Activated | | **Track** | Standards | ## Abstract Allow the current owner of a Subnet to transfer ownership to a new owner. ## Motivation Once a Subnet is created on the P-chain through a [CreateSubnetTx](https://github.com/ava-labs/avalanchego/blob/v1.10.15/vms/platformvm/txs/create_subnet_tx.go#L14-L19), the `Owner` of the subnet is currently immutable. Subnet operators may want to transition ownership of the Subnet to a new owner for a number of reasons, not least of all being rotating their control key(s) periodically. ## Specification Implement a new transaction type (`TransferSubnetOwnershipTx`) that: 1. Takes in a `Subnet` 2. Verifies that the `SubnetAuth` has the right to transfer ownership of the `Subnet` by verifying it against the `Owner` field in the `CreateSubnetTx` that created the `Subnet`. 3. Takes in a new `Owner` and assigns it as the new owner of `Subnet` This transaction type should have the following format (code below is presented in Golang): ```go type TransferSubnetOwnershipTx struct { // Metadata, inputs and outputs BaseTx `serialize:"true"` // ID of the subnet this tx is modifying Subnet ids.ID `serialize:"true" json:"subnetID"` // Proves that the issuer has the right to transfer ownership of this subnet. 
SubnetAuth verify.Verifiable `serialize:"true" json:"subnetAuthorization"` // Who is now authorized to manage this subnet Owner fx.Owner `serialize:"true" json:"newOwner"` } ``` This transaction type should have type ID `0x21` in codec version `0x00`. This transaction type should have a fee of `0.001 AVAX`, equivalent to adding a subnet validator/delegator. ## Backwards Compatibility Adding a new transaction type is an execution change and requires a mandatory upgrade for activation. Implementors must take care to reject this transaction prior to activation. This ACP only details the specification of the `TransferSubnetOwnershipTx` type. ## Reference Implementation An implementation of `TransferSubnetOwnershipTx` was created [here](https://github.com/ava-labs/avalanchego/pull/2178) and subsequently merged into AvalancheGo. Since the "D" Upgrade is not activated, this transaction will be rejected by AvalancheGo. If modifications are made to the specification of the transaction as part of the ACP process, the code must be updated prior to activation. ## Security Considerations No security considerations. ## Open Questions No open questions. ## Acknowledgements Thank you [@friskyfoxdk](https://github.com/friskyfoxdk) for filing an [issue](https://github.com/ava-labs/avalanchego/issues/1946) requesting this feature. Thanks to [@StephenButtolph](https://github.com/StephenButtolph) and [@abi87](https://github.com/abi87) for their feedback on the reference implementation. ## Copyright Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/). 
# ACP-41: Remove Pending Stakers (/docs/acps/41-remove-pending-stakers) --- title: "ACP-41: Remove Pending Stakers" description: "Details for Avalanche Community Proposal 41: Remove Pending Stakers" edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/41-remove-pending-stakers/README.md --- | ACP | 41 | | :--- | :--- | | **Title** | Remove Pending Stakers | | **Author(s)** | Dhruba Basu ([@dhrubabasu](https://github.com/dhrubabasu)) | | **Status** | Activated | | **Track** | Standards | ## Abstract Remove user-specified `StartTime` for stakers. Start the staking period for a staker as soon as their staking transaction is accepted. This greatly reduces the computational load on the P-chain, increasing the efficiency of all Avalanche Network validators. ## Motivation Stakers currently set a `StartTime` for their staking period. This means that Avalanche Network Clients, like AvalancheGo, need to maintain a pending set of all stakers that have not yet started. This places a nontrivial amount of work on the P-chain: - When a new delegator transaction is verified, the pending set needs to be checked to ensure that the validator they are delegating to will not exceed `MaxValidatorStake` while they are active - When a new staker transaction is accepted, it gets added to the pending set - When time is advanced on the P-chain, any stakers in the pending set whose `StartTime <= CurrentTime` need to be moved to the current set By immediately starting every staker on acceptance, the validators do not have to do the above work when validating the P-chain. `MaxValidatorStake` will become an `O(1)` operation as only the current stake of the validator needs to be checked. The pending set can be fully removed. ## Specification 1. When adding a new staker, the current on-chain time should be used for the staker's start time. 2. When determining when to remove the staker from the staker set, the `EndTime` specified in the transaction should continue to be used. 
Staking transactions should now be rejected if they do not satisfy `MinStakeDuration <= EndTime - CurrentTime <= MaxStakeDuration`. `StartTime` will no longer be validated. ## Backwards Compatibility Modifying the state transition of a transaction type is an execution change and requires a mandatory upgrade for activation. Implementors must take care to not alter the execution behavior prior to activation. This ACP only details the new state transition. Current wallet implementations will continue to work as-is post-activation of this ACP since no transaction formats are modified or added. Wallet implementations may run into issues with their txs being rejected as a result of this ACP if `EndTime >= CurrentChainTime + MaxStakeDuration`. `CurrentChainTime` is guaranteed to be >= the latest block timestamp on the P-chain. ## Reference Implementation A reference implementation has not been created for this ACP since it deals with state management. Each ANC will need to adjust their execution step to follow the Specification detailed above. For AvalancheGo, this work is tracked in this PR: https://github.com/ava-labs/avalanchego/pull/2175 If modifications are made to the specification of the new execution behavior as part of the ACP process, the code must be updated prior to activation. ## Security Considerations No security considerations. ## Open Questions _How will stakers stake for `MaxStakeDuration` if they cannot determine their `StartTime`?_ As mentioned above, the beginning of your staking period is the block acceptance timestamp. Unless you can accurately predict the block timestamp, you will *not* be able to fully stake for `MaxStakeDuration`. This is an explicit trade-off to guarantee that stakers will receive their original stake + any staking rewards at `EndTime`. Delegators can maximize their staking period by setting the same `EndTime` as the Validator they are delegating to. 
## Acknowledgements Thanks to [@StephenButtolph](https://github.com/StephenButtolph) and [@abi87](https://github.com/abi87) for their feedback on these ideas. ## Copyright Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/). # ACP-62: Disable Addvalidatortx And Adddelegatortx (/docs/acps/62-disable-addvalidatortx-and-adddelegatortx) --- title: "ACP-62: Disable Addvalidatortx And Adddelegatortx" description: "Details for Avalanche Community Proposal 62: Disable Addvalidatortx And Adddelegatortx" edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/62-disable-addvalidatortx-and-adddelegatortx/README.md --- | ACP | 62 | | :--- | :--- | | **Title** | Disable `AddValidatorTx` and `AddDelegatorTx` | | **Author(s)** | Jacob Everly ([@JacobEv3rly](https://twitter.com/JacobEv3rly)), Dhruba Basu ([@dhrubabasu](https://github.com/dhrubabasu)) | | **Status** | Activated | | **Track** | Standards | ## Abstract Disable `AddValidatorTx` and `AddDelegatorTx` to push all new stakers to use `AddPermissionlessValidatorTx` and `AddPermissionlessDelegatorTx`. `AddPermissionlessValidatorTx` requires validators to register a BLS key. Wide adoption of registered BLS keys accelerates the timeline for future P-Chain upgrades. Additionally, this reduces the number of ways to participate in Primary Network validation from two to one. ## Motivation `AddPermissionlessValidatorTx` and `AddPermissionlessDelegatorTx` were activated on the Avalanche Network in October 2022 with Banff (v1.9.0). This unlocked the ability for Subnet creators to activate Proof-of-Stake validation using their own token on their own Subnet. See more details about Banff [here](https://medium.com/avalancheavax/banff-elastic-subnets-44042f41e34c). These new transaction types can also be used to register a Primary Network validator, leaving two redundant transactions: `AddValidatorTx` and `AddDelegatorTx`. 
[`AddPermissionlessDelegatorTx`](https://github.com/ava-labs/avalanchego/blob/v1.10.18/vms/platformvm/txs/add_permissionless_delegator_tx.go#L25-L37) contains the same fields as [`AddDelegatorTx`](https://github.com/ava-labs/avalanchego/blob/v1.10.18/vms/platformvm/txs/add_delegator_tx.go#L29-L39) with an additional `Subnet` field. [`AddPermissionlessValidatorTx`](https://github.com/ava-labs/avalanchego/blob/v1.10.18/vms/platformvm/txs/add_permissionless_validator_tx.go#L35-L59) contains the same fields as [`AddValidatorTx`](https://github.com/ava-labs/avalanchego/blob/v1.10.18/vms/platformvm/txs/add_validator_tx.go#L29-L42) with additional `Subnet` and `Signer` fields. `RewardsOwner` was also split into `ValidationRewardsOwner` and `DelegationRewardsOwner`, letting validators divert rewards they receive from delegators into a separate rewards owner. By disabling support of `AddValidatorTx`, all new validators on the Primary Network must use `AddPermissionlessValidatorTx` and register a BLS key with their NodeID. As more validators attach BLS keys to their nodes, future upgrades using these BLS keys can be activated through the ACP process. BLS keys can be used to efficiently sign a common message via [Public Key Aggregation](https://crypto.stanford.edu/~dabo/pubs/papers/BLSmultisig.html). Applications of this include, but are not limited to: - **Arbitrary Subnet Rewards**: The P-Chain currently restricts Elastic Subnets to follow the reward curve defined in a `TransformSubnetTx`. With sufficient BLS key adoption, Elastic Subnets can define their own reward curve and reward conditions. The P-Chain can be modified to take in a message, signed with a BLS Multi-Signature, indicating whether a Subnet validator should be rewarded and with how many tokens. - **Subnet Attestations**: Elastic Subnets can attest to the state of their Subnet with a BLS Multi-Signature. This can enable clients to fetch the current state of the Subnet without syncing the entire Subnet. 
`StateSync` enables clients to download chain state from peers up to a recent block near tip. However, it is up to the client to query these peers and resolve any potential conflicts in the responses. With Subnet Attestations, clients can query an API node to prove information about a Subnet without querying the Subnet's validators. This can especially be useful for [Subnet-Only Validators](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/62-disable-addvalidatortx-and-adddelegatortx/13-subnet-only-validators.md) to prove information about the C-Chain. To accelerate future BLS-powered advancements in the Avalanche Network, this ACP aims to disable `AddValidatorTx` and `AddDelegatorTx` in Durango. ## Specification `AddValidatorTx` and `AddDelegatorTx` should be marked as dropped when added to the mempool after activation. Any blocks including these transactions should be considered invalid. ## Backwards Compatibility Disabling a transaction type is an execution change and requires a mandatory upgrade for activation. Implementers must take care to not alter the execution behavior prior to activation. After this ACP is activated, any new issuance of `AddValidatorTx` or `AddDelegatorTx` will be considered invalid and dropped by the network. Any consumers of these transactions must transition to using `AddPermissionlessValidatorTx` and `AddPermissionlessDelegatorTx` to participate in Primary Network validation. The [Avalanche Ledger App](https://github.com/LedgerHQ/app-avalanche) supports both of these transaction types. Note that `AddSubnetValidatorTx` and `RemoveSubnetValidatorTx` are unchanged by this ACP. ## Reference Implementation An implementation disabling `AddValidatorTx` and `AddDelegatorTx` was created [here](https://github.com/ava-labs/avalanchego/pull/2662). Until activation, these transactions will continue to be accepted by AvalancheGo. 
If modifications are made to the specification as part of the ACP process, the code must be updated prior to activation. ## Security Considerations No security considerations. ## Open Questions ## Acknowledgements Thanks to [@StephenButtolph](https://github.com/StephenButtolph) and [@patrick-ogrady](https://github.com/patrick-ogrady) for their feedback on these ideas. ## Copyright Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/). # ACP-75: Acceptance Proofs (/docs/acps/75-acceptance-proofs) --- title: "ACP-75: Acceptance Proofs" description: "Details for Avalanche Community Proposal 75: Acceptance Proofs" edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/75-acceptance-proofs/README.md --- | ACP | 75 | | :--- | :--- | | **Title** | Acceptance Proofs | | **Author(s)** | Joshua Kim | | **Status** | Proposed ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/82)) | | **Track** | Standards | ## Abstract Introduces support for a proof of a block’s acceptance in consensus. ## Motivation Subnets are able to prove arbitrary events using warp messaging, but native support for proving block acceptance at the protocol layer enables more utility. Acceptance proofs are introduced to prove that a block has been accepted by a subnet. One example use case for acceptance proofs is to provide stronger fault isolation guarantees from the primary network to subnets. Subnets use the [ProposerVM](https://github.com/ava-labs/avalanchego/blob/416fbdf1f783c40f21e7009a9f06d192e69ba9b5/vms/proposervm/README.md) to implement soft leader election for block proposal. The ProposerVM determines the block producer schedule from a randomly shuffled validator set at a specified P-Chain block height. Validators are therefore required to have the P-Chain block referenced in a block's header to verify the block producer against the expected block producer schedule. 
If a block's header specifies a P-Chain height that has not been accepted yet, the block is treated as invalid. If a block referencing an unknown P-Chain height was produced virtuously, it is expected that the validator will eventually discover the block as its P-Chain height advances and accept the block. If many validators disagree about the current tip of the P-Chain, it can lead to a liveness concern on the subnet where block production entirely stalls. In practice, this almost never occurs because nodes produce blocks with a lagging P-Chain height, since most nodes are likely to have already accepted a sufficiently stale block. This, however, relies on the assumption that validators are constantly making progress in consensus on the P-Chain to prevent the subnet from stalling. This leaves an open concern where the P-Chain stalling on a node would prevent it from verifying any blocks, leading to a subnet unable to produce blocks if many validators stalled at different P-Chain heights. --- Figure 1: A Validator that has synced P-Chain blocks `A` and `B` fails verification of a block proposed at block `C`. --- We introduce "acceptance proofs", so that a peer can verify any block accepted by consensus. In the aforementioned use-case, if a P-Chain block is unknown by a peer, it can request the block and proof at the provided height from a peer. If a block's proof is valid, the block can be executed to advance the local P-Chain and verify the proposed subnet block. Peers can request blocks from any peer without requiring consensus locally or communication with a validator. This has the added benefit of reducing the number of required connections and p2p message load served by P-Chain validators. 
---

Figure 2: A Validator is verifying a subnet’s block `Z` which references an unknown P-Chain block `C` in its block header

Figure 3: A Validator requests the blocks and proofs for `B` and `C` from a peer

Figure 4: The Validator accepts the P-Chain blocks and is now able to verify `Z`

---

## Specification

Note: The following is pseudocode.

### P2P

#### Aggregation

```diff
+ message GetAcceptanceSignatureRequest {
+   bytes chain_id = 1;
+   uint32 request_id = 2;
+   bytes block_id = 3;
+ }
```

The `GetAcceptanceSignatureRequest` message is sent to a peer to request its signature for a given block ID.

```diff
+ message GetAcceptanceSignatureResponse {
+   bytes chain_id = 1;
+   uint32 request_id = 2;
+   bytes bls_signature = 3;
+ }
```

`GetAcceptanceSignatureResponse` is sent to a peer as a response to `GetAcceptanceSignatureRequest`. `bls_signature` is the peer’s signature over the requested `block_id`, using its registered primary network BLS staking key. An empty `bls_signature` field indicates that the block has not been accepted yet.

## Security Considerations

Nodes that bootstrap using state sync may not have the entire history of the P-Chain and therefore will not be able to provide the entire history for a block that is referenced in a block that they propose. This history would be needed to unblock a node that is attempting to fast-forward its P-Chain, since the node requires the entire ancestry between its current accepted tip and the block it is attempting to forward to. It is assumed that nodes will retain some minimum amount of recent state, so that the requester can eventually be unblocked by retrying: only one node with the requested ancestry is required to unblock the requester. An alternative is to make a churn assumption and validate the proposed block's proof with a stale validator set to avoid this complexity, but that introduces more security concerns.
## Copyright Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/). # ACP-77: Reinventing Subnets (/docs/acps/77-reinventing-subnets) --- title: "ACP-77: Reinventing Subnets" description: "Details for Avalanche Community Proposal 77: Reinventing Subnets" edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/77-reinventing-subnets/README.md --- | ACP | 77 | | :------------ | :---------------------------------------------------------------------------------------- | | **Title** | Reinventing Subnets | | **Author(s)** | Dhruba Basu ([@dhrubabasu](https://github.com/dhrubabasu)) | | **Status** | Activated ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/78)) | | **Track** | Standards | | **Replaces** | [ACP-13](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/13-subnet-only-validators/README.md) | ## Abstract Overhaul Subnet creation and management to unlock increased flexibility for Subnet creators by: - Separating Subnet validators from Primary Network validators (Primary Network Partial Sync, Removal of 2000 $AVAX requirement) - Moving ownership of Subnet validator set management from P-Chain to Subnets (ERC-20/ERC-721/Arbitrary Staking, Staking Reward Management) - Introducing a continuous P-Chain fee mechanism for Subnet validators (Continuous Subnet Staking) This ACP supersedes [ACP-13](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/13-subnet-only-validators/README.md) and borrows some of its language. ## Motivation Each node operator must stake at least 2000 $AVAX ($70k at time of writing) to first become a Primary Network validator before they qualify to become a Subnet validator. Most Subnets aim to launch with at least 8 Subnet validators, which requires staking 16000 $AVAX ($560k at time of writing). 
All Subnet validators, to satisfy their role as Primary Network validators, must also [allocate 8 AWS vCPU, 16 GB RAM, and 1 TB storage](https://github.com/ava-labs/avalanchego/blob/master/README.md#installation) to sync the entire Primary Network (X-Chain, P-Chain, and C-Chain) and participate in its consensus, in addition to whatever resources are required for each Subnet they are validating.

Regulated entities that are prohibited from validating permissionless, smart contract-enabled blockchains (like the C-Chain) cannot launch a Subnet because they cannot opt out of Primary Network validation. This deployment blocker prevents a large cohort of Real World Asset (RWA) issuers from bringing unique, valuable tokens to the Avalanche Ecosystem (tokens that could move between the C-Chain and Subnets using Avalanche Warp Messaging/Teleporter).

A widely validated Subnet that is not properly metered could destabilize the Primary Network if usage spikes unexpectedly. Underprovisioned Primary Network validators running such a Subnet may exit with an OOM exception, see degraded disk performance, or find it difficult to allocate CPU time to P/X/C-Chain validation. The inverse also holds: undefined behavior on the Primary Network could bring a Subnet offline.

Although the fee paid to the Primary Network to operate a Subnet does not go up with the amount of activity on the Subnet, the fixed, upfront cost of setting up a Subnet validator on the Primary Network deters new projects that prefer smaller, even variable, costs until demand is observed.
_Unlike L2s that pay some increasing fee (usually denominated in units per transaction byte) to an external chain for data availability and security as activity scales, Subnets provide their own security/data availability and the only cost operators must pay from processing more activity is the hardware cost of supporting additional load._

Elastic Subnets, introduced in [Banff](https://medium.com/avalancheavax/banff-elastic-subnets-44042f41e34c), enabled Subnet creators to activate Proof-of-Stake validation and uptime-based rewards using their own token. However, this token was required to be an ANT (created on the X-Chain) and locked on the P-Chain. All staking rewards were distributed on the P-Chain, with the reward curve defined in the `TransformSubnetTx`; once set, it could not be modified. With no Elastic Subnets live on Mainnet, it is clear that Permissionless Subnets as they stand today could be more desirable. There are many successful Permissioned Subnets in production, but many Subnet creators have raised the above as points of concern. In summary, the Avalanche community could benefit from a more flexible and affordable mechanism to launch Permissionless Subnets.

### A Note on Nomenclature

Avalanche Subnets are subnetworks validated by a subset of the Primary Network validator set. The new network creation flow outlined in this ACP does not require any intersection between the new network's validator set and the Primary Network's validator set. Moreover, the new networks have greater functionality and sovereignty than Subnets. To distinguish between these two kinds of networks, the community has been referring to these new networks as _Avalanche Layer 1s_, or L1s for short. All networks created through the old network creation flow will continue to be referred to as Avalanche Subnets.
## Specification At a high-level, L1s can manage their validator sets externally to the P-Chain by setting the blockchain ID and address of their _validator manager_. The P-Chain will consume Warp messages that modify the L1's validator set. To confirm modification of the L1's validator set, the P-Chain will also produce Warp messages. L1 validators are not required to validate the Primary Network, and do not have the same 2000 $AVAX stake requirement that Subnet validators have. To maintain an active L1 validator, a continuous fee denominated in $AVAX is assessed. L1 validators are only required to sync the P-Chain (not X/C-Chain) in order to track validator set changes and support cross-L1 communication. ### P-Chain Warp Message Payloads To enable management of an L1's validator set externally to the P-Chain, Warp message verification will be added to the [`PlatformVM`](https://github.com/ava-labs/avalanchego/tree/master/vms/platformvm). For a Warp message to be considered valid by the P-Chain, at least 67% of the `sourceChainID`'s weight must have participated in the aggregate BLS signature. This is equivalent to the threshold set for the C-Chain. A future ACP may be proposed to support modification of this threshold on a per-L1 basis. The following Warp message payloads are introduced on the P-Chain: - `SubnetToL1ConversionMessage` - `RegisterL1ValidatorMessage` - `L1ValidatorRegistrationMessage` - `L1ValidatorWeightMessage` The method of requesting signatures for these messages is left unspecified. A viable option for supporting this functionality is laid out in [ACP-118](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/118-warp-signature-request/README.md) with the `SignatureRequest` message. All node IDs contained within the message specifications are represented as variable length arrays such that they can support new node IDs types should the P-Chain add support for them in the future. The serialization of each of these messages is as follows. 
#### `SubnetToL1ConversionMessage` The P-Chain can produce a `SubnetToL1ConversionMessage` for consumers (i.e. validator managers) to be aware of the initial validator set. The following serialization is defined as the `ValidatorData`: | Field | Type | Size | | -------------: | ---------: | -----------------------: | | `nodeID` | `[]byte` | 4 + len(`nodeID`) bytes | | `blsPublicKey` | `[48]byte` | 48 bytes | | `weight` | `uint64` | 8 bytes | | | | 60 + len(`nodeID`) bytes | The following serialization is defined as the `ConversionData`: | Field | Type | Size | | ---------------: | ----------------: | ---------------------------------------------------------: | | `codecID` | `uint16` | 2 bytes | | `subnetID` | `[32]byte` | 32 bytes | | `managerChainID` | `[32]byte` | 32 bytes | | `managerAddress` | `[]byte` | 4 + len(`managerAddress`) bytes | | `validators` | `[]ValidatorData` | 4 + sum(`validatorLengths`) bytes | | | | 74 + len(`managerAddress`) + sum(`validatorLengths`) bytes | - `codecID` is the codec version used to serialize the payload, and is hardcoded to `0x0000` - `sum(validatorLengths)` is the sum of the lengths of `ValidatorData` serializations included in `validators`. - `subnetID` identifies the Subnet that is being converted to an L1 (described further below). - `managerChainID` and `managerAddress` identify the validator manager for the newly created L1. This is the (blockchain ID, address) tuple allowed to send Warp messages to modify the L1's validator set. - `validators` are the initial continuous-fee-paying validators for the given L1. 
The `SubnetToL1ConversionMessage` is specified as an `AddressedCall` with `sourceChainID` set to the P-Chain ID, the `sourceAddress` set to an empty byte array, and a payload of: | Field | Type | Size | | -------------: | ---------: | -------: | | `codecID` | `uint16` | 2 bytes | | `typeID` | `uint32` | 4 bytes | | `conversionID` | `[32]byte` | 32 bytes | | | | 38 bytes | - `codecID` is the codec version used to serialize the payload, and is hardcoded to `0x0000` - `typeID` is the payload type identifier and is `0x00000000` for this message - `conversionID` is the SHA256 hash of the `ConversionData` from a given `ConvertSubnetToL1Tx` #### `RegisterL1ValidatorMessage` The P-Chain can consume a `RegisterL1ValidatorMessage` from validator managers through a `RegisterL1ValidatorTx` to register an addition to the L1's validator set. The following is the serialization of a `PChainOwner`: | Field | Type | Size | | ----------: | -----------: | -------------------------------: | | `threshold` | `uint32` | 4 bytes | | `addresses` | `[][20]byte` | 4 + len(`addresses`) \\* 20 bytes | | | | 8 + len(`addresses`) \\* 20 bytes | - `threshold` is the number of `addresses` that must provide a signature for the `PChainOwner` to authorize an action. 
- Validation criteria:
  - If `threshold` is `0`, `addresses` must be empty
  - `threshold` <= len(`addresses`)
  - Entries of `addresses` must be unique and sorted in ascending order

The `RegisterL1ValidatorMessage` is specified as an `AddressedCall` with a payload of:

| Field | Type | Size |
| ----------------------: | ------------: | ------------------------------------------------------------------------: |
| `codecID` | `uint16` | 2 bytes |
| `typeID` | `uint32` | 4 bytes |
| `subnetID` | `[32]byte` | 32 bytes |
| `nodeID` | `[]byte` | 4 + len(`nodeID`) bytes |
| `blsPublicKey` | `[48]byte` | 48 bytes |
| `expiry` | `uint64` | 8 bytes |
| `remainingBalanceOwner` | `PChainOwner` | 8 + len(`addresses`) \\* 20 bytes |
| `disableOwner` | `PChainOwner` | 8 + len(`addresses`) \\* 20 bytes |
| `weight` | `uint64` | 8 bytes |
| | | 122 + len(`nodeID`) + (len(`addresses1`) + len(`addresses2`)) \\* 20 bytes |

- `codecID` is the codec version used to serialize the payload, and is hardcoded to `0x0000`
- `typeID` is the payload type identifier and is `0x00000001` for this payload
- `subnetID`, `nodeID`, `weight`, and `blsPublicKey` are for the validator being added
- `expiry` is the time at which this message becomes invalid. As of a P-Chain timestamp `>= expiry`, this Avalanche Warp Message can no longer be used to add the `nodeID` to the validator set of `subnetID`
- `remainingBalanceOwner` is the P-Chain owner to which leftover $AVAX from the validator's Balance will be issued when this validator is removed from the validator set.
- `disableOwner` is the only P-Chain owner allowed to disable the validator using `DisableL1ValidatorTx`, specified below.

#### `L1ValidatorRegistrationMessage`

The P-Chain can produce an `L1ValidatorRegistrationMessage` for consumers to verify that a validation period has either begun or has been invalidated.
The `L1ValidatorRegistrationMessage` is specified as an `AddressedCall` with `sourceChainID` set to the P-Chain ID, the `sourceAddress` set to an empty byte array, and a payload of: | Field | Type | Size | | -------------: | ---------: | -------: | | `codecID` | `uint16` | 2 bytes | | `typeID` | `uint32` | 4 bytes | | `validationID` | `[32]byte` | 32 bytes | | `registered` | `bool` | 1 byte | | | | 39 bytes | - `codecID` is the codec version used to serialize the payload, and is hardcoded to `0x0000` - `typeID` is the payload type identifier and is `0x00000002` for this message - `validationID` identifies the validator for the message - `registered` is a boolean representing the status of the `validationID`. If true, the `validationID` corresponds to a validator in the current validator set. If false, the `validationID` does not correspond to a validator in the current validator set, and never will in the future. #### `L1ValidatorWeightMessage` The P-Chain can consume an `L1ValidatorWeightMessage` through a `SetL1ValidatorWeightTx` to update the weight of an existing validator. The P-Chain can also produce an `L1ValidatorWeightMessage` for consumers to verify that the validator weight update has been effectuated. The `L1ValidatorWeightMessage` is specified as an `AddressedCall` with the following payload. When sent from the P-Chain, the `sourceChainID` is set to the P-Chain ID, and the `sourceAddress` is set to an empty byte array. 
| Field | Type | Size | | -------------: | ---------: | -------: | | `codecID` | `uint16` | 2 bytes | | `typeID` | `uint32` | 4 bytes | | `validationID` | `[32]byte` | 32 bytes | | `nonce` | `uint64` | 8 bytes | | `weight` | `uint64` | 8 bytes | | | | 54 bytes | - `codecID` is the codec version used to serialize the payload, and is hardcoded to `0x0000` - `typeID` is the payload type identifier and is `0x00000003` for this message - `validationID` identifies the validator for the message - `nonce` is a strictly increasing number that denotes the latest validator weight update and provides replay protection for this transaction - `weight` is the new `weight` of the validator ### New P-Chain Transaction Types Both before and after this ACP, to create a Subnet, a `CreateSubnetTx` must be issued on the P-Chain. This transaction includes an `Owner` field which defines the key that today can be used to authorize any validator set additions (`AddSubnetValidatorTx`) or removals (`RemoveSubnetValidatorTx`). To be considered a permissionless network, or Avalanche Layer 1: - This `Owner` key must no longer have the ability to modify the validator set. - New transaction types must support modification of the validator set via Warp messages. The following new transaction types are introduced on the P-Chain to support this functionality: - `ConvertSubnetToL1Tx` - `RegisterL1ValidatorTx` - `SetL1ValidatorWeightTx` - `DisableL1ValidatorTx` - `IncreaseL1ValidatorBalanceTx` #### `ConvertSubnetToL1Tx` To convert a Subnet into an L1, a `ConvertSubnetToL1Tx` must be issued to set the `(chainID, address)` pair that will manage the L1's validator set. The `Owner` key defined in `CreateSubnetTx` must provide a signature to authorize this conversion. The `ConvertSubnetToL1Tx` specification is: ```go type PChainOwner struct { // The threshold number of `Addresses` that must provide a signature in order for // the `PChainOwner` to be considered valid. 
Threshold uint32 `json:"threshold"` // The 20-byte addresses that are allowed to sign to authenticate a `PChainOwner`. // Note: It is required for: // - len(Addresses) == 0 if `Threshold` is 0. // - len(Addresses) >= `Threshold` // - The values in Addresses to be sorted in ascending order. Addresses []ids.ShortID `json:"addresses"` } type L1Validator struct { // NodeID of this validator NodeID []byte `json:"nodeID"` // Weight of this validator used when sampling Weight uint64 `json:"weight"` // Initial balance for this validator Balance uint64 `json:"balance"` // [Signer] is the BLS public key and proof-of-possession for this validator. // Note: We do not enforce that the BLS key is unique across all validators. // This means that validators can share a key if they so choose. // However, a NodeID + L1 does uniquely map to a BLS key Signer signer.ProofOfPossession `json:"signer"` // Leftover $AVAX from the [Balance] will be issued to this // owner once it is removed from the validator set. RemainingBalanceOwner PChainOwner `json:"remainingBalanceOwner"` // The only owner allowed to disable this validator on the P-Chain. DisableOwner PChainOwner `json:"disableOwner"` } type ConvertSubnetToL1Tx struct { // Metadata, inputs and outputs BaseTx // ID of the Subnet to transform // Restrictions: // - Must not be the Primary Network ID Subnet ids.ID `json:"subnetID"` // BlockchainID where the validator manager lives ChainID ids.ID `json:"chainID"` // Address of the validator manager Address []byte `json:"address"` // Initial continuous-fee-paying validators for the L1 Validators []L1Validator `json:"validators"` // Authorizes this conversion SubnetAuth verify.Verifiable `json:"subnetAuthorization"` } ``` After this transaction is accepted, `CreateChainTx` and `AddSubnetValidatorTx` are disabled on the Subnet. The only action that the `Owner` key is able to take is removing Subnet validators with `RemoveSubnetValidatorTx` that had been added using `AddSubnetValidatorTx`. 
Unless removed by the `Owner` key, any Subnet validators added previously with an `AddSubnetValidatorTx` will continue to validate the Subnet until their [`End`](https://github.com/ava-labs/avalanchego/blob/a1721541754f8ee23502b456af86fea8c766352a/vms/platformvm/txs/validator.go#L27) time is reached. Once all Subnet validators added with `AddSubnetValidatorTx` are no longer in the validator set, the `Owner` key is powerless. `RegisterL1ValidatorTx` and `SetL1ValidatorWeightTx` must be used to manage the L1's validator set.

The `validationID` for validators added through `ConvertSubnetToL1Tx` is defined as the SHA256 hash of the 36 bytes resulting from concatenating the 32-byte `subnetID` with the 4-byte `validatorIndex` (the index in the `Validators` array within the transaction). Once this transaction is accepted, the P-Chain must be willing to sign a `SubnetToL1ConversionMessage` with a `conversionID` corresponding to the `ConversionData` populated with the values from this transaction.

#### `RegisterL1ValidatorTx`

After a `ConvertSubnetToL1Tx` has been accepted, new validators can only be added by using a `RegisterL1ValidatorTx`. The specification of this transaction is:

```go
type RegisterL1ValidatorTx struct {
	// Metadata, inputs and outputs
	BaseTx

	// Balance <= sum($AVAX inputs) - sum($AVAX outputs) - TxFee.
	Balance uint64 `json:"balance"`

	// [Signer] is a BLS signature proving ownership of the BLS public key specified
	// below in `Message` for this validator.
	// Note: We do not enforce that the BLS key is unique across all validators.
	// This means that validators can share a key if they so choose.
	// However, a NodeID + L1 does uniquely map to a BLS key
	Signer [96]byte `json:"signer"`

	// A RegisterL1ValidatorMessage payload
	Message warp.Message `json:"message"`
}
```

The `validationID` of validators added via `RegisterL1ValidatorTx` is defined as the SHA256 hash of the `Payload` of the `AddressedCall` in `Message`.
When a `RegisterL1ValidatorTx` is accepted on the P-Chain, the validator is added to the L1's validator set. A `minNonce` field corresponding to the `validationID` will be stored on addition to the validator set (initially set to `0`). This field will be used when validating the `SetL1ValidatorWeightTx` defined below. This `validationID` will be used for replay protection. Used `validationID`s will be stored on the P-Chain. If a `RegisterL1ValidatorTx`'s `validationID` has already been used, the transaction will be considered invalid. To prevent storing an unbounded number of `validationID`s, the `expiry` of the `RegisterL1ValidatorMessage` is required to be no more than 24 hours in the future of the time the transaction is issued on the P-Chain. Any `validationIDs` corresponding to an expired timestamp can be flushed from the P-Chain's state. L1s are responsible for defining the procedure on how to retrieve the above information from prospective validators. An EVM-compatible L1 may choose to implement this step like so: - Use the number of tokens the user has staked into a smart contract on the L1 to determine the weight of their validator - Require the user to submit an on-chain transaction with their validator information - Generate the Warp message For a `RegisterL1ValidatorTx` to be valid, `Signer` must be a valid proof-of-possession of the `blsPublicKey` defined in the `RegisterL1ValidatorMessage` contained in the transaction. After a `RegisterL1ValidatorTx` is accepted, the P-Chain must be willing to sign an `L1ValidatorRegistrationMessage` for the given `validationID` with `registered` set to `true`. This remains the case until the time at which the validator is removed from the validator set using a `SetL1ValidatorWeightTx`, as described below. When it is known that a given `validationID` _is not and never will be_ registered, the P-Chain must be willing to sign an `L1ValidatorRegistrationMessage` for the `validationID` with `registered` set to `false`. 
This could be the case if the `expiry` time of the message has passed prior to the message being delivered in a `RegisterL1ValidatorTx`, or if the validator was successfully registered and then later removed. This enables the P-Chain to prove to validator managers that a validator has been removed or never added. The P-Chain must refuse to sign any `L1ValidatorRegistrationMessage` where the `validationID` does not correspond to an active validator and the `expiry` is in the future.

#### `SetL1ValidatorWeightTx`

`SetL1ValidatorWeightTx` is used to modify the voting weight of a validator. The specification of this transaction is:

```go
type SetL1ValidatorWeightTx struct {
	// Metadata, inputs and outputs
	BaseTx

	// An L1ValidatorWeightMessage payload
	Message warp.Message `json:"message"`
}
```

Applications of this transaction could include:

- Increase the voting weight of a validator if a delegation is made on the L1
- Increase the voting weight of a validator if the stake amount is increased (by staking rewards, for example)
- Decrease the voting weight of a misbehaving validator
- Remove an inactive validator

The validation criteria for `L1ValidatorWeightMessage` are:

- `nonce >= minNonce`. Note that `nonce` is not required to be incremented by `1` with each successive validator weight update.
- When `minNonce == MaxUint64`, `nonce` must be `MaxUint64` and `weight` must be `0`. This prevents L1s from being unable to remove `nodeID` in a subsequent transaction.
- If `weight == 0`, the validator being removed must not be the last one in the set. If all validators are removed, there are no valid Warp messages that can be produced to register new validators through `RegisterL1ValidatorMessage`. With no validators, block production will halt and the L1 is unrecoverable. This criterion serves as a guardrail against that situation. A future ACP can remove this guardrail as users get more familiar with the new L1 mechanics and tooling matures to fork an L1.
When `weight != 0`, the weight of the validator is updated to `weight` and `minNonce` is updated to `nonce + 1`.

When `weight == 0`, the validator is removed from the validator set. All state related to the validator, including the `minNonce` and `validationID`, is reaped from the P-Chain state. Tracking these post-removal is not required since `validationID` can never be re-initialized due to the replay protection provided by `expiry` in `RegisterL1ValidatorTx`. Any unspent $AVAX in the validator's `Balance` will be issued in a single UTXO to the `RemainingBalanceOwner` for this validator. Recall that `RemainingBalanceOwner` is specified when the validator is first added to the L1's validator set (in either `ConvertSubnetToL1Tx` or `RegisterL1ValidatorTx`).

Note: There is no explicit `EndTime` for L1 validators added in a `ConvertSubnetToL1Tx` or `RegisterL1ValidatorTx`. The only time L1 validators are removed from the L1's validator set is through this transaction, when `weight == 0`.

#### `DisableL1ValidatorTx`

L1 validators can use `DisableL1ValidatorTx` to mark their validator as inactive. The specification of this transaction is:

```go
type DisableL1ValidatorTx struct {
	// Metadata, inputs and outputs
	BaseTx

	// ID corresponding to the validator
	ValidationID ids.ID `json:"validationID"`

	// Authorizes this validator to be disabled
	DisableAuth verify.Verifiable `json:"disableAuthorization"`
}
```

The `DisableOwner` specified for this validator must sign the transaction. Any unspent $AVAX in the validator's `Balance` will be issued in a single UTXO to the `RemainingBalanceOwner` for this validator. Recall that both `DisableOwner` and `RemainingBalanceOwner` are specified when the validator is first added to the L1's validator set (in either `ConvertSubnetToL1Tx` or `RegisterL1ValidatorTx`).

For full removal from an L1's validator set, a `SetL1ValidatorWeightTx` must be issued with weight `0`.
To do so, a Warp message is required from the L1's validator manager. However, supporting the ability to claim the unspent `Balance` for a validator without such authorization is critical for failed L1s. Note that this does not modify an L1's total staking weight. This transaction marks the validator as inactive, but does not remove it from the L1's validator set. Inactive validators can re-activate at any time by increasing their balance with an `IncreaseL1ValidatorBalanceTx`.

L1 creators should be aware that there is no notion of `MinStakeDuration` enforced by the P-Chain. It is expected that L1s that choose to enforce a `MinStakeDuration` will lock the validator's stake for the L1's desired `MinStakeDuration`.

#### `IncreaseL1ValidatorBalanceTx`

L1 validators are required to maintain a non-zero balance used to pay the continuous fee on the P-Chain in order to be considered active. The `IncreaseL1ValidatorBalanceTx` can be used by anybody to add additional $AVAX to the `Balance` of a validator. The specification of this transaction is:

```go
type IncreaseL1ValidatorBalanceTx struct {
	// Metadata, inputs and outputs
	BaseTx

	// ID corresponding to the validator
	ValidationID ids.ID `json:"validationID"`

	// Balance <= sum($AVAX inputs) - sum($AVAX outputs) - TxFee
	Balance uint64 `json:"balance"`
}
```

If the validator corresponding to `ValidationID` is currently inactive (`Balance` was exhausted or a `DisableL1ValidatorTx` was issued), this transaction will move them back to the active validator set.

Note: The $AVAX added to `Balance` can be claimed at any time by the validator using `DisableL1ValidatorTx`.

### Bootstrapping L1 Nodes

Bootstrapping a node/validator is the process of securely recreating the latest state of the blockchain locally. At the end of this process, the local state of a node/validator must be in sync with the local state of other virtuous nodes/validators.
The node/validator can then verify new incoming transactions and reach consensus with other nodes/validators. To bootstrap a node/validator, a few critical questions must be answered: How does one discover peers in the network? How does one determine that a discovered peer is honestly participating in the network? For standalone networks like the Avalanche Primary Network, this is done by connecting to a hardcoded [set](https://github.com/ava-labs/avalanchego/blob/master/genesis/bootstrappers.json) of trusted bootstrappers to then discover new peers. Ethereum calls their set [bootnodes](https://ethereum.org/developers/docs/nodes-and-clients/bootnodes). Since L1 validators are not required to be Primary Network validators, a list of validator IPs to connect to (the functional bootstrappers of the L1) cannot be provided by simply connecting to the Primary Network validators. However, the Primary Network can enable nodes tracking an L1 to seamlessly connect to the validators by tracking and gossiping L1 validator IPs. L1s will not need to operate and maintain a set of bootstrappers and can rely on the Primary Network for peer discovery. ### Sidebar: L1 Sovereignty After this ACP is activated, the P-Chain will no longer support staking of any assets other than $AVAX for the Primary Network. The P-Chain will not support the distribution of staking rewards for L1s. All staking-related operations for L1 validation must be managed by the L1's validator manager. The P-Chain simply requires a continuous fee per validator. If an L1 would like to manage their validator's balances on the P-Chain, it can cover the cost for all L1 validators by posting the $AVAX balance on the P-Chain. L1s can implement any mechanism they want to pay the continuous fee charged by the P-Chain for its participants. The L1 has full ownership over its validator set, not the P-Chain. There are no restrictions on what requirements an L1 can have for validators to join. 
Any stake that is required to join the L1's validator set is not locked on the P-Chain. If a validator is removed from the L1's validator set via a `SetL1ValidatorWeightTx` with weight `0`, the stake will continue to be locked outside of the P-Chain. How each L1 handles stake associated with the validator is entirely left up to the L1 and can be treated independently of what happens on the P-Chain.

The relationship between the P-Chain and L1s provides a dynamic where L1s can use the P-Chain as an impartial judge to modify parameters (in addition to its existing role of helping to validate incoming Avalanche Warp Messages). If a validator is misbehaving, the L1 validators can collectively generate a BLS multisig to reduce its voting weight. This operation is fully secured by the Avalanche Primary Network (225M $AVAX, or $8.325B at the time of writing).

Follow-up ACPs could extend the P-Chain <-> L1 relationship to include parametrization of the 67% threshold to enable L1s to choose a different threshold based on their security model (e.g. a simple majority of 51%).

### Continuous Fee Mechanism

Every additional validator on the P-Chain adds persistent load to the Avalanche Network. When a validator transaction is issued on the P-Chain, it is charged for the computational cost of the transaction itself but is not charged for the cost of an active validator over the time they are validating on the network (which may be indefinitely). This is a common problem in blockchains, and has spawned many state rent proposals in the broader blockchain space. The following fee mechanism takes advantage of the fact that each L1 validator uses the same amount of computation, and charges each L1 validator the dynamic base fee for every discrete unit of time it is active.

To charge each L1 validator, the notion of a `Balance` is introduced.
The `Balance` of a validator will be continuously charged during the time it is active to cover the cost of storing the associated validator properties (BLS key, weight, nonce) in memory and of tracking IPs (in addition to other services provided by the Primary Network). This `Balance` is initialized with the `RegisterL1ValidatorTx` that added the validator to the active validator set and can be increased at any time using the `IncreaseL1ValidatorBalanceTx`.

When this `Balance` reaches `0`, the validator is considered "inactive" and no longer participates in validating the L1. Inactive validators can be moved back to the active validator set at any time using the same `IncreaseL1ValidatorBalanceTx`. Once a validator is considered inactive, the P-Chain removes these properties from memory and only retains them on disk. All messages from that validator are considered invalid until it is revived using the `IncreaseL1ValidatorBalanceTx`. L1s can reduce the amount of inactive weight by removing inactive validators with the `SetL1ValidatorWeightTx` (`Weight` = 0).

Since each L1 validator is charged the same amount at each point in time, tracking the fees for the entire validator set is straightforward. The accumulated dynamic base fee for the entire network is tracked in a single `uint`. This accumulated value should equal the fee charged if a validator had been active from the time the accumulator was instantiated. The validator set is maintained in a priority queue. A pseudocode implementation of the continuous fee mechanism is provided below.

```python
# Pseudocode
class ValidatorQueue:
    def __init__(self, fee_getter):
        self.acc = 0
        self.queue = PriorityQueue()
        self.fee_getter = fee_getter

    # At each time period, increment the accumulator and
    # pop all validators from the top of the queue that
    # ran out of funds.
    # Note: The amount of work done in a single block
    # should be bounded to prevent a large number of
    # validator operations from happening at the same
    # time.
    def time_elapse(self, t):
        self.acc = self.acc + self.fee_getter(t)
        while True:
            vdr = self.queue.peek()
            if vdr.balance < self.acc:
                self.queue.pop()
                continue
            return

    # Validator was added
    def validator_enter(self, vdr):
        vdr.balance = vdr.balance + self.acc
        self.queue.add(vdr)

    # Validator was removed
    def validator_remove(self, vdrNodeID):
        vdr = find_and_remove(self.queue, vdrNodeID)
        vdr.balance = vdr.balance - self.acc
        vdr.refund()  # Refund [vdr.balance] to [RemainingBalanceOwner]

    # Validator's balance was topped up
    def validator_increase(self, vdrNodeID, balance):
        vdr = find_and_remove(self.queue, vdrNodeID)
        vdr.balance = vdr.balance + balance
        self.queue.add(vdr)
```

#### Fee Algorithm

[ACP-103](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/103-dynamic-fees/README.md) proposes a dynamic fee mechanism for transactions on the P-Chain. This mechanism is repurposed, with minor modifications, for the active L1 validator continuous fee. At activation, the number of excess active L1 validators $x$ is set to `0`.
The fee rate per second for an active L1 validator is:

$$M \cdot \exp\left(\frac{x}{K}\right)$$

Where:

- $M$ is the minimum price for an active L1 validator
- $\exp\left(x\right)$ is an approximation of $e^x$ following the EIP-4844 specification:

```python
# Approximates factor * e ** (numerator / denominator) using Taylor expansion
def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    i = 1
    output = 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = (numerator_accum * numerator) // (denominator * i)
        i += 1
    return output // denominator
```

- $K$ is a constant to control the rate of change of the L1 validator price

After every second, $x$ is updated:

$$x = \max(x + (V - T), 0)$$

Where:

- $V$ is the number of active L1 validators
- $T$ is the target number of active L1 validators

Whenever $x$ increases by $K$, the price per active L1 validator increases by a factor of `~2.7`. If the price per active L1 validator gets too expensive, some active L1 validators will exit the active validator set, decreasing $x$ and dropping the price. The price per active L1 validator constantly adjusts to ensure that, on average, the P-Chain has no more than $T$ active L1 validators.

#### Block Processing

Before processing the transactions inside a block, all validators that no longer have a sufficient (non-zero) balance are deactivated. After processing the transactions inside a block, all validators that do not have a sufficient balance for the next second are deactivated.

##### Block Timestamp Validity Change

To ensure that validators are charged accurately, blocks are only considered valid if advancing the chain time would not cause a validator to have a negative balance. This upholds the expectation that the number of L1 validators remains constant between blocks.
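A minimal Python sketch of this validity rule may help. This is a simplified model, not AvalancheGo code: `max_valid_timestamp`, `fee_rate_at`, and the balance map are hypothetical stand-ins for the P-Chain's fee and validator state.

```python
def max_valid_timestamp(current_time, balances, fee_rate_at):
    """Latest timestamp a block may advance the chain to without
    driving any active validator's balance negative.

    balances:    {node_id: remaining balance in nAVAX}
    fee_rate_at: per-second fee (nAVAX) charged to every active
                 validator at a given second
    """
    t = current_time
    remaining = dict(balances)  # copy: don't mutate the caller's state
    while remaining:
        fee = fee_rate_at(t)
        # Stop at the second where the first validator could no
        # longer pay the per-second fee.
        if any(bal < fee for bal in remaining.values()):
            break
        for node_id in remaining:
            remaining[node_id] -= fee
        t += 1
    return t

# With a flat 512 nAVAX/s fee, a validator funded with 1_024 nAVAX
# can pay for two seconds, capping the block timestamp at t=2.
cap = max_valid_timestamp(0, {"a": 2_048, "b": 1_024}, lambda t: 512)
```

This is exactly the timestamp a compliant block builder would have to fall back to when the wall clock would exhaust some validator's balance.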
The block building protocol is modified to account for this change by first checking whether the wall clock time would remove any validator due to a lack of funds. If the wall clock time does not remove any L1 validators, it is used to build the block. If it does, the time at which the first validator gets removed is used.

##### Fee Calculation

The total validator fee assessed over $\Delta t$ is:

```python
# Calculate the fee to charge over Δt
def cost_over_time(V: int, T: int, x: int, Δt: int) -> int:
    cost = 0
    for _ in range(Δt):
        x = max(x + V - T, 0)
        cost += fake_exponential(M, x, K)
    return cost
```

#### Parameters

The parameters at activation are:

| Parameter | Definition                                  | Value         |
| --------- | ------------------------------------------- | ------------- |
| $T$       | target number of validators                 | 10_000        |
| $C$       | capacity number of validators               | 20_000        |
| $M$       | minimum fee rate                            | 512 nAVAX/s   |
| $K$       | constant to control the rate of fee changes | 1_246_488_515 |

An $M$ of 512 nAVAX/s equates to ~1.33 AVAX/month to run an L1 validator, so long as the total number of continuous-fee-paying L1 validators stays at or below $T$. $K$ was chosen to set the maximum fee-doubling rate to ~24 hours. This applies in the extreme case where the network has $C$ validators for a prolonged period of time; if the network has $T + 1$ validators, for example, the fee rate would double every ~27 years. A future ACP can adjust the parameters to increase $T$, reduce $M$, and/or modify $K$.

#### User Experience

L1 validators are continuously charged a fee, albeit a small one. This poses a challenge for L1 validators: how do they maintain their balance over time? Node clients should expose an API to track how much balance remains in the validator's account. This gives L1 validators a way to track how quickly the balance is decreasing and to top it up when needed. A nice byproduct of the above design is that the balance in the validator's account is claimable.
This means users can top up as much $AVAX as they want and rest assured knowing they can always retrieve any excess. The expectation is that most users will not interact with node clients or track when or by how much they need to top up their validator account. Wallet providers will abstract away most of this process. For users who desire more convenience, L1-as-a-Service providers will abstract away all of it.

## Backwards Compatibility

This new design for Subnets proposes a large rework of all L1-related mechanics. Rollout should be done on a going-forward basis to avoid service disruption for live Subnets. All current Subnet validators will be able to continue validating both the Primary Network and whatever Subnets they are validating. Any state execution changes must be coordinated through a mandatory upgrade. Implementors must take care to continue to verify the existing ruleset until the upgrade is activated. After activation, nodes should verify the new ruleset. Implementors must take care to only verify the presence of 2000 $AVAX prior to activation.

### Deactivated Transactions

- P-Chain
  - `TransformSubnetTx`

After this ACP is activated, Elastic Subnets will be disabled. `TransformSubnetTx` will not be accepted post-activation. As there are no Mainnet Elastic Subnets, there should be no production impact from this deactivation.

### New Transactions

- P-Chain
  - `ConvertSubnetToL1Tx`
  - `RegisterL1ValidatorTx`
  - `SetL1ValidatorWeightTx`
  - `DisableL1ValidatorTx`
  - `IncreaseL1ValidatorBalanceTx`

## Reference Implementation

ACP-77 was implemented and will be merged into AvalancheGo behind the `Etna` upgrade flag. The full body of work can be found tagged with the `acp77` label [here](https://github.com/ava-labs/avalanchego/issues?q=sort%3Aupdated-desc+label%3Aacp77). Since Etna is not yet activated, all new transactions introduced in ACP-77 will be rejected by AvalancheGo.
If any modifications are made to ACP-77 as part of the ACP process, the implementation must be updated prior to activation.

## Security Considerations

This ACP introduces Avalanche Layer 1s, a new network type that costs significantly less than Avalanche Subnets. This can lead to a large increase in the number of networks and, by extension, the number of validators. Each additional validator adds consistent RAM usage to the P-Chain. However, this should be appropriately metered by the continuous fee mechanism outlined above.

With the sovereignty L1s have from the P-Chain, L1 staking tokens are not locked on the P-Chain. This poses a security consideration for L1 validators: malicious chains can choose to remove validators at will and take any funds that the validator has locked on the L1. The P-Chain only guarantees that L1 validators can retrieve the remaining $AVAX `Balance` for their validator via a `DisableL1ValidatorTx`. Any assets on the L1 are entirely under the purview of the L1. The onus is on L1 validators to vet the L1's security before transferring any assets onto it.

With a long window of expiry (24 hours) for the Warp message in `RegisterL1ValidatorTx`, spam of validator registrations could lead to high memory pressure on the P-Chain. A future ACP can reduce the window of expiry if 24 hours proves to be a problem.

NodeIDs can be added to an L1's validator set involuntarily. However, it is important to note that any stake/rewards are _not_ at risk. A node operator who was added to a validator set involuntarily only needs to generate a new NodeID via key rotation, as there is no lock-up of any stake to create a NodeID. This is an explicit tradeoff for easier on-boarding of NodeIDs. It mirrors the Primary Network validators' guarantee of no stake/rewards at risk.

The continuous fee mechanism outlined above does not apply to inactive L1 validators since they are not stored in memory.
However, inactive L1 validators are persisted on disk, which can lead to persistent P-Chain state growth. A future ACP can introduce a mechanism to decrease the rate of P-Chain state growth or provide a state expiry path to reduce the amount of P-Chain state.

## Acknowledgements

Special thanks to [@StephenButtolph](https://github.com/StephenButtolph), [@aaronbuchwald](https://github.com/aaronbuchwald), and [@patrick-ogrady](https://github.com/patrick-ogrady) for their feedback on these ideas. Thank you to the broader Ava Labs Platform Engineering Group for their feedback on this ACP prior to publication.

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

# ACP-83: Dynamic Multidimensional Fees (/docs/acps/83-dynamic-multidimensional-fees)

---
title: "ACP-83: Dynamic Multidimensional Fees"
description: "Details for Avalanche Community Proposal 83: Dynamic Multidimensional Fees"
edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/83-dynamic-multidimensional-fees/README.md
---

| ACP | 83 |
| :--- | :--- |
| **Title** | Dynamic multidimensional fees for P-chain and X-chain |
| **Author(s)** | Alberto Benegiamo ([@abi87](https://github.com/abi87)) |
| **Status** | Stale |
| **Track** | Standards |
| **Superseded-By** | [ACP-103](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/103-dynamic-fees/README.md) |

## Abstract

Introduce a dynamic and multidimensional fee scheme for the P-Chain and X-Chain. Dynamic fees help preserve the stability of the chain, as they provide a feedback mechanism that increases the cost of resources when the network operates above its target utilization. Multidimensional fees ensure that high demand for orthogonal resources does not drive up the price of underutilized resources. For example, networks provide and consume orthogonal resources including, but not limited to, bandwidth, chain state, read/write throughput, and CPU.
By independently metering each resource, each can be granularly priced, keeping the network closer to optimal resource utilization.

## Motivation

The P-Chain and X-Chain currently have fixed fees, and in some cases those fees are fixed at zero. This makes transaction issuance predictable, but it does not provide a feedback mechanism to preserve chain stability under high load. In contrast, the C-Chain, which has the highest and most regular load among the chains on the Primary Network, already supports dynamic fees. This ACP proposes to introduce a similar dynamic fee mechanism for the P-Chain and X-Chain to further improve the Primary Network's stability and resilience under load. However, unlike the C-Chain, we propose a multidimensional fee scheme with an exponential update rule for each fee dimension. The [HyperSDK](https://github.com/ava-labs/hypersdk) already utilizes a multidimensional fee scheme with optional priority fees, and its efficiency is backed by [academic research](https://arxiv.org/abs/2208.07919).

Finally, we split the fee into two parts: a `base fee` and a `priority fee`. The `base fee` is calculated by the network each block to accurately price each resource at a given point in time. Any amount burnt beyond the base fee is treated as the `priority fee`, which buys faster transaction inclusion.

## Specification

We introduce the multidimensional scheme first, then show how the dynamic fee update rule applies to each fee dimension. Finally, we list the new block verification rules, valid once the new fee scheme activates.

### Multidimensional scheme components

We define four fee dimensions, `Bandwidth`, `Reads`, `Writes`, and `Compute`, to describe transaction complexity. In more detail:

- `Bandwidth` measures the transaction size in bytes, as encoded by the AvalancheGo codec. Byte length is a proxy for the network resources needed to disseminate the transaction.
- `Reads` measures the number of DB reads needed to verify the transaction.
DB reads include UTXO reads and any other state reads relevant to the specific transaction.
- `Writes` measures the number of DB writes following transaction verification. DB writes include UTXOs generated as outputs of the transaction and any other state writes relevant to the specific transaction.
- `Compute` measures the number of signatures to be verified, including UTXO signatures and those related to the authorization of specific operations.

For each fee dimension $i$, we define:

- *fee rate* $r_i$ as the price, denominated in AVAX, to be paid for a transaction with complexity $u_i$ along the fee dimension $i$.
- *base fee* as the minimal fee needed to accept a transaction, given by the formula

$$base \ fee = \sum_{i=0}^3 r_i \times u_i$$

- *priority fee* as an optional fee paid on top of the base fee to speed up the transaction's inclusion in a block.

### Dynamic scheme components

Fee rates are updated over time to allow fees to increase when the network is getting congested. Each new block is a potential source of congestion, as its transactions carry complexity that each validator must process to verify and eventually accept the block. The more complexity a block carries, and the more rapidly blocks are produced, the higher the congestion. We seek a scheme that rapidly increases the fees when block complexity goes above a defined threshold, and that equally rapidly decreases the fees once complexity goes down (because blocks carry fewer/simpler transactions, or because they are produced more slowly).

We define the desired threshold as a *target complexity rate* $T$: we would like to process, every second, a block whose complexity is $T$. Any complexity beyond that causes some congestion that we want to penalize via fees. In order to update fee rates, we track, for each block and each fee dimension, a parameter called the cumulative excess complexity.
Fee rates applied to a block will be defined in terms of cumulative excess complexity, as we show in the following.

Suppose that a block $B_t$ is the current chain tip, with the following features:

- $t$ is its timestamp.
- $\Delta C_t$ is the cumulative excess complexity along fee dimension $i$.

Say a new block $B_{t + \Delta T}$ is built on top of $B_t$, with the following features:

- $t + \Delta T$ is its timestamp.
- $C_{t + \Delta T}$ is its complexity along fee dimension $i$.

Then the fee rate $r_{t + \Delta T}$ applied to the block $B_{t + \Delta T}$ along dimension $i$ will be:

$$r_{t + \Delta T} = r^{min} \times e^{\frac{\max(0, \Delta C_t - T \times \Delta T)}{Denom}}$$

where

- $r^{min}$ is the minimal fee rate along fee dimension $i$,
- $T$ is the target complexity rate along fee dimension $i$,
- $Denom$ is a normalization constant for fee dimension $i$.

Moreover, once the block $B_{t + \Delta T}$ is accepted, the cumulative excess complexity is updated as follows:

$$\Delta C_{t + \Delta T} = \max\left(0, \Delta C_{t} - T \times \Delta T\right) + C_{t + \Delta T}$$

The fee rate update formula guarantees that fee rates increase if incoming blocks are complex (large $C_{t + \Delta T}$) and if blocks are emitted rapidly (small $\Delta T$). Symmetrically, fee rates decrease toward the minimum if incoming blocks are less complex and if blocks are produced less frequently.

The update formula has a few parameters to be tuned, independently, for each fee dimension. We defer the discussion of tuning to the [implementation section](#tuning-the-update-formula).

### Block verification rules

Upon activation of the dynamic multidimensional fee scheme, we modify block processing as follows:

- **Bound block complexity**. For each fee dimension $i$, we define a *maximal block complexity* $Max$. A block is only valid if its complexity $C$ satisfies $C \leq Max$.
- **Verify transaction fee**.
When verifying each transaction in a block, we confirm that it can cover its own base fee. Note that both the base fee and the optional priority fee are burned.

## User Experience

### How will wallets estimate the fees?

AvalancheGo nodes will provide new APIs exposing the current and expected fee rates, as they are likely to change block by block. Wallets can then use the fee rates to select UTXOs to pay the transaction fees. Moreover, the AvalancheGo implementation proposed above offers a `fees.Calculator` struct that can be reused by wallets and downstream projects to calculate fees.

### How will wallets be able to re-issue transactions at a higher fee?

Wallets should be able to simply re-issue the transaction, since the current AvalancheGo implementation drops mempool transactions whose fee rate is lower than the current one. More specifically, a transaction may be valid the moment it enters the mempool, and it won't be re-verified as long as it stays there. However, as soon as the transaction is selected for inclusion in the next block, it is re-verified against the latest preferred tip. If its fees are no longer sufficient by this time, the transaction is dropped, and the wallet can simply re-issue it at a higher fee, or wait for the fee rate to go down.

Note that priority fees offer some buffer against an increase in the fee rate. A transaction paying just the base fee will be evicted from the mempool in the face of a fee rate increase, while a transaction paying some extra priority fee may have enough buffer room to stay valid after some amount of fee increase.

### How do priority fees guarantee faster block inclusion?

The AvalancheGo mempool will be restructured to order transactions by priority fee. Transactions paying higher priority fees will be selected for block inclusion first, without violating any spend dependencies.

## Backwards Compatibility

Modifying the fee scheme for the P-Chain and X-Chain requires a mandatory upgrade for activation.
Moreover, wallets must be modified to properly handle the new fee scheme once it is activated.

## Reference Implementation

The implementation is split across multiple PRs:

- P-Chain work is tracked in this issue: [https://github.com/ava-labs/avalanchego/issues/2707](https://github.com/ava-labs/avalanchego/issues/2707)
- X-Chain work is tracked in this issue: [https://github.com/ava-labs/avalanchego/issues/2708](https://github.com/ava-labs/avalanchego/issues/2708)

A very important implementation step is tuning the update formula parameters for each chain and each fee dimension. We show here the principles we followed for tuning, along with a simulation based on historical data.

### Tuning the update formula

The basic idea is to measure the complexity of blocks already accepted and derive the parameters from it. You can find the historical data in [this repo](https://github.com/abi87/complexities). To simplify the exposition, I am purposefully ignoring chain specifics (like P-Chain proposal blocks). We can account for chain specifics while processing the historical data. Here are the principles:

- **Target block complexity rate $T$**: calculate the distribution of block complexity and pick a high enough quantile.
- **Max block complexity $Max$**: this is probably the trickiest parameter to set. Historically we had [pretty big transactions](https://subnets.avax.network/p-chain/tx/27pjHPRCvd3zaoQUYMesqtkVfZ188uP93zetNSqk3kSH1WjED1) (more than 1,000 referenced UTXOs). Setting a max block complexity so high that these big transactions are allowed is akin to setting no complexity cap at all. On the other side, we still want to allow, even encourage, UTXO consolidation, so we may want to allow transactions [like this](https://subnets.avax.network/p-chain/tx/2LxyHzbi2AGJ4GAcHXth6pj5DwVLWeVmog2SAfh4WrqSBdENhV).
A principled way to set the max block complexity may be the following:
  - Calculate the target block complexity rate (see previous point).
  - Calculate the median time elapsed between consecutive blocks.
  - The product of these two quantities gives us something like a target block complexity.
  - Set the max block complexity to, say, $\times 50$ the target value.
- **Normalization coefficient $Denom$**: I suggest we size it as follows:
  - Find the largest historical peak, i.e. the sequence of consecutive blocks which contained the most complexity in the shortest period of time.
  - Tune $Denom$ so that it would cause a $\times 10000$ increase in the fee rate for such a peak. This increase would push fees from the milliAVAX we normally pay under stable network conditions up to tens of AVAX.
- **Minimal fee rates $r^{min}$**: we could size them so that transaction fees do not change very much with respect to the currently fixed values.

We simulate below how the update formula would behave on a peak period from Avalanche Mainnet.
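Independently of the historical simulation, the mechanics of the update rule can be exercised with a toy Python sketch. All constants here (`R_MIN`, `TARGET`, `DENOM`) are illustrative placeholders, not proposed or tuned values.

```python
import math

# Toy sketch of the exponential fee-rate update rule for a single
# fee dimension. Constants are illustrative placeholders only.
R_MIN = 1      # minimal fee rate r_min
TARGET = 100   # target complexity rate T (complexity units per second)
DENOM = 500    # normalization constant Denom

def next_state(excess, block_complexity, dt):
    """Return (fee_rate, new_excess) for a block of the given
    complexity arriving dt seconds after the current tip."""
    decayed = max(0, excess - TARGET * dt)        # excess decays with elapsed time
    fee_rate = R_MIN * math.exp(decayed / DENOM)  # r = r_min * e^(excess / Denom)
    return fee_rate, decayed + block_complexity   # accepted block adds its complexity

# A burst of over-target blocks (500 units vs. a 100 units/s target)
# arriving every second drives the fee rate up...
excess, rate = 0, float(R_MIN)
for _ in range(10):
    rate, excess = next_state(excess, block_complexity=500, dt=1)

# ...while a single 60-second quiet gap decays the excess to zero,
# returning the rate to its minimum.
rate_after_gap, _ = next_state(excess, block_complexity=0, dt=60)
```

A sustained burst drives the rate up exponentially, and a short quiet period brings it straight back to the minimum, mirroring the fast ramp-up and ramp-down behavior described above.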


Figure 1 shows a peak period, starting with block [wqKJcvEv86TBpmJY2pAY7X65hzqJr3VnHriGh4oiAktWx5qT1](https://subnets.avax.network/p-chain/block/wqKJcvEv86TBpmJY2pAY7X65hzqJr3VnHriGh4oiAktWx5qT1) and going on for roughly 30 blocks. We only show `Bandwidth` for clarity, but the other fee dimensions have similar behaviour. The network load is much larger than the target, and sustained. Figure 2 shows the fee dynamics in response to the peak: fees scale up from a few milliAVAX to around 25 AVAX. Moreover, as soon as the peak is over and complexity goes back to the target value, fees are reduced very rapidly.

## Security Considerations

The new fee scheme is expected to help network stability, as it offers economic incentives for users to hold transaction issuance in times of high load. While fees are expected to remain generally low when the system is not loaded, a sudden load increase, with fuller blocks, would push the dynamic fee algorithm to increase fee rates. The increase is expected to continue until the load is reduced. Load reduction happens both by dropping unconfirmed transactions whose fee rate is no longer sufficient and by pushing users who optimize their transaction costs to delay transaction issuance until the fee rate goes down to an acceptable level.

Note finally that the exponential fee update mechanism detailed above is [proven](https://ethresear.ch/t/multidimensional-eip-1559/11651) to be robust against strategic behavior by users who delay transaction issuance and then suddenly push a bulk of transactions once the fee rate is low enough.

## Acknowledgements

Thanks to @StephenButtolph, @patrick-ogrady, and @dhrubabasu for their feedback on these ideas.

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# ACP-84: Table Preamble (/docs/acps/84-table-preamble)

---
title: "ACP-84: Table Preamble"
description: "Details for Avalanche Community Proposal 84: Table Preamble"
edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/84-table-preamble/README.md
---

| ACP | 84 |
| :--- | :--- |
| **Title** | Table Preamble for ACPs |
| **Author(s)** | Gauthier Leonard ([@Nuttymoon](https://github.com/Nuttymoon)) |
| **Status** | Activated |
| **Track** | Meta |

## Abstract

The current ACP template features a plain-text code block containing "RFC 822 style headers" as its `Preamble` (see [What belongs in a successful ACP?](https://github.com/avalanche-foundation/ACPs?tab=readme-ov-file#what-belongs-in-a-successful-acp)). This header includes multiple links to discussions, authors, and other ACPs. This ACP proposes to replace the `Preamble` code block with a Markdown table format (similar to what is used in [Ethereum EIPs](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-1.md)).

## Motivation

The current ACP `Preamble` is (i) not very readable and (ii) not user-friendly, as links are not clickable. The proposed table format aims to fix these issues.
## Specification

The following Markdown table format is proposed:

| ACP | PR Number |
| :----------------------------- | :--------------------- |
| **Title** | ACP title |
| **Author(s)** | A list of the author's name(s) and optionally contact info: FirstName LastName ([@GitHubUsername](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/84-table-preamble/README.md) or [email@address.com](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/84-table-preamble/README.md)) |
| **Status** | Proposed, Implementable, Activated, Stale ([Discussion](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/84-table-preamble/README.md)) |
| **Track** | Standards, Best Practices, Meta, Subnet |
| **Replaces (\*optional)** | [ACP-XX](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/84-table-preamble/README.md) |
| **Superseded-By (\*optional)** | [ACP-XX](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/84-table-preamble/README.md) |

It features all the existing fields of the current ACP template and would replace the current `Preamble` code block in [ACPs/TEMPLATE.md](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/TEMPLATE.md).

## Backwards Compatibility

Existing ACPs could be updated to use the new table format, but it is not mandatory.
## Reference Implementation

For this ACP, the table would look like this:

| ACP | 84 |
| :------------ | :----------------------------------------------------------------------------------- |
| **Title** | Table Preamble for ACPs |
| **Author(s)** | Gauthier Leonard ([@Nuttymoon](https://github.com/Nuttymoon)) |
| **Status** | Proposed ([Discussion](https://github.com/avalanche-foundation/ACPs/discussions/86)) |
| **Track** | Meta |

## Security Considerations

NA

## Open Questions

NA

## Copyright

Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).

# ACP-99: Validatorsetmanager Contract (/docs/acps/99-validatorsetmanager-contract)

---
title: "ACP-99: Validatorsetmanager Contract"
description: "Details for Avalanche Community Proposal 99: Validatorsetmanager Contract"
edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/ACPs/99-validatorsetmanager-contract/README.md
---

| ACP | 99 |
| :----------- | :-------------------------------------------------------------------------------------------------------------------------- |
| Title | Validator Manager Solidity Standard |
| Author(s) | Gauthier Leonard ([@Nuttymoon](https://github.com/Nuttymoon)), Cam Schultz ([@cam-schultz](https://github.com/cam-schultz)) |
| Status | Activated |
| Track | Best Practices |
| Dependencies | [ACP-77](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/77-reinventing-subnets/README.md) |

## Abstract

Define a standard Validator Manager Solidity smart contract to be deployed on any Avalanche EVM chain. This ACP relies on concepts introduced in [ACP-77 (Reinventing Subnets)](https://github.com/avalanche-foundation/ACPs/tree/main/ACPs/77-reinventing-subnets) and depends on ACP-77 being marked as `Implementable`.
## Motivation

[ACP-77 (Reinventing Subnets)](https://github.com/avalanche-foundation/ACPs/tree/main/ACPs/77-reinventing-subnets) opens the door to managing an L1 validator set (stored on the P-Chain) from any chain on the Avalanche Network. The P-Chain allows a Subnet to specify a "validator manager" when it is converted to an L1 using `ConvertSubnetToL1Tx`. This `(blockchainID, address)` pair is responsible for sending the ICM messages contained within `RegisterL1ValidatorTx` and `SetL1ValidatorWeightTx` on the P-Chain. This enables an on-chain program to add, modify the weight of, and remove validators.

On each validator set change, the P-Chain is willing to sign an `AddressedCall` to notify any on-chain program tracking the validator set. On-chain programs must be able to interpret this message so they can trigger the appropriate action. The two kinds of `AddressedCall`s [defined in ACP-77](https://github.com/avalanche-foundation/ACPs/tree/main/ACPs/77-reinventing-subnets#p-chain-warp-message-payloads) are `L1ValidatorRegistrationMessage` and `L1ValidatorWeightMessage`.

Given these assumptions and the fact that most of the active blockchains on Avalanche Mainnet are EVM-based, we propose `ACP99Manager` as the standard Solidity contract specification that can:

1. Hold relevant information about the current L1 validator set
2. Send validator set updates to the P-Chain by generating the `AddressedCall`s defined in ACP-77
3. Correctly update the validator set by interpreting notification messages received from the P-Chain
4. Be easily integrated into validator manager implementations that utilize various security models (e.g. Proof-of-Stake)

Having an audited and open-source reference implementation freely available will contribute to lowering the cost of launching L1s on Avalanche.
Once deployed, the `ACP99Manager` implementation contract can be used as the `Address` in the [`ConvertSubnetToL1Tx`](https://github.com/avalanche-foundation/ACPs/tree/main/ACPs/77-reinventing-subnets#convertsubnettol1tx).

## Specification

> **Note:** The naming convention followed for the interfaces and contracts is inspired by the way [OpenZeppelin Contracts](https://docs.openzeppelin.com/contracts/5.x/) are named after ERC standards, using `ACP` instead of `ERC`.

### Type Definitions

The following type definitions are used in the function signatures described in [Contract Specification](#contract-specification):

```solidity
/**
 * @notice Description of the conversion data used to convert
 * a subnet to an L1 on the P-Chain.
 * This data is the pre-image of a hash that is authenticated by the P-Chain
 * and verified by the Validator Manager.
 */
struct ConversionData {
    bytes32 subnetID;
    bytes32 validatorManagerBlockchainID;
    address validatorManagerAddress;
    InitialValidator[] initialValidators;
}

/// @notice Specifies an initial validator, used in the conversion data.
struct InitialValidator {
    bytes nodeID;
    bytes blsPublicKey;
    uint64 weight;
}

/// @notice L1 validator status.
enum ValidatorStatus {
    Unknown,
    PendingAdded,
    Active,
    PendingRemoved,
    Completed,
    Invalidated
}

/**
 * @notice Specifies the owner of a validator's remaining balance or disable owner on the P-Chain.
 * P-Chain addresses are also 20 bytes, so we use the address type to represent them.
 */
struct PChainOwner {
    uint32 threshold;
    address[] addresses;
}

/**
 * @notice Contains the active state of a Validator.
 * @param status The validator status.
 * @param nodeID The NodeID of the validator.
 * @param startingWeight The weight of the validator at the time of registration.
 * @param sentNonce The current weight update nonce sent by the manager.
 * @param receivedNonce The highest nonce received from the P-Chain.
 * @param weight The current weight of the validator.
 * @param startTime The start time of the validator.
 * @param endTime The end time of the validator.
 */
struct Validator {
    ValidatorStatus status;
    bytes nodeID;
    uint64 startingWeight;
    uint64 sentNonce;
    uint64 receivedNonce;
    uint64 weight;
    uint64 startTime;
    uint64 endTime;
}
```

#### About `Validator`s

A `Validator` represents the continuous time frame during which a node is part of the validator set. Each `Validator` is identified by its `validationID`.

If a validator was added as part of the initial set of continuous dynamic fee paying validators, its `validationID` is the SHA256 hash of the 36 bytes resulting from concatenating the 32-byte `ConvertSubnetToL1Tx` transaction ID and the 4-byte index of the initial validator within the transaction.

If a validator was added to the L1's validator set post-conversion, its `validationID` is the SHA256 of the payload of the `AddressedCall` in the `RegisterL1ValidatorTx` used to add it, as defined in ACP-77.

### Contract Specification

The standard `ACP99Manager` functionality is defined by a set of events, public methods, and private methods that must be included by a compliant implementation. For a full implementation, please see the [Reference Implementation](#reference-implementation).

#### Events

```solidity
/**
 * @notice Emitted when an initial validator is registered.
 * @notice The field index is the index of the initial validator in the conversion data.
 * This is used along with the subnetID as the ACP-118 justification in
 * signature requests to P-Chain validators over an L1ValidatorRegistrationMessage
 * when removing the validator.
 */
event RegisteredInitialValidator(
    bytes32 indexed validationID,
    bytes20 indexed nodeID,
    bytes32 indexed subnetID,
    uint64 weight,
    uint32 index
);

/// @notice Emitted when a validator registration to the L1 is initiated.
event InitiatedValidatorRegistration(
    bytes32 indexed validationID,
    bytes20 indexed nodeID,
    bytes32 registrationMessageID,
    uint64 registrationExpiry,
    uint64 weight
);

/// @notice Emitted when a validator registration to the L1 is completed.
event CompletedValidatorRegistration(bytes32 indexed validationID, uint64 weight);

/// @notice Emitted when removal of an L1 validator is initiated.
event InitiatedValidatorRemoval(
    bytes32 indexed validationID,
    bytes32 validatorWeightMessageID,
    uint64 weight,
    uint64 endTime
);

/// @notice Emitted when removal of an L1 validator is completed.
event CompletedValidatorRemoval(bytes32 indexed validationID);

/// @notice Emitted when a validator weight update is initiated.
event InitiatedValidatorWeightUpdate(
    bytes32 indexed validationID,
    uint64 nonce,
    bytes32 weightUpdateMessageID,
    uint64 weight
);

/// @notice Emitted when a validator weight update is completed.
event CompletedValidatorWeightUpdate(bytes32 indexed validationID, uint64 nonce, uint64 weight);
```

#### Public Methods

```solidity
/// @notice Returns the SubnetID of the L1 tied to this manager.
function subnetID() public view returns (bytes32 id);

/// @notice Returns the validator details for a given validation ID.
function getValidator(bytes32 validationID) public view returns (Validator memory validator);

/// @notice Returns the total weight of the current L1 validator set.
function l1TotalWeight() public view returns (uint64 weight);

/**
 * @notice Verifies and sets the initial validator set for the chain by consuming a
 * SubnetToL1ConversionMessage from the P-Chain.
 *
 * Emits a {RegisteredInitialValidator} event for each initial validator in {conversionData}.
 *
 * @param conversionData The Subnet conversion message data used to recompute and verify against the ConversionID.
 * @param messageIndex The index of the SubnetToL1ConversionMessage ICM message containing the
 * ConversionID to be verified against the provided {conversionData}.
 */
function initializeValidatorSet(
    ConversionData calldata conversionData,
    uint32 messageIndex
) public;

/**
 * @notice Completes the validator registration process by consuming an acknowledgement of the registration of a
 * validationID from the P-Chain. The validator should not be considered active until this method is successfully called.
 *
 * Emits a {CompletedValidatorRegistration} event on success.
 *
 * @param messageIndex The index of the L1ValidatorRegistrationMessage to be received providing the acknowledgement.
 * @return validationID The ID of the registered validator.
 */
function completeValidatorRegistration(uint32 messageIndex) public returns (bytes32 validationID);

/**
 * @notice Completes validator removal by consuming an L1ValidatorRegistrationMessage from the P-Chain acknowledging
 * that the validator has been removed.
 *
 * Emits a {CompletedValidatorRemoval} on success.
 *
 * @param messageIndex The index of the L1ValidatorRegistrationMessage.
 */
function completeValidatorRemoval(uint32 messageIndex) public returns (bytes32 validationID);

/**
 * @notice Completes the validator weight update process by consuming an L1ValidatorWeightMessage from the P-Chain
 * acknowledging the weight update. The validator weight change should not have any effect until this method is successfully called.
 *
 * Emits a {CompletedValidatorWeightUpdate} event on success.
 *
 * @param messageIndex The index of the L1ValidatorWeightMessage message to be received providing the acknowledgement.
 * @return validationID The ID of the validator, retrieved from the L1ValidatorWeightMessage.
 * @return nonce The nonce of the validator, retrieved from the L1ValidatorWeightMessage.
 */
function completeValidatorWeightUpdate(uint32 messageIndex) public returns (bytes32 validationID, uint64 nonce);
```

> Note: While `getValidator` provides a way to fetch a `Validator` based on its `validationID`, no method that returns all active validators is specified.
This is because a `mapping` is a reasonable way to store active validators internally, and Solidity `mapping`s are not iterable. This can be worked around by storing additional indexing metadata in the contract, but not all applications may wish to incur that added complexity.

#### Private Methods

The following methods are specified as `internal` to account for different semantics of initiating validator set changes, such as checking uptime attested to via ICM message, or transferring funds to be locked as stake. Rather than broaden the definitions of these functions to cover all use cases, we leave it to the implementer to define a suitable external interface and call the appropriate `ACP99Manager` function internally.

```solidity
/**
 * @notice Initiates validator registration by issuing a RegisterL1ValidatorMessage. The validator should
 * not be considered active until completeValidatorRegistration is called.
 *
 * Emits an {InitiatedValidatorRegistration} event on success.
 *
 * @param nodeID The ID of the node to add to the L1.
 * @param blsPublicKey The BLS public key of the validator.
 * @param remainingBalanceOwner The remaining balance owner of the validator.
 * @param disableOwner The disable owner of the validator.
 * @param weight The weight of the node on the L1.
 * @return validationID The ID of the registered validator.
 */
function _initiateValidatorRegistration(
    bytes memory nodeID,
    bytes memory blsPublicKey,
    PChainOwner memory remainingBalanceOwner,
    PChainOwner memory disableOwner,
    uint64 weight
) internal returns (bytes32 validationID);

/**
 * @notice Initiates validator removal by issuing an L1ValidatorWeightMessage with the weight set to zero.
 * The validator should be considered inactive as soon as this function is called.
 *
 * Emits an {InitiatedValidatorRemoval} on success.
 *
 * @param validationID The ID of the validator to remove.
 */
function _initiateValidatorRemoval(bytes32 validationID) internal;

/**
 * @notice Initiates a validator weight update by issuing an L1ValidatorWeightMessage with a nonzero weight.
 * The validator weight change should not have any effect until completeValidatorWeightUpdate is successfully called.
 *
 * Emits an {InitiatedValidatorWeightUpdate} event on success.
 *
 * @param validationID The ID of the validator to modify.
 * @param weight The new weight of the validator.
 * @return nonce The validator nonce associated with the weight change.
 * @return messageID The ID of the L1ValidatorWeightMessage used to update the validator's weight.
 */
function _initiateValidatorWeightUpdate(
    bytes32 validationID,
    uint64 weight
) internal returns (uint64 nonce, bytes32 messageID);
```

##### About `DisableL1ValidatorTx`

In addition to calling `_initiateValidatorRemoval`, a validator may be disabled by issuing a `DisableL1ValidatorTx` on the P-Chain. This transaction allows the `DisableOwner` of a validator to disable it directly from the P-Chain to claim the unspent `Balance` linked to the validator of a failed L1. This transaction is therefore issued directly on the P-Chain and is not meant to be initiated from the `Manager` contract.

## Backwards Compatibility

`ACP99Manager` is a reference specification. As such, it doesn't have any impact on the current behavior of the Avalanche protocol.

## Reference Implementation

A reference implementation will be provided in Ava Labs' [ICM Contracts](https://github.com/ava-labs/icm-contracts/tree/main/contracts/validator-manager) repository. This reference implementation will need to be updated to conform to `ACP99Manager` before this ACP may be marked `Implementable`.

### Example Integrations

`ACP99Manager` is designed to be easily incorporated into any architecture. Two example integrations are included in this ACP, each of which uses a different architecture.
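Before looking at the integrations, it may help to see how the `validationID`s described in the About `Validator`s section are derived for initial validators. The following is a minimal Python sketch (Python used purely for illustration; the 4-byte index is assumed to be big-endian, consistent with Avalanche serialization conventions, and the transaction ID is a placeholder):

```python
import hashlib
import struct

def initial_validator_validation_id(conversion_tx_id: bytes, index: int) -> bytes:
    """validationID of an initial validator: SHA-256 over the 36 bytes formed by
    the 32-byte ConvertSubnetToL1Tx transaction ID followed by the 4-byte index."""
    assert len(conversion_tx_id) == 32, "transaction IDs are 32 bytes"
    preimage = conversion_tx_id + struct.pack(">I", index)  # index as big-endian uint32
    assert len(preimage) == 36
    return hashlib.sha256(preimage).digest()

# All-zero placeholder transaction ID, for illustration only.
tx_id = bytes(32)
vid = initial_validator_validation_id(tx_id, 0)
print(vid.hex())
```

Post-conversion validators follow the other rule above (SHA-256 of the `AddressedCall` payload) and are not covered by this sketch.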
#### Multi-contract Design

The multi-contract design consists of a contract that implements `ACP99Manager`, and separate "security module" contracts that implement security models, such as PoS or PoA. Each `ACP99Manager` implementation contract is associated with one or more "security modules" that are the only contracts allowed to call the `ACP99Manager` functions that initiate validator set changes (`initiateValidatorRegistration` and `initiateValidatorWeightUpdate`). Every time a validator is added/removed or a weight change is initiated, the `ACP99Manager` implementation will, in turn, call the corresponding function of the "security module" (`handleValidatorRegistration` or `handleValidatorWeightChange`). We recommend that the "security modules" reference an immutable `ACP99Manager` contract address for security reasons.

It is up to the "security module" to decide what action to take when a validator is added/removed or a weight change is confirmed by the P-Chain. Such actions could be starting the withdrawal period and allocating rewards in a PoS L1.

```mermaid
graph LR
    Safe -.->|Own| SecurityModule
    Safe -.->|Own| Manager
    SecurityModule <-.->|Reference| Manager
    Safe -->|addValidator| SecurityModule
    SecurityModule -->|initiateValidatorRegistration| Manager
    Manager -->|sendWarpMessage| P
    P -->|completeValidatorRegistration| Manager
    Manager -->|handleValidatorRegistration| SecurityModule
```

"Security modules" could implement PoS, Liquid PoS, etc. The specification of such smart contracts is out of the scope of this ACP.

A work in progress implementation is available in the [Suzaku Contracts Library](https://github.com/suzaku-network/suzaku-contracts-library/blob/main/README.md#acp99-contracts-library) repository. It will be updated until this ACP is considered `Implementable` based on the outcome of the discussion.
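The round trip in the diagram above can also be illustrated with a toy, non-Solidity model. In the Python sketch below (all class and method names are invented for illustration and do not belong to the specification), the security module initiates a registration through the manager, and the manager calls back into the module once the simulated P-Chain acknowledgement is consumed:

```python
class ToyManager:
    """Toy stand-in for an ACP99Manager implementation; no real contract logic."""
    def __init__(self):
        self.security_module = None
        self.pending = {}   # validationID -> (nodeID, weight) awaiting P-Chain ack
        self.active = {}    # validationID -> (nodeID, weight)

    def initiate_validator_registration(self, caller, node_id, weight):
        # Only the associated security module may initiate validator set changes.
        assert caller is self.security_module, "unauthorized caller"
        validation_id = f"val-{node_id}"
        self.pending[validation_id] = (node_id, weight)
        # A real manager would emit a RegisterL1ValidatorMessage for the P-Chain here.
        return validation_id

    def complete_validator_registration(self, validation_id):
        # Models consuming the P-Chain's acknowledgement message.
        node_id, weight = self.pending.pop(validation_id)
        self.active[validation_id] = (node_id, weight)
        # Notify the security module so it can act (start rewards, etc.).
        self.security_module.handle_validator_registration(validation_id, weight)

class ToySecurityModule:
    """Toy stand-in for a PoS/PoA security module."""
    def __init__(self, manager):
        self.manager = manager
        manager.security_module = self
        self.registered = []

    def add_validator(self, node_id, weight):
        return self.manager.initiate_validator_registration(self, node_id, weight)

    def handle_validator_registration(self, validation_id, weight):
        self.registered.append((validation_id, weight))

manager = ToyManager()
module = ToySecurityModule(manager)
vid = module.add_validator("NodeID-example", 100)
manager.complete_validator_registration(vid)  # simulated P-Chain ack arrives
print(manager.active)
```

The key design point mirrored here is the authorization check: only the referenced security module may drive validator set changes through the manager.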
Ava Labs' V2 Validator Manager also implements this architecture for a Proof-of-Stake security module, and is available in their [ICM Contracts Repository](https://github.com/ava-labs/icm-contracts/tree/validator-manager-v2.0.0/contracts/validator-manager/StakingManager.sol).

#### Single-contract Design

The single-contract design consists of a class hierarchy with the base class implementing `ACP99Manager`. The `PoAValidatorManager` child class in the below diagram may be swapped out for another class implementing a different security model, such as PoS.

```mermaid
classDiagram
    class ACP99Manager
    <<interface>> ACP99Manager
    class ValidatorManager {
        completeValidatorRegistration
    }
    <<abstract>> ValidatorManager
    class PoAValidatorManager {
        initiateValidatorRegistration
        initiateEndValidation
        completeEndValidation
    }
    ACP99Manager <|-- ValidatorManager
    ValidatorManager <|-- PoAValidatorManager
```

No reference implementation is provided for this architecture in particular, but Ava Labs' V1 [Validator Manager](https://github.com/ava-labs/icm-contracts/tree/validator-manager-v1.0.0/contracts/validator-manager) implements much of the functional behavior described by the specification. It predates the specification, however, so there are some deviations. It should at most be treated as a model of an approximate implementation of this standard.

## Security Considerations

The audit process of `ACP99Manager` and reference implementations is of the utmost importance for the future of the Avalanche ecosystem, as most L1s would rely upon it to secure their L1.

## Open Questions

### Is there an interest to keep historical information about the validator set on the manager chain?

It is left to the implementer to decide if `getValidator` should return information about historical validators. Information about past validator performance may not be relevant for all applications (e.g. PoA has no need to know about past validators' uptimes). This information will still be available in archive nodes and offchain tools (e.g.
explorers), but it is not enforced at the contract level. ### Should `ACP99Manager` include a churn control mechanism? The Ava Labs [implementation](https://github.com/ava-labs/icm-contracts/blob/main/contracts/validator-manager/ValidatorManager.sol) of the `ValidatorManager` contract includes a churn control mechanism that prevents too much weight from being added or removed from the validator set in a short amount of time. Excessive churn can cause consensus failures, so it may be appropriate to require that churn tracking is implemented in some capacity. ## Acknowledgments Special thanks to [@leopaul36](https://github.com/leopaul36), [@aaronbuchwald](https://github.com/aaronbuchwald), [@dhrubabasu](https://github.com/dhrubabasu), [@minghinmatthewlam](https://github.com/minghinmatthewlam) and [@michaelkaplan13](https://github.com/michaelkaplan13) for their reviews of previous versions of this ACP! ## Copyright Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/). # Avalanche Community Proposals (ACPs) (/docs/acps) --- title: "Avalanche Community Proposals (ACPs)" description: "Official framework for proposing improvements and gathering consensus around changes to the Avalanche Network" edit_url: https://github.com/avalanche-foundation/ACPs/edit/main/README.md ---
## What is an Avalanche Community Proposal (ACP)?

An Avalanche Community Proposal is a concise document that introduces a change or best practice for adoption on the [Avalanche Network](https://www.avax.com). ACPs should provide clear technical specifications of any proposals and a compelling rationale for their adoption.

ACPs are an open framework for proposing improvements and gathering consensus around changes to the Avalanche Network. ACPs can be proposed by anyone and will be merged into this repository as long as they are well-formatted and coherent. Once an overwhelming majority of the Avalanche Network/Community have [signaled their support for an ACP](https://docs.avax.network/nodes/configure/avalanchego-config-flags#avalanche-community-proposals), it may be scheduled for activation on the Avalanche Network by Avalanche Network Clients (ANCs). It is ultimately up to members of the Avalanche Network/Community to adopt ACPs they support by running a compatible ANC, such as [AvalancheGo](https://github.com/ava-labs/avalanchego).

## ACP Tracks

There are four kinds of ACP:

* A `Standards Track` ACP describes a change to the design or function of the Avalanche Network, such as a change to the P2P networking protocol, P-Chain design, Subnet architecture, or any change/addition that affects the interoperability of Avalanche Network Clients (ANCs).
* A `Best Practices Track` ACP describes a design pattern or common interface that should be used across the Avalanche Network to make it easier to integrate with Avalanche or for Subnets to interoperate with each other. This would include things like proposing a smart contract interface, not proposing a change to how smart contracts are executed.
* A `Meta Track` ACP describes a change to the ACP process or suggests a new way for the Avalanche Community to collaborate.
* A `Subnet Track` ACP describes a change to a particular Subnet. This would include things like configuration changes or coordinated Subnet upgrades.
## ACP Statuses

There are four statuses of an ACP:

* A `Proposed` ACP has been merged into the main branch of the ACP repository. It is actively being discussed by the Avalanche Community and may be modified based on feedback.
* An `Implementable` ACP is considered "ready for implementation" by the author(s) and will no longer change meaningfully from its current form (which would require a new ACP).
* An `Activated` ACP has been activated on the Avalanche Network via a coordinated upgrade by the Avalanche Community. Once an ACP is `Activated`, it is locked.
* A `Stale` ACP has been abandoned by its author(s) because it is not supported by the Avalanche Community or has been replaced with another ACP.

## ACP Workflow

### Step 0: Think of a Novel Improvement to Avalanche

The ACP process begins with a new idea for Avalanche. Each potential ACP must have an author(s): someone who writes the ACP using the style and format described below, shepherds the associated GitHub Discussion, and attempts to build consensus around the idea. Note that ideas and any resulting ACPs are public. Authors should not post any ideas or anything in an ACP that the author wants to keep confidential or to keep ownership rights in (such as intellectual property rights).

### Step 1: Post Your Idea to [GitHub Discussions](https://github.com/avalanche-foundation/ACPs/discussions/categories/ideas)

The author(s) should first attempt to ascertain whether there is support for their idea by posting in the "Ideas" category of GitHub Discussions. Vetting an idea publicly before going as far as writing an ACP is meant to save both the potential author(s) and the wider Avalanche Community time. Asking the Avalanche Community first if an idea is original helps prevent too much time being spent on something that is guaranteed to be rejected based on prior discussions (searching the Internet does not always do the trick).
It also helps to make sure the idea is applicable to the entire community and not just the author(s). Small enhancements or patches often don't need standardization between multiple projects; these don't need an ACP and should be injected into the relevant development workflow with a patch submission to the applicable ANC issue tracker. ### Step 2: Propose an ACP via [Pull Request](https://github.com/avalanche-foundation/ACPs/pulls) Once the author(s) feels confident that an idea has a decent chance of acceptance, an ACP should be drafted and submitted as a pull request (PR). This draft must be written in ACP style as described below. It is highly recommended that a single ACP contain a single key proposal or new idea. The more focused the ACP, the more successful it tends to be. If in doubt, split your ACP into several well-focused ones. The PR number of the ACP will become its assigned number. ### Step 3: Build Consensus on [GitHub Discussions](https://github.com/avalanche-foundation/ACPs/discussions/categories/discussion) and Provide an Implementation (if Applicable) ACPs will be merged by ACP maintainers if the proposal is generally well-formatted and coherent. ACP editors will attempt to merge anything worthy of discussion, regardless of feasibility or complexity, that is not a duplicate or incomplete. After an ACP is merged, an official GitHub Discussion will be opened for the ACP and linked to the proposal for community discussion. It is recommended for author(s) or supportive Avalanche Community members to post an accompanying non-technical overview of their ACP for general consumption in this GitHub Discussion. The ACP should be reviewed and broadly supported before a reference implementation is started, again to avoid wasting the author(s) and the Avalanche Community's time, unless a reference implementation will aid people in studying the ACP. 
### Step 4: Mark ACP as `Implementable` via [Pull Request](https://github.com/avalanche-foundation/ACPs/pulls) Once an ACP is considered complete by the author(s), it should be marked as `Implementable`. At this point, all open questions should be addressed and an associated reference implementation should be provided (if applicable). As mentioned earlier, the Avalanche Foundation meets periodically to recommend the ratification of specific ACPs but it is ultimately up to members of the Avalanche Network/Community to adopt ACPs they support by running a compatible Avalanche Network Client (ANC), such as [AvalancheGo](https://github.com/ava-labs/avalanchego). ### [Optional] Step 5: Mark ACP as `Stale` via [Pull Request](https://github.com/avalanche-foundation/ACPs/pulls) An ACP can be superseded by a different ACP, rendering the original obsolete. If this occurs, the original ACP will be marked as `Stale`. ACPs may also be marked as `Stale` if the author(s) abandon work on it for a prolonged period of time (12+ months). ACPs may be reopened and moved back to `Proposed` if the author(s) restart work. ### Maintenance ACP maintainers will only merge PRs updating an ACP if it is created or approved by at least one of the author(s). ACP maintainers are not responsible for ensuring ACP author(s) approve the PR. ACP author(s) are expected to review PRs that target their unlocked ACP (`Proposed` or `Implementable`). Any PRs opened against a locked ACP (`Activated` or `Stale`) will not be merged by ACP maintainers. ## What belongs in a successful ACP? Each ACP must have the following parts: * `Preamble`: Markdown table containing metadata about the ACP, including the ACP number, a short descriptive title, the author(s), and optionally the contact info for each author, etc. 
* `Abstract`: Concise (~200 word) description of the ACP * `Motivation`: Rationale for adopting the ACP and the specific issue/challenge/opportunity it addresses * `Specification`: Complete description of the semantics of any change should allow any ANC/Avalanche Community member to implement the ACP * `Security Considerations`: Security implications of the proposed ACP Each ACP can have the following parts: * `Open Questions`: Questions that should be resolved before implementation Each `Standards Track` ACP must have the following parts: * `Backwards Compatibility`: List of backwards incompatible changes required to implement the ACP and their impact on the Avalanche Community * `Reference Implementation`: Code, documentation, and telemetry (from a local network) of the ACP change Each `Best Practices Track` ACP can have the following parts: * `Backwards Compatibility`: List of backwards incompatible changes required to implement the ACP and their impact on the Avalanche Community * `Reference Implementation`: Code, documentation, and telemetry (from a local network) of the ACP change ### ACP Formats and Templates Each ACP is allocated a unique subdirectory in the `ACPs` directory. The name of this subdirectory must be of the form `N-T` where `N` is the ACP number and `T` is the ACP title with any spaces replaced by hyphens. ACPs must be written in [markdown](https://daringfireball.net/projects/markdown/syntax) format and stored at `ACPs/N-T/README.md`. Please see the [ACP template](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/TEMPLATE.md) for an example of the correct layout. ### Auxiliary Files ACPs may include auxiliary files such as diagrams or code snippets. Such files should be stored in the ACP's subdirectory (`ACPs/N-T/*`). There is no required naming convention for auxiliary files. ### Waived Copyright ACP authors must waive any copyright claims before an ACP will be merged into the repository. 
This can be done by including the following text in an ACP: ```text ## Copyright Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/). ``` ## Proposals _You can view the status of each ACP on the [ACP Tracker](https://github.com/orgs/avalanche-foundation/projects/1/views/1)._ | Number | Title | Author(s) | Type | |:-------|:------|:-------|:-----| |[13](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/13-subnet-only-validators/README.md)|Subnet-Only Validators (SOVs)|Patrick O'Grady (contact@patrickogrady.xyz)|Standards| |[20](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/20-ed25519-p2p/README.md)|Ed25519 p2p|Dhruba Basu ([@dhrubabasu](https://github.com/dhrubabasu))|Standards| |[23](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/23-p-chain-native-transfers/README.md)|P-Chain Native Transfers|Dhruba Basu ([@dhrubabasu](https://github.com/dhrubabasu))|Standards| |[24](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/24-shanghai-eips/README.md)|Activate Shanghai EIPs on C-Chain|Darioush Jalali ([@darioush](https://github.com/darioush))|Standards| |[25](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/25-vm-application-errors/README.md)|Virtual Machine Application Errors|Joshua Kim ([@joshua-kim](https://github.com/joshua-kim))|Standards| |[30](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/30-avalanche-warp-x-evm/README.md)|Integrate Avalanche Warp Messaging into the EVM|Aaron Buchwald (aaron.buchwald56@gmail.com)|Standards| |[31](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/31-enable-subnet-ownership-transfer/README.md)|Enable Subnet Ownership Transfer|Dhruba Basu ([@dhrubabasu](https://github.com/dhrubabasu))|Standards| |[41](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/41-remove-pending-stakers/README.md)|Remove Pending Stakers|Dhruba Basu ([@dhrubabasu](https://github.com/dhrubabasu))|Standards| 
|[62](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/62-disable-addvalidatortx-and-adddelegatortx/README.md)|Disable `AddValidatorTx` and `AddDelegatorTx`|Jacob Everly (https://twitter.com/JacobEv3rly), Dhruba Basu ([@dhrubabasu](https://github.com/dhrubabasu))|Standards| |[75](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/75-acceptance-proofs/README.md)|Acceptance Proofs|Joshua Kim ([@joshua-kim](https://github.com/joshua-kim))|Standards| |[77](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/77-reinventing-subnets/README.md)|Reinventing Subnets|Dhruba Basu ([@dhrubabasu](https://github.com/dhrubabasu))|Standards| |[83](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/83-dynamic-multidimensional-fees/README.md)|Dynamic Multidimensional Fees for P-Chain and X-Chain|Alberto Benegiamo ([@abi87](https://github.com/abi87))|Standards| |[84](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/84-table-preamble/README.md)|Table Preamble for ACPs|Gauthier Leonard ([@Nuttymoon](https://github.com/Nuttymoon))|Meta| |[99](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/99-validatorsetmanager-contract/README.md)|Validator Manager Solidity Standard|Gauthier Leonard ([@Nuttymoon](https://github.com/Nuttymoon)), Cam Schultz ([@cam-schultz](https://github.com/cam-schultz))|Best Practices| |[103](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/103-dynamic-fees/README.md)|Add Dynamic Fees to the X-Chain and P-Chain|Dhruba Basu ([@dhrubabasu](https://github.com/dhrubabasu)), Alberto Benegiamo ([@abi87](https://github.com/abi87)), Stephen Buttolph ([@StephenButtolph](https://github.com/StephenButtolph))|Standards| |[108](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/108-evm-event-importing/README.md)|EVM Event Importing|Michael Kaplan ([@michaelkaplan13](https://github.com/michaelkaplan13))|Best Practices| 
|[113](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/113-provable-randomness/README.md)|Provable Virtual Machine Randomness|Tsachi Herman ([@tsachiherman](https://github.com/tsachiherman))|Standards| |[118](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/118-warp-signature-request/README.md)|Standardized P2P Warp Signature Request Interface|Cam Schultz ([@cam-schultz](https://github.com/cam-schultz))|Best Practices| |[125](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/125-basefee-reduction/README.md)|Reduce C-Chain minimum base fee from 25 nAVAX to 1 nAVAX|Stephen Buttolph ([@StephenButtolph](https://github.com/StephenButtolph)), Darioush Jalali ([@darioush](https://github.com/darioush))|Standards| |[131](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/131-cancun-eips/README.md)|Activate Cancun EIPs on C-Chain and Subnet-EVM chains|Darioush Jalali ([@darioush](https://github.com/darioush)), Ceyhun Onur ([@ceyonur](https://github.com/ceyonur))|Standards| |[151](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/151-use-current-block-pchain-height-as-context/README.md)|Use current block P-Chain height as context for state verification|Ian Suvak ([@iansuvak](https://github.com/iansuvak))|Standards| |[176](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/176-dynamic-evm-gas-limit-and-price-discovery-updates/README.md)|Dynamic EVM Gas Limits and Price Discovery Updates|Stephen Buttolph ([@StephenButtolph](https://github.com/StephenButtolph)), Michael Kaplan ([@michaelkaplan13](https://github.com/michaelkaplan13))|Standards| |[181](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/181-p-chain-epoched-views/README.md)|P-Chain Epoched Views|Cam Schultz ([@cam-schultz](https://github.com/cam-schultz))|Standards| |[191](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/191-seamless-l1-creation/README.md)|Seamless L1 Creations (CreateL1Tx)|Martin Eckardt 
([@martineckardt](https://github.com/martineckardt)), Aaron Buchwald ([aaronbuchwald](https://github.com/aaronbuchwald)), Michael Kaplan ([@michaelkaplan13](https://github.com/michaelkaplan13)), Meag FitzGerald ([@meaghanfitzgerald](https://github.com/meaghanfitzgerald))|Standards| |[194](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/194-streaming-asynchronous-execution/README.md)|Streaming Asynchronous Execution|Arran Schlosberg ([@ARR4N](https://github.com/ARR4N)), Stephen Buttolph ([@StephenButtolph](https://github.com/StephenButtolph))|Standards| |[204](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/204-precompile-secp256r1/README.md)|Precompile for secp256r1 Curve Support|Santiago Cammi ([@scammi](https://github.com/scammi)), Arran Schlosberg ([@ARR4N](https://github.com/ARR4N))|Standards| |[209](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/209-eip7702-style-account-abstraction/README.md)|EIP-7702-style Set Code for EOAs|Stephen Buttolph ([@StephenButtolph](https://github.com/StephenButtolph)), Arran Schlosberg ([@ARR4N](https://github.com/ARR4N)), Aaron Buchwald ([aaronbuchwald](https://github.com/aaronbuchwald)), Michael Kaplan ([@michaelkaplan13](https://github.com/michaelkaplan13))|Standards| |[224](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/224-dynamic-gas-limit-in-subnet-evm/README.md)|Introduce ACP-176-Based Dynamic Gas Limits and Fee Manager Precompile in Subnet-EVM|Ceyhun Onur ([@ceyonur](https://github.com/ceyonur)), Michael Kaplan ([@michaelkaplan13](https://github.com/michaelkaplan13))|Standards| |[226](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/226-dynamic-minimum-block-times/README.md)|Dynamic Minimum Block Times|Stephen Buttolph ([@StephenButtolph](https://github.com/StephenButtolph)), Michael Kaplan ([@michaelkaplan13](https://github.com/michaelkaplan13))|Standards| 
|[236](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/236-auto-renewed-staking/README.md)|Auto-Renewed Staking|Razvan Angheluta ([@rrazvan1](https://github.com/rrazvan1))|Standards| |[247](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/247-delegation-multiplier-increase-maximum-validator-weight-reduction/README.md)|Delegation Multiplier Increase & Maximum Validator Weight Reduction|Giacomo Barbieri ([@ijaack94](https://x.com/ijaack94)), BENQI ([@benqifinance](https://x.com/benqifinance))|Standards| |[256](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/256-hardware-recommendations/README.md)|Update Hardware Requirements for Primary Network Nodes|Aaron Buchwald ([@aaronbuchwald](https://github.com/aaronbuchwald)), Martin Eckardt ([@martineckardt](https://github.com/martineckardt)), Meaghan FitzGerald ([@meaghanfitzgerald](https://github.com/meaghanfitzgerald))|Best Practices| |[267](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/267-uptime-requirement-increase/README.md)|Increase Validator Uptime Requirement from 80% to 90%|Martin Eckardt ([@martineckardt](https://github.com/martineckardt))|Best Practices| |[273](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/273-reduce-minimum-staking-duration/README.md)| Reduce Minimum Validator Staking Duration|Eric Lu ([ericlu-avax](https://github.com/ericlu-avax)), Martin Eckardt ([@martineckardt](https://github.com/martineckardt)), Meaghan FitzGerald ([@meaghanfitzgerald](https://github.com/meaghanfitzgerald)), Stephen Buttolph ([@StephenButtolph](https://github.com/StephenButtolph))|Standards| ## Contributing Before contributing to ACPs, please read the [ACP Terms of Contribution](https://github.com/avalanche-foundation/ACPs/blob/main/CONTRIBUTING.md). # Introduction (/docs/cross-chain) --- title: Introduction description: Learn about different interoperability protocols in the Avalanche ecosystem. 
---

# Snowman Consensus (/docs/primary-network/avalanche-consensus)

---
title: Snowman Consensus
description: Learn about the Snowman Consensus protocol.
---

Consensus is the task of getting a group of computers (a.k.a. nodes) to come to an agreement on a decision. In blockchain, this means that all the participants in a network have to agree on the changes made to the shared ledger. This agreement is reached through a specific process, a consensus protocol, that ensures that everyone sees the same information and that the information is accurate and trustworthy.

## Snowman Consensus

Snowman Consensus is a consensus protocol that is scalable, robust, and decentralized. It combines features of both classical and Nakamoto consensus mechanisms to achieve high throughput, fast finality, and energy efficiency. For the whitepaper, see [here](https://www.avalabs.org/whitepapers).

Key features include:

- Speed: Snowman Consensus provides sub-second, immutable finality, ensuring that transactions are quickly confirmed and irreversible.
- Scalability: Snowman Consensus enables high network throughput while ensuring low latency.
- Energy Efficiency: Unlike other popular consensus protocols, participation in Snowman Consensus is neither computationally intensive nor expensive.
- Adaptive Security: Snowman Consensus is designed to resist various attacks, including sybil attacks, distributed denial-of-service (DDoS) attacks, and collusion attacks. Its probabilistic nature ensures that the consensus outcome converges to the desired state, even when the network is under attack.

## Conceptual Overview

Consensus protocols in the Avalanche family operate through repeated sub-sampled voting. When a node is determining whether a [transaction](http://support.avalabs.org/en/articles/4587384-what-is-a-transaction) should be accepted, it asks a small, random subset of [validator nodes](http://support.avalabs.org/en/articles/4064704-what-is-a-blockchain-validator) for their preference.
Each queried validator replies with the transaction that it prefers, or thinks should be accepted. Consensus will never include a transaction that is determined to be **invalid**. For example, if you were to submit a transaction to send 100 AVAX to a friend, but your wallet only has 2 AVAX, this transaction is considered **invalid** and will not participate in consensus.

If a sufficient majority of the validators sampled reply with the same preferred transaction, this becomes the preferred choice of the validator that inquired. In future queries, this node will reply with the transaction preferred by the majority. The node repeats this sampling process until the validators queried reply with the same answer for a sufficient number of consecutive rounds.

- The number of validators required to be considered a "sufficient majority" is referred to as "α" (_alpha_).
- The number of consecutive rounds required to reach consensus, a.k.a. the "Confidence Threshold," is referred to as "β" (_beta_).
- Both α and β are configurable.

When a transaction has no conflicts, finalization happens very quickly. When conflicts exist, honest validators quickly cluster around one of the conflicting transactions, entering a positive feedback loop until all correct validators prefer it. This leads to the acceptance of non-conflicting transactions and the rejection of conflicting transactions.

![How Snowman Consensus Works](/images/avalanche-consensus1.png)

Snowman Consensus guarantees that if any honest validator accepts a transaction, all honest validators will come to the same conclusion. For a great visualization, check out [this demo](https://tedyin.com/archive/snow-bft-demo/#/snow).

## Deep Dive Into Snowman Consensus

### Intuition

First, let's develop some intuition about the protocol. Imagine a room full of people trying to agree on what to get for lunch. Suppose it's a binary choice between pizza and barbecue.
Some people might initially prefer pizza while others initially prefer barbecue. Ultimately, though, everyone's goal is to achieve **consensus**. Everyone asks a random subset of the people in the room what their lunch preference is. If more than half say pizza, the person thinks, "OK, looks like things are leaning toward pizza. I prefer pizza now." That is, they adopt the _preference_ of the majority. Similarly, if a majority say barbecue, the person adopts barbecue as their preference.

Everyone repeats this process. Each round, more and more people have the same preference. This is because the more people that prefer an option, the more likely someone is to receive a majority reply and adopt that option as their preference. After enough rounds, they reach consensus and decide on one option, which everyone prefers.

### Snowball

The intuition above outlines the Snowball algorithm, which is a building block of Snowman Consensus. Let's review the Snowball algorithm.

#### Parameters

- _n_: number of participants
- _k_ (sample size): between 1 and _n_
- α (quorum size): between 1 and _k_
- β (decision threshold): >= 1

#### Algorithm

```
preference := pizza
consecutiveSuccesses := 0
while not decided:
    ask k random people their preference
    if >= α give the same response:
        preference := that response
        if preference == old preference:
            consecutiveSuccesses++
        else:
            consecutiveSuccesses = 1
    else:
        consecutiveSuccesses = 0
    if consecutiveSuccesses >= β:
        decide(preference)
```

#### Algorithm Explained

Everyone has an initial preference for pizza or barbecue. Until someone has _decided_, they query _k_ people (the sample size) and ask them what they prefer. If α or more people give the same response, that response is adopted as the new preference. α is called the _quorum size_. If the new preference is the same as the old preference, the `consecutiveSuccesses` counter is incremented.
If the new preference is different than the old preference, the `consecutiveSuccesses` counter is set to `1`. If no response gets a quorum (an α majority of the same response), then the `consecutiveSuccesses` counter is set to `0`. Everyone repeats this until they get a quorum for the same response β times in a row. If one person decides pizza, then every other person following the protocol will eventually also decide on pizza.

Random changes in preference, caused by random sampling, create a network preference for one choice, which begets more network preference for that choice until it becomes irreversible, at which point the nodes can decide. In our example, there was a binary choice between pizza and barbecue, but Snowball can be adapted to achieve consensus on decisions with many possible choices.

The liveness and safety thresholds are parameterizable. As the quorum size, α, increases, the safety threshold increases and the liveness threshold decreases. This means the network can tolerate more byzantine (deliberately incorrect, malicious) nodes and remain safe, meaning all nodes will eventually agree on whether something is accepted or rejected. The liveness threshold is the number of malicious participants that can be tolerated before the protocol is unable to make progress.

These values, which are constants, are quite small on the Avalanche Network. The sample size, _k_, is `20`. So when a node asks a group of nodes their opinion, it only queries `20` nodes out of the whole network. The quorum size, α, is `14`. So if `14` or more nodes give the same response, that response is adopted as the querying node's preference. The decision threshold, β, is `20`. A node decides on a choice after receiving `20` consecutive quorum (α majority) responses.

Snowball is very scalable as the number of nodes on the network, _n_, increases.
Regardless of the number of participants in the network, the number of consensus messages sent remains the same, because in a given query a node only queries `20` nodes, even if there are thousands of nodes in the network.

Everything discussed to this point is how Avalanche is described in [the Avalanche white-paper](https://assets-global.website-files.com/5d80307810123f5ffbb34d6e/6009805681b416f34dcae012_Avalanche%20Consensus%20Whitepaper.pdf). The implementation of the Snowman Consensus protocol by Ava Labs (namely in AvalancheGo) has some optimizations for latency and throughput.

### Blocks

A block is a fundamental component that forms the structure of a blockchain. It serves as a container or data structure that holds a collection of transactions or other relevant information. Each block is cryptographically linked to the previous block, creating a chain of blocks, hence the term "blockchain."

In addition to storing a reference to its parent, a block contains a set of transactions. These transactions can represent various types of information, such as financial transactions, smart contract operations, or data storage requests. If a node receives a vote for a block, it also counts as a vote for all of the block's ancestors (its parent, the parent's parent, and so on).

### Finality

Snowman Consensus is probabilistically safe up to a safety threshold. That is, the probability that a correct node accepts a transaction that another correct node rejects can be made arbitrarily low by adjusting system parameters. In the Nakamoto consensus protocol (as used in Bitcoin and Ethereum, for example), a block may be included in the chain but then be removed and not end up in the canonical chain, which means waiting an hour for transaction settlement. In Avalanche, acceptance and rejection are **final and irreversible** and only take a few seconds.

### Optimizations

It's not safe for nodes to just ask, "Do you prefer this block?" when they query validators.
In Ava Labs' implementation, during a query a node asks, "Given that this block exists, which block do you prefer?" Instead of getting back a binary yes/no, the node receives the other node's preferred block.

Nodes don't only query upon hearing of a new block; they repeatedly query other nodes until no blocks are left processing. Nodes may not need to wait until they get all _k_ query responses before registering the outcome of a poll. If a block has already received α votes, then there's no need to wait for the rest of the responses.

### Validators

If it were free to become a validator on the Avalanche network, that would be problematic because a malicious actor could start many, many nodes which would get queried very frequently. The malicious actor could make these nodes act badly and cause a safety or liveness failure. The validators, the nodes which are queried as part of consensus, have influence over the network, and they have to pay for that influence with real-world value in order to prevent this kind of ballot stuffing. This idea of using real-world value to buy influence over the network is called Proof of Stake.

To become a validator, a node must **bond** (stake) something valuable (**AVAX**). The more AVAX a node bonds, the more often that node is queried by other nodes. When a node samples the network, it's not uniformly random. Rather, it's weighted by stake amount.

Nodes are incentivized to be validators because they get a reward if, while they validate, they're sufficiently correct and responsive. Avalanche doesn't have slashing. If a node doesn't behave well while validating, such as giving incorrect responses or perhaps not responding at all, its stake is still returned in whole, but with no reward. As long as a sufficient portion of the bonded AVAX is held by correct nodes, the network is safe, and is live for virtuous transactions.

### Big Ideas

Two big ideas in Avalanche are **subsampling** and **transitive voting**.
Subsampling has low message overhead. It doesn't matter if there are twenty validators or two thousand validators; the number of consensus messages a node sends during a query remains constant.

Transitive voting, where a vote for a block is a vote for all its ancestors, helps with transaction throughput. Each vote is actually many votes in one.

### Loose Ends

Transactions are created by users which call an API on an [AvalancheGo](https://github.com/ava-labs/avalanchego) full node or create them using a library such as [AvalancheJS](https://github.com/ava-labs/avalanchejs).

### Other Observations

Conflicting transactions are not guaranteed to be live. That's not really a problem: if you want your transaction to be live, don't issue a conflicting transaction.

Snowman is the name of Ava Labs' implementation of the Snowman Consensus protocol for linear chains.

If there are no undecided transactions, the Snowman Consensus protocol _quiesces_. That is, it does nothing if there is no work to be done. This makes Avalanche more sustainable than Proof-of-Work, where nodes need to constantly do work.

Avalanche has no leader. Any node can propose a transaction and any node that has staked AVAX can vote on every transaction, which makes the network more robust and decentralized.

## Why Do We Care?

Avalanche is a general consensus engine. It doesn't matter what type of application is put on top of it. The protocol allows the decoupling of the application layer from the consensus layer. If you're building a dapp on Avalanche, you just need to define a few things, like how conflicts are defined and what is in a transaction. You don't need to worry about how nodes come to an agreement. The consensus protocol is a black box: put something into it, and it comes back as accepted or rejected.

Avalanche can be used for all kinds of applications, not just P2P payment networks.
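As a small illustration of the transitive voting rule described above (a vote for a block counts for all of its ancestors), here is a hedged Go sketch; `Block` and `applyVote` are hypothetical names for this example, not AvalancheGo types:

```go
package main

import "fmt"

// Block is a minimal stand-in for a chain block with a parent pointer.
type Block struct {
	ID     string
	Parent *Block // nil for the genesis block
}

// applyVote credits one vote to blk and, transitively, to every ancestor.
func applyVote(votes map[string]int, blk *Block) {
	for b := blk; b != nil; b = b.Parent {
		votes[b.ID]++
	}
}

func main() {
	genesis := &Block{ID: "genesis"}
	a := &Block{ID: "a", Parent: genesis}
	b := &Block{ID: "b", Parent: a}

	votes := map[string]int{}
	applyVote(votes, b) // one vote for b is also a vote for a and genesis
	applyVote(votes, a)

	fmt.Println(votes["genesis"], votes["a"], votes["b"]) // 2 2 1
}
```

This is why each vote is "many votes in one": a single response about the tip of a chain simultaneously raises confidence in every block beneath it.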
Avalanche's Primary Network has an instance of the Ethereum Virtual Machine, which is backward compatible with existing Ethereum dapps and dev tooling. The Ethereum consensus protocol has been replaced with Snowman Consensus to enable lower block latency and higher throughput.

Avalanche is very performant. It can process thousands of transactions per second with ~1 second acceptance latency.

## Summary

Snowman Consensus is a radical breakthrough in distributed systems. It represents as large a leap forward as the classical and Nakamoto consensus protocols that came before it. Now that you have a better understanding of how it works, check out the rest of the documentation for building game-changing dapps and financial instruments on Avalanche.

# AVAX Token (/docs/primary-network/avax-token)

---
title: AVAX Token
description: Learn about the native token of Avalanche Primary Network.
---

AVAX is the native utility token of Avalanche. It's a hard-capped, scarce asset that is used to pay for fees, secure the platform through staking, and provide a basic unit of account between the multiple Avalanche L1s created on Avalanche. `1 nAVAX` is equal to `0.000000001 AVAX`. Use the [AVAX Unit Converter](/console/primary-network/unit-converter) to convert between different AVAX denominations.

## Utility

AVAX is a capped-supply (up to 720M) resource in the Avalanche ecosystem that's used to power the network. AVAX is used to secure the ecosystem through staking and for day-to-day operations like issuing transactions.

AVAX represents the weight that each node has in network decisions. No single actor owns the Avalanche Network, so each validator in the network is given a proportional weight in the network's decisions corresponding to the proportion of total stake that they own through proof of stake (PoS).

Any entity trying to execute a transaction on the Avalanche Primary Network pays a corresponding fee (commonly known as "gas") to run it on the network.
The fees used to execute transactions on Avalanche are burned, or permanently removed from circulating supply.

## Tokenomics

A fixed amount of 360M AVAX was minted at genesis, but a small amount of AVAX is constantly minted as a reward to validators. The protocol rewards validators for good behavior by minting them AVAX rewards at the end of their staking period. The minting process offsets the AVAX burned by transaction fees. While AVAX is still far away from its supply cap, it will almost always remain an inflationary asset.

Avalanche does not take away any portion of a validator's already staked tokens (commonly known as "slashing") for negligent or malicious staking periods. However, this behavior is disincentivized, because validators who attempt to harm the network expend their node's computing resources for no reward.

AVAX is minted according to the following formula, where $R_j$ is the total number of tokens at year $j$, with $R_1 = 360M$, and $R_l$ representing the last year that the values of $\gamma,\lambda \in \mathbb{R}$ were changed; $c_j$ is the yet un-minted supply of coins to reach $720M$ at year $j$ such that $c_j \leq 360M$; $u$ represents a staker, with $u.s_{amount}$ representing the total amount of stake that $u$ possesses, and $u.s_{time}$ the length of staking for $u$:

$$
R_j = R_l + \sum_{\forall u} \rho(u.s_{amount}, u.s_{time}) \times \frac{c_j}{L} \times \left( \sum_{i=0}^{j}\frac{1}{\left(\gamma + \frac{1}{1 + i^\lambda}\right)^i} \right)
$$

where,

$$
L = \left(\sum_{i=0}^{\infty} \frac{1}{\left(\gamma + \frac{1}{1 + i^\lambda} \right)^i} \right)
$$

At genesis, $c_1 = 360M$. The values of $\gamma$ and $\lambda$ are governable, and if changed, the function is recomputed with the new value of $c_*$. We have that $\sum_{*}\rho(*) \le 1$.
$\rho(*)$ is a linear function that can be computed as follows ($u.s_{time}$ is measured in weeks, and $u.s_{amount}$ is measured in AVAX tokens):

$$
\rho(u.s_{amount}, u.s_{time}) = (0.002 \times u.s_{time} + 0.896) \times \frac{u.s_{amount}}{R_j}
$$

If the entire supply of tokens at year $j$ is staked for the maximum staking duration (one year, or 52 weeks), then $\sum_{\forall u}\rho(u.s_{amount}, u.s_{time}) = 1$. If, instead, every token is staked continuously for the minimal staking duration of two weeks, then $\sum_{\forall u}\rho(u.s_{amount}, u.s_{time}) = 0.9$. Staking for the maximum duration therefore yields 11.11% more minted tokens than staking for the minimum duration, incentivizing stakers to stake for longer periods.

Due to the capped supply, the above function guarantees that AVAX will never exceed a total of $720M$ tokens, or $\lim_{j \to \infty} R(j) = 720M$.

# Coreth Architecture (/docs/primary-network/coreth-architecture)

---
title: Coreth Architecture
description: How the C-Chain EVM (Coreth) runs inside AvalancheGo, including consensus, execution, and cross-chain transfers.
---

Coreth is the EVM implementation that powers the C-Chain. It is shipped with AvalancheGo under [`graft/coreth`](https://github.com/ava-labs/avalanchego/tree/master/graft/coreth) and wrapped by Snowman++ ([`vms/proposervm`](https://github.com/ava-labs/avalanchego/tree/master/vms/proposervm)) for block production.

At a glance:

- The Snowman++ engine calls into Coreth's block builder and execution pipeline.
- Coreth executes EVM bytecode, maintains state (trie over Pebble/LevelDB), and exposes JSON-RPC/WS.
- Atomic import/export uses shared UTXO memory and writes to the node database.

## Consensus & Block Production

- Runs **Snowman++** via the ProposerVM wrapper; a stake-weighted proposer list gates each 5s window before falling back to open block building.
- Blocks are built by Coreth's block builder ([`graft/coreth/plugin/evm/block_builder.go`](https://github.com/ava-labs/avalanchego/blob/master/graft/coreth/plugin/evm/block_builder.go)), which applies EIP-1559 base fee rules and proposer-specific metadata.
- Chain ID: Mainnet `43114`, Fuji `43113`. JSON-RPC is exposed at `/ext/bc/C/rpc` with optional WebSocket at `/ext/bc/C/ws`.

## Execution Pipeline

- **Execution**: Standard go-ethereum VM with Avalanche-specific patches (fee handling, atomic tx support, bootstrapping/state sync) in [`graft/coreth`](https://github.com/ava-labs/avalanchego/tree/master/graft/coreth).
- **State**: Uses PebbleDB/LevelDB via AvalancheGo's database interface; state pruning and state-sync are configurable.
- **APIs**: Supports `eth`, `net`, `web3`, `debug` (optional), `txpool` (optional) namespaces. Enable/disable via chain config.

## Cross-Chain (Atomic) Transfers

- Coreth supports **atomic import/export** to the X-Chain and P-Chain using shared UTXO memory ([`graft/coreth/plugin/evm/atomic`](https://github.com/ava-labs/avalanchego/tree/master/graft/coreth/plugin/evm/atomic)).
- Exports lock AVAX into an atomic UTXO set; imports consume those UTXOs to credit balance on the destination chain.
- Wallet helpers and SDKs build these atomic txs against the C-Chain RPC; on-chain they show up as `ImportTx`/`ExportTx` wrapping atomic inputs/outputs.

## Configuration

Chain-specific config lives at:

```json title="~/.avalanchego/configs/chains/C/config.json"
{
  "eth-apis": ["eth", "net", "web3", "eth-filter"],
  "pruning-enabled": true,
  "state-sync-enabled": true
}
```

Key knobs:

- `eth-apis`: List of RPC namespaces to serve.
- `pruning-enabled`: Enable state trie pruning.
- `state-sync-enabled`: Allow state sync bootstrap instead of full replay.
- A fee recipient and other advanced options are also supported; see [`graft/coreth/plugin/evm/config.go`](https://github.com/ava-labs/avalanchego/blob/master/graft/coreth/plugin/evm/config.go).
## Developer Tips

- Use **chain configs** to toggle RPC namespaces instead of patching code.
- When running local devnets, use `--chain-config-content` to pass base64 configs inline.
- For cross-chain AVAX moves, call the P-Chain/X-Chain import/export endpoints; Coreth handles the atomic mempool internally.

# Exchange Integration (/docs/primary-network/exchange-integration)

---
title: Exchange Integration
description: Learn how to integrate your exchange with the EVM-Compatible Avalanche C-Chain.
---

## Overview

The objective of this document is to provide a brief overview of how to integrate with the EVM-Compatible Avalanche C-Chain. For teams that already support ETH, supporting the C-Chain is as straightforward as spinning up an Avalanche node (which has the [same API](https://ethereum.org/en/developers/docs/apis/json-rpc/) as [`go-ethereum`](https://geth.ethereum.org/docs/rpc/server)) and populating Avalanche's ChainID (43114) when constructing transactions.

Additionally, Ava Labs maintains an implementation of the [Rosetta API](https://docs.cdp.coinbase.com/mesh/docs/welcome) for the C-Chain called [avalanche-rosetta](https://github.com/ava-labs/avalanche-rosetta). You can learn more about this standardized integration path on the attached Rosetta API website.

## Integration Using EVM Endpoints

### Running an Avalanche Node

If you want to build your node from source or include it in a docker image, reference the [AvalancheGo GitHub repository](https://github.com/ava-labs/avalanchego). To quickly get up and running, you can use the [node installation script](/docs/nodes/run-a-node/using-install-script/installing-avalanche-go) that automates installing and updating an AvalancheGo node as a `systemd` service on Linux, using prebuilt binaries.

### Configuring an Avalanche Node

All configuration options and their default values are described [here](/docs/nodes/configure/configs-flags).
You can supply configuration options on the command line, or use a config file, which can be easier to work with when supplying many options. You can specify the config file location with `--config-file=config.json`, where `config.json` is a JSON file whose keys and values are option names and values.

Individual chains, including the C-Chain, have their own configuration options which are separate from the node-level options. These can also be specified in a config file. For more details, see [here](/docs/nodes/chain-configs/primary-network/c-chain). The C-Chain config file should be at `$HOME/.avalanchego/configs/chains/C/config.json`. You can also tell AvalancheGo to look somewhere else for the C-Chain config file with the option `--chain-config-dir`.

If you need Ethereum's [Archive Node](https://ethereum.org/en/developers/docs/nodes-and-clients/#archive-node) functionality, you need to disable C-Chain pruning, which has been enabled by default since AvalancheGo v1.4.10. To disable pruning, include `"pruning-enabled": false` in the C-Chain config file, as in the example below:

```json
{
  "snowman-api-enabled": false,
  "coreth-admin-api-enabled": false,
  "local-txs-enabled": true,
  "pruning-enabled": false,
  "eth-apis": [
    "internal-eth",
    "internal-blockchain",
    "internal-transaction",
    "internal-tx-pool",
    "internal-account",
    "internal-personal",
    "debug-tracer",
    "web3",
    "eth",
    "eth-filter",
    "admin",
    "net"
  ]
}
```

### Interacting with the C-Chain

Interacting with the C-Chain is identical to interacting with [`go-ethereum`](https://geth.ethereum.org/). You can find the reference material for the C-Chain API [here](/docs/rpcs/c-chain). Please note that the `personal_` namespace is turned off by default. To turn it on, enable it in your node's C-Chain config, as in the example above.
## Integration Using Rosetta

[Rosetta](https://docs.cdp.coinbase.com/mesh/docs/welcome) is an open-source specification and set of tools that makes integrating with different blockchain networks easier by presenting the same set of APIs for every network. The Rosetta API is made up of 2 core components, the [Data API](https://docs.cdp.coinbase.com/mesh/docs/api-data) and the [Construction API](https://docs.cdp.coinbase.com/mesh/docs/api-construction). Together, these APIs allow for anyone to read and write to blockchains in a standard format over a standard communication protocol. The specifications for these APIs can be found in the [rosetta-specifications](https://github.com/coinbase/rosetta-specifications) repository.

You can find the Rosetta server implementation for the Avalanche C-Chain [here](https://github.com/ava-labs/avalanche-rosetta); all you need to do is install and run the server with the proper configuration. It comes with a `Dockerfile` that packages both the server and the Avalanche client. Detailed instructions can be found in the linked repository.

## Constructing Transactions

Avalanche C-Chain transactions are identical to standard EVM transactions with 2 exceptions:

- They must be signed with Avalanche's ChainID (43114).
- They are priced with Avalanche's dynamic gas fee mechanism, detailed [here](/docs/rpcs/other/guides/txn-fees#c-chain-fees).

For development purposes, Avalanche supports all the popular tooling for Ethereum, so developers familiar with Ethereum and Solidity can feel right at home. Popular development environments include:

- [Remix IDE](https://remix.ethereum.org/)
- [thirdweb](https://thirdweb.com/)
- [Hardhat](https://hardhat.org/)

## Ingesting On-Chain Data

You can use any standard way of ingesting on-chain data that you use for the Ethereum network.

### Determining Finality

Avalanche consensus provides fast and irreversible finality within ~1 second.
To query the most up-to-date finalized block, query any value (that is, block, balance, state, etc.) with the `latest` parameter. If you query above the last finalized block (that is, `eth_blockNumber` returns 10 and you query 11), an error will be thrown indicating that unfinalized data cannot be queried (as of `avalanchego@v1.3.2`).

### (Optional) Custom Golang SDK

If you plan on extracting data from the C-Chain into your own systems using Golang, we recommend using our custom [`ethclient`](https://github.com/ava-labs/avalanchego/tree/master/graft/coreth/ethclient). The standard `go-ethereum` Ethereum client does not compute block hashes correctly (when you call `block.Hash()`) because it doesn't take into account the added [ExtDataHash](https://github.com/ava-labs/avalanchego/blob/master/graft/coreth/core/types/block.go#L98) header field in Avalanche C-Chain blocks, which is used to move AVAX between chains (X-Chain and P-Chain). You can read more about our multi-chain abstraction [here](/docs/primary-network) (out of scope for a normal C-Chain integration).

If you plan on reading JSON responses directly or using web3.js (which doesn't recompute the hash received over the wire) to extract on-chain transaction data/logs/receipts, you shouldn't have any issues!

## Support

If you have any problems or questions, reach out either directly to our developers, or on our public [Discord](https://chat.avalabs.org/) server.

# Primary Network (/docs/primary-network)

---
title: Primary Network
description: Learn about the Avalanche Primary Network and its three blockchains.
---

import { Network, Layers, Terminal, ArrowRight, Database, Package } from 'lucide-react';

Avalanche is a heterogeneous network of blockchains. As opposed to homogeneous networks, where all applications reside in the same chain, heterogeneous networks allow separate chains to be created for different applications.
![Primary Network Architecture](https://qizat5l3bwvomkny.public.blob.vercel-storage.com/builders-hub/course-images/multi-chain-architecture/multi-chain.png)

The Primary Network is a special [Avalanche L1](/docs/avalanche-l1s) that runs three blockchains:

- The Contract Chain [(C-Chain)](/docs/primary-network#c-chain-contract-chain)
- The Platform Chain [(P-Chain)](/docs/primary-network#p-chain-platform-chain)
- The Exchange Chain [(X-Chain)](/docs/primary-network#x-chain-exchange-chain)

Avalanche Mainnet comprises the Primary Network and all deployed Avalanche L1s. A node can become a validator for the Primary Network by staking at least **2,000 AVAX**.

### C-Chain (Contract Chain)

The **C-Chain** is an implementation of the Ethereum Virtual Machine (EVM). The [C-Chain's API](/docs/rpcs/c-chain) supports Geth's API and supports the deployment and execution of smart contracts written in Solidity.

The C-Chain is an instance of the [Coreth](https://github.com/ava-labs/avalanchego/tree/master/graft/coreth) Virtual Machine.

| Property | Mainnet | Fuji Testnet |
|----------|---------|--------------|
| **Network Name** | Avalanche C-Chain | Avalanche Fuji C-Chain |
| **Chain ID** | 43114 (0xA86A) | 43113 (0xA869) |
| **Currency** | AVAX | AVAX |
| **RPC URL** | https://api.avax.network/ext/bc/C/rpc | https://api.avax-test.network/ext/bc/C/rpc |
| **Explorer** | https://subnets.avax.network/c-chain | https://subnets-test.avax.network/c-chain |
| **Faucet** | - | [Get Test AVAX](/console/primary-network/faucet) |
| **Add to Wallet** | | |

### P-Chain (Platform Chain)

The **P-Chain** is responsible for all validator and Avalanche L1-level operations. The [P-Chain API](/docs/rpcs/p-chain) supports the creation of new blockchains and Avalanche L1s, the addition of validators to Avalanche L1s, staking operations, and other platform-level operations.
The P-Chain is an instance of the [Platform Virtual Machine](https://github.com/ava-labs/avalanchego/tree/master/vms/platformvm).

| Property | Mainnet | Fuji Testnet |
|----------|---------|--------------|
| **RPC URL** | https://api.avax.network/ext/bc/P | https://api.avax-test.network/ext/bc/P |
| **Currency** | AVAX | AVAX |
| **Explorer** | https://subnets.avax.network/p-chain | https://subnets-test.avax.network/p-chain |

### X-Chain (Exchange Chain)

The **X-Chain** is responsible for operations on digital smart assets known as **Avalanche Native Tokens**. A smart asset is a representation of a real-world resource (for example, equity, or a bond) with sets of rules that govern its behavior, like "can't be traded until tomorrow." The [X-Chain API](/docs/rpcs/x-chain) supports the creation and trade of Avalanche Native Tokens.

One asset traded on the X-Chain is AVAX. When you issue a transaction to a blockchain on Avalanche, you pay a fee denominated in AVAX.

The X-Chain is an instance of the Avalanche Virtual Machine (AVM).

| Property | Mainnet | Fuji Testnet |
|----------|---------|--------------|
| **RPC URL** | https://api.avax.network/ext/bc/X | https://api.avax-test.network/ext/bc/X |
| **Currency** | AVAX | AVAX |
| **Explorer** | https://subnets.avax.network/x-chain | https://subnets-test.avax.network/x-chain |

## Explore More

# PlatformVM Architecture (/docs/primary-network/platformvm-architecture)

---
title: PlatformVM Architecture
description: How the P-Chain manages validators, staking, and Avalanche L1 creation inside AvalancheGo.
---

PlatformVM (P-Chain) runs on Snowman++ and controls validators, staking rewards, subnet membership, and chain creation. Source lives in [`vms/platformvm`](https://github.com/ava-labs/avalanchego/tree/master/vms/platformvm) and its block/tx types in [`vms/platformvm/txs`](https://github.com/ava-labs/avalanchego/tree/master/vms/platformvm/txs).
At a glance: - Snowman++ engine drives PlatformVM block production; mempool feeds Standard/Proposal/Atomic blocks. - Validator registry, subnet membership, warp signing, and atomic UTXOs are persisted in the node database. - P-Chain APIs expose validator state, subnet/chain creation, staking ops, and block fetch. ## Responsibilities - **Validator registry & staking**: Tracks Primary Network validators and delegators, uptime, staking rewards, and validator fees. - **Subnet/L1 orchestration**: Creates Subnets and chains (`CreateSubnetTx`, `CreateChainTx`), maintains Subnet validator sets (including permissionless add/remove). - **Warp messaging**: Signs warp messages for cross-chain communication on Avalanche L1s. - **Atomic transfers**: Handles import/export of AVAX to/from other chains via shared memory. ## Consensus & Blocks - Uses **Snowman++** via the ProposerVM (single proposer windows with fallback). - Blocks are built by [`vms/platformvm/block/builder`](https://github.com/ava-labs/avalanchego/tree/master/vms/platformvm/block/builder); block types include **Standard**, **Proposal** (with **Commit/Abort** options), and **Atomic** blocks. - State sync is supported for faster bootstrap; bootstrapping peers can be overridden via `CustomBeacons` in the P-Chain `ChainParameters`. 
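To see these block types on a running network, you can fetch P-Chain blocks by height over JSON-RPC. A minimal sketch follows; the request itself is commented out because it needs network access:

```shell
# JSON-RPC payload for platform.getBlockByHeight with JSON encoding, which
# lets you inspect whether a given block is Standard, Proposal, or Atomic.
payload='{"jsonrpc":"2.0","id":1,"method":"platform.getBlockByHeight","params":{"height":100,"encoding":"json"}}'

# Illustrative only (requires network access):
# curl -s -X POST -H 'Content-Type: application/json' \
#   -d "$payload" https://api.avax.network/ext/bc/P

echo "$payload"
```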
## Key Transaction Types | Transaction | Purpose | |-------------|---------| | `AddValidatorTx`, `AddDelegatorTx` | Join the Primary Network validator set / delegate stake | | `AddSubnetValidatorTx` | Add a validator to a Subnet (validator must also be on Primary) | | `AddPermissionlessValidatorTx` / `AddPermissionlessDelegatorTx` | Permissionless validation on Subnets that allow it | | `CreateSubnetTx` | Create a new Subnet and owner controls | | `CreateChainTx` | Launch a new blockchain (VM + genesis) on a Subnet | | `ImportTx` / `ExportTx` | Move AVAX to/from other chains via atomic UTXOs | | `RewardValidatorTx` | Mint rewards after successful staking periods | | `TransformSubnetTx` | Legacy subnet transform (disabled post-Etna) | ## P-Chain APIs - Exposed at `/ext/bc/P` with namespaces such as `platform.getBlock`, `platform.getCurrentValidators`, `platform.issueTx`, `platform.getSubnets`, `platform.getBlockchains`. - Health and metrics are surfaced via the node-level `/ext/health` and `/ext/metrics`. ## Configuration Default chain config location: ```json title="~/.avalanchego/configs/chains/P/config.json" { "state-sync-enabled": true, "pruning-enabled": true } ``` - Subnet and chain aliases can be set in `~/.avalanchego/configs/chains/aliases.json`. - Upgrade rules and Subnet parameters are read from the chain config and network upgrade settings (`upgrade/`). ## Developer Tips - When testing new Subnets/VMs, pass `CreateChainTx` genesis bytes and VM IDs via `platform.issueTx`. - For permissionless Subnets, ensure the Subnet’s config enables the relevant validator/delegator transactions before issuing them. - Use `platform.getBlock` to inspect Proposal/Commit/Abort flow if debugging staking or subnet updates. 
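The chain config shown in the Configuration section can be put in place from the shell. A sketch, assuming the default data directory (`~/.avalanchego`):

```shell
# Write the P-Chain chain config into the default location.
cfg_dir="$HOME/.avalanchego/configs/chains/P"
mkdir -p "$cfg_dir"
cat > "$cfg_dir/config.json" <<'EOF'
{
  "state-sync-enabled": true,
  "pruning-enabled": true
}
EOF

# Restart the node afterwards for the config to take effect.
cat "$cfg_dir/config.json"
```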
# Streaming Asynchronous Execution (/docs/primary-network/streaming-async-execution) --- title: Streaming Asynchronous Execution description: ACP-194 decouples consensus from execution, enabling parallel processing and dramatically improving C-Chain throughput. full: true --- # Virtual Machines (/docs/primary-network/virtual-machines) --- title: Virtual Machines description: Learn about blockchain VMs and how you can build a custom VM-enabled blockchain in Avalanche. --- A **Virtual Machine** (VM) is the blueprint for a blockchain, meaning it defines a blockchain's complete application logic by specifying the blockchain's state, state transitions, transaction rules, and API interface. Developers can use the same VM to create multiple blockchains, each of which follows identical rules but is independent of all others. All Avalanche validators of the **Avalanche Primary Network** are required to run three VMs: - **Coreth**: Defines the Contract Chain (C-Chain); supports smart contract functionality and is EVM-compatible. - **Platform VM**: Defines the Platform Chain (P-Chain); supports operations on staking and Avalanche L1s. - **Avalanche VM**: Defines the Exchange Chain (X-Chain); supports operations on Avalanche Native Tokens. All three can easily be run on any computer with [AvalancheGo](/docs/nodes). ## Custom VMs on Avalanche Developers with advanced use-cases for utilizing distributed ledger technology are often forced to build everything from scratch - networking, consensus, and core infrastructure - before even starting on the actual application. Avalanche eliminates this complexity by: - Providing VMs as simple blueprints for defining blockchain behavior - Supporting development in any programming language with familiar tools - Handling all low-level infrastructure automatically This lets developers focus purely on building their dApps, ecosystems, and communities, rather than wrestling with blockchain fundamentals. 
### How Custom VMs Work Customized VMs can communicate with Avalanche over a language agnostic request-response protocol known as [RPC](https://en.wikipedia.org/wiki/Remote_procedure_call). This allows the VM framework to open a world of endless possibilities, as developers can implement their dApps using the languages, frameworks, and libraries of their choice. Validators can install additional VMs on their node to validate additional [Avalanche L1s](/docs/avalanche-l1s) in the Avalanche ecosystem. In exchange, validators receive staking rewards in the form of a reward token determined by the Avalanche L1s. ## Building a Custom VM You can start building your first custom virtual machine in two ways: 1. Use the ready-to-deploy Subnet-EVM for Solidity-based development 2. Create a custom VM in Golang, Rust, or your preferred language The choice depends on your needs. Subnet-EVM provides a quick start with Ethereum compatibility, while custom VMs offer maximum flexibility. ### Golang Examples See here for a tutorial on [How to Build a Simple Golang VM](/docs/avalanche-l1s/golang-vms/simple-golang-vm). ### Rust Examples See here for a tutorial on [How to Build a Simple Rust VM](/docs/avalanche-l1s/rust-vms/setting-up-environment). # Introduction (/docs/nodes) --- title: Introduction description: A brief introduction to the concepts of nodes and validators within the Avalanche ecosystem. --- AvalancheGo nodes relay transactions/blocks, expose APIs, and (when staked) participate in consensus on the Primary Network and any Avalanche L1s they validate. 
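One of the APIs a node exposes is the node-level health endpoint. A hedged sketch of checking it, assuming a node running locally on the default HTTP port 9650 (the call is commented out because it needs a live node):

```shell
# JSON-RPC payload for the health.health method served at /ext/health.
payload='{"jsonrpc":"2.0","id":1,"method":"health.health","params":{}}'

# Illustrative only (requires a running local node):
# curl -s -X POST -H 'Content-Type: application/json' \
#   -d "$payload" http://127.0.0.1:9650/ext/health

echo "$payload"
```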
## Node Roles | Role | Purpose | Consensus Participation | |------|---------|-------------------------| | **Validator** | Stakes on the P-Chain, validates the Primary Network and any Subnets/L1s it joins | Yes (polled for Snowman/Snowman++) | | **Non-validating** | Tracks chains, serves APIs, used for infra and indexing | No (not polled) | All nodes: connect via P2P with staking certs, track P/C/X, bootstrap or state-sync chains, and serve APIs if enabled. ## Data Retention Modes | Mode | Description | When to use | |------|-------------|------------| | **Archive** | Keep full history | Auditing, full re-exec | | **Pruned** | Drop old data after sync | Save disk on long-running nodes | | **State sync** | Sync from state summaries instead of full replay | Fast catch-up for new nodes | Choose per-chain via chain configs. ## Validator Requirements | Network | Requirements | |---------|--------------| | **Primary Network** | Stake **2,000 AVAX** on the P-Chain; validation period **14–365 days**; meet uptime to earn rewards; must validate P-Chain, C-Chain, X-Chain | | **Avalanche L1s** | Validators pay **1.33 AVAX/month** (burned) to the P-Chain for validation slots; each L1 sets its own validation/staking rules beyond that. | An Avalanche L1 consists of one or more blockchains validated by the same validator set (historically called a Subnet). When you join that validator set, you validate every blockchain it hosts. ### Validator Responsibilities - **Validate & build blocks**: Participate in Snowman++ consensus (all Primary Network chains and most L1s). - **Maintain APIs**: Serve RPCs for wallets/apps if enabled. - **Stay healthy**: Meet uptime and networking requirements to remain in good standing and earn rewards. # AvalancheGo Releases (/docs/nodes/releases) --- title: AvalancheGo Releases description: Track AvalancheGo releases, network upgrades, and version compatibility for your node. --- This page is automatically generated from the [AvalancheGo GitHub releases](https://github.com/ava-labs/avalanchego/releases).
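To check whether your node is behind the recommended version, you can compare release tags locally. A small sketch, assuming GNU `sort -V` is available (the GitHub API call is commented out because it needs network access):

```shell
# Returns success (exit 0) if tag $1 sorts strictly before tag $2.
ver_lt() {
  [ "$1" != "$2" ] && [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

current="v1.13.5"
latest="v1.14.1"
if ver_lt "$current" "$latest"; then
  echo "upgrade available: $current -> $latest"
fi

# Fetch the actual latest tag (requires network access):
# curl -s https://api.github.com/repos/ava-labs/avalanchego/releases/latest | grep '"tag_name"'
```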
## Current Recommended Version | Network | Version | Release Name | Type | |---------|---------|--------------|------| | **Mainnet** | v1.14.1 | Granite.1 - Grafting EVM Repos | Stable | | **Fuji Testnet** | v1.14.1 | Granite.1 - Grafting EVM Repos | Stable | **Always run the latest stable release** unless you're specifically testing pre-release versions on Fuji. ## Using the Installer Script The [AvalancheGo installer script](/docs/nodes/run-a-node/using-install-script/installing-avalanche-go) supports installing specific versions. ### List Available Versions ```bash ./avalanchego-installer.sh --list ``` ### Install a Specific Version ```bash ./avalanchego-installer.sh --version v1.14.1 ``` ### Upgrade Existing Installation Simply run the installer script again to upgrade to the latest version: ```bash ./avalanchego-installer.sh ``` ## Recent Releases ### v1.14.1 - Granite.1 - Grafting EVM Repos **Released:** January 6, 2026 | [View on GitHub](https://github.com/ava-labs/avalanchego/releases/tag/v1.14.1) This version is backwards compatible to [v1.14.0](https://github.com/ava-labs/avalanchego/releases/tag/v1.14.0). It is optional, but encouraged. The plugin version is unchanged at `44` and is compatible with version `v1.14.0`. 
| Platform | File | Size | |----------|------|------| | Linux (AMD64) | [avalanchego-linux-amd64-v1.14.1.tar.gz](https://github.com/ava-labs/avalanchego/releases/download/v1.14.1/avalanchego-linux-amd64-v1.14.1.tar.gz) | 42.0 MB | | Linux (ARM64) | [avalanchego-linux-arm64-v1.14.1.tar.gz](https://github.com/ava-labs/avalanchego/releases/download/v1.14.1/avalanchego-linux-arm64-v1.14.1.tar.gz) | 39.5 MB | | macOS | [avalanchego-macos-v1.14.1.zip](https://github.com/ava-labs/avalanchego/releases/download/v1.14.1/avalanchego-macos-v1.14.1.zip) | 40.6 MB | ### v1.14.0 - Granite - Improving ICM and Dynamic Block Times **Released:** November 5, 2025 | [View on GitHub](https://github.com/ava-labs/avalanchego/releases/tag/v1.14.0) This release schedules the activation of the following Avalanche Community Proposals (ACPs): - [ACP-181](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/181-p-chain-epoched-views/README.md) P-Chain Epoched Views - [ACP-204](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/20... 
**ACPs Included:** ACP-181, ACP-204, ACP-226, ACP-176 | Platform | File | Size | |----------|------|------| | Linux (AMD64) | [avalanchego-linux-amd64-v1.14.0.tar.gz](https://github.com/ava-labs/avalanchego/releases/download/v1.14.0/avalanchego-linux-amd64-v1.14.0.tar.gz) | 41.5 MB | | Linux (ARM64) | [avalanchego-linux-arm64-v1.14.0.tar.gz](https://github.com/ava-labs/avalanchego/releases/download/v1.14.0/avalanchego-linux-arm64-v1.14.0.tar.gz) | 39.0 MB | | macOS | [avalanchego-macos-v1.14.0.zip](https://github.com/ava-labs/avalanchego/releases/download/v1.14.0/avalanchego-macos-v1.14.0.zip) | 41.4 MB | ### v1.14.0-fuji - Granite - Improving ICM and Dynamic Block Times - Fuji Pre-Release (Pre-release) **Released:** October 21, 2025 | [View on GitHub](https://github.com/ava-labs/avalanchego/releases/tag/v1.14.0-fuji) **ACPs Included:** ACP-181, ACP-204, ACP-226, ACP-176 | Platform | File | Size | |----------|------|------| | Linux (AMD64) | [avalanchego-linux-amd64-v1.14.0-fuji.tar.gz](https://github.com/ava-labs/avalanchego/releases/download/v1.14.0-fuji/avalanchego-linux-amd64-v1.14.0-fuji.tar.gz) | 41.6 MB | | Linux (ARM64) | [avalanchego-linux-arm64-v1.14.0-fuji.tar.gz](https://github.com/ava-labs/avalanchego/releases/download/v1.14.0-fuji/avalanchego-linux-arm64-v1.14.0-fuji.tar.gz) | 39.1 MB | | macOS | [avalanchego-macos-v1.14.0-fuji.zip](https://github.com/ava-labs/avalanchego/releases/download/v1.14.0-fuji/avalanchego-macos-v1.14.0-fuji.zip) | 41.4 MB | ### v1.13.5 - Fortuna.5 - Mempool Rate Limiting Improvements **Released:** August 29, 2025 | [View on GitHub](https://github.com/ava-labs/avalanchego/releases/tag/v1.13.5) This version is backwards compatible to [v1.13.0](https://github.com/ava-labs/avalanchego/releases/tag/v1.13.0). It is optional, but encouraged. The plugin version is unchanged at `43` and is compatible with version `v1.13.4`. 
| Platform | File | Size | |----------|------|------| | Linux (AMD64) | [avalanchego-linux-amd64-v1.13.5.tar.gz](https://github.com/ava-labs/avalanchego/releases/download/v1.13.5/avalanchego-linux-amd64-v1.13.5.tar.gz) | 40.2 MB | | Linux (ARM64) | [avalanchego-linux-arm64-v1.13.5.tar.gz](https://github.com/ava-labs/avalanchego/releases/download/v1.13.5/avalanchego-linux-arm64-v1.13.5.tar.gz) | 37.9 MB | | macOS | [avalanchego-macos-v1.13.5.zip](https://github.com/ava-labs/avalanchego/releases/download/v1.13.5/avalanchego-macos-v1.13.5.zip) | 40.0 MB | ### v1.13.4 - Fortuna.4 - Cubist Signer Integration **Released:** August 1, 2025 | [View on GitHub](https://github.com/ava-labs/avalanchego/releases/tag/v1.13.4) This version is backwards compatible to [v1.13.0](https://github.com/ava-labs/avalanchego/releases/tag/v1.13.0). It is optional, but encouraged. The plugin version is updated to `43` all plugins must update to be compatible. | Platform | File | Size | |----------|------|------| | Linux (AMD64) | [avalanchego-linux-amd64-v1.13.4.tar.gz](https://github.com/ava-labs/avalanchego/releases/download/v1.13.4/avalanchego-linux-amd64-v1.13.4.tar.gz) | 40.3 MB | | Linux (ARM64) | [avalanchego-linux-arm64-v1.13.4.tar.gz](https://github.com/ava-labs/avalanchego/releases/download/v1.13.4/avalanchego-linux-arm64-v1.13.4.tar.gz) | 38.0 MB | | macOS | [avalanchego-macos-v1.13.4.zip](https://github.com/ava-labs/avalanchego/releases/download/v1.13.4/avalanchego-macos-v1.13.4.zip) | 40.1 MB | ### v1.13.3 - Fortuna.3 - ToEngine Channel Removal **Released:** July 22, 2025 | [View on GitHub](https://github.com/ava-labs/avalanchego/releases/tag/v1.13.3) This version is backwards compatible to [v1.13.0](https://github.com/ava-labs/avalanchego/releases/tag/v1.13.0). It is optional, but encouraged. The plugin version is updated to `42` all plugins must update to be compatible. **This release removes the support for running on Windows. Any users ... 
| Platform | File | Size | |----------|------|------| | Linux (AMD64) | [avalanchego-linux-amd64-v1.13.3.tar.gz](https://github.com/ava-labs/avalanchego/releases/download/v1.13.3/avalanchego-linux-amd64-v1.13.3.tar.gz) | 39.2 MB | | Linux (ARM64) | [avalanchego-linux-arm64-v1.13.3.tar.gz](https://github.com/ava-labs/avalanchego/releases/download/v1.13.3/avalanchego-linux-arm64-v1.13.3.tar.gz) | 36.9 MB | | macOS | [avalanchego-macos-v1.13.3.zip](https://github.com/ava-labs/avalanchego/releases/download/v1.13.3/avalanchego-macos-v1.13.3.zip) | 38.9 MB | ### v1.13.2 - Fortuna.2 - VM HTTP2 Support **Released:** June 24, 2025 | [View on GitHub](https://github.com/ava-labs/avalanchego/releases/tag/v1.13.2) This version is backwards compatible to [v1.13.0](https://github.com/ava-labs/avalanchego/releases/tag/v1.13.0). It is optional, but encouraged. The plugin version is updated to `41` all plugins must update to be compatible. | Platform | File | Size | |----------|------|------| | Linux (AMD64) | [avalanchego-linux-amd64-v1.13.2.tar.gz](https://github.com/ava-labs/avalanchego/releases/download/v1.13.2/avalanchego-linux-amd64-v1.13.2.tar.gz) | 36.2 MB | | Linux (ARM64) | [avalanchego-linux-arm64-v1.13.2.tar.gz](https://github.com/ava-labs/avalanchego/releases/download/v1.13.2/avalanchego-linux-arm64-v1.13.2.tar.gz) | 34.0 MB | | macOS | [avalanchego-macos-v1.13.2.zip](https://github.com/ava-labs/avalanchego/releases/download/v1.13.2/avalanchego-macos-v1.13.2.zip) | 36.7 MB | ### v1.13.1 - Fortuna.1 - LibEVM Migration **Released:** June 4, 2025 | [View on GitHub](https://github.com/ava-labs/avalanchego/releases/tag/v1.13.1) This version is backwards compatible to [v1.13.0](https://github.com/ava-labs/avalanchego/releases/tag/v1.13.0). It is optional, but encouraged. The plugin version is updated to `40` all plugins must update to be compatible. 
**ACPs Included:** ACP-77 | Platform | File | Size | |----------|------|------| | Linux (AMD64) | [avalanchego-linux-amd64-v1.13.1.tar.gz](https://github.com/ava-labs/avalanchego/releases/download/v1.13.1/avalanchego-linux-amd64-v1.13.1.tar.gz) | 36.1 MB | | Linux (ARM64) | [avalanchego-linux-arm64-v1.13.1.tar.gz](https://github.com/ava-labs/avalanchego/releases/download/v1.13.1/avalanchego-linux-arm64-v1.13.1.tar.gz) | 33.9 MB | | macOS | [avalanchego-macos-v1.13.1.zip](https://github.com/ava-labs/avalanchego/releases/download/v1.13.1/avalanchego-macos-v1.13.1.zip) | 36.7 MB | ### v1.13.0 - Fortuna - C-Chain Fee Overhaul **Released:** March 24, 2025 | [View on GitHub](https://github.com/ava-labs/avalanchego/releases/tag/v1.13.0) This upgrade consists of the following Avalanche Community Proposal (ACP): - [ACP-176](https://github.com/avalanche-foundation/ACPs/blob/main/ACPs/176-dynamic-evm-gas-limit-and-price-discovery-updates/README.md) Dynamic EVM Gas Limits and Price Discovery Updates The ACP in this upgrade goes into... 
**ACPs Included:** ACP-176, ACP-118 | Platform | File | Size | |----------|------|------| | Linux (AMD64) | [avalanchego-linux-amd64-v1.13.0.tar.gz](https://github.com/ava-labs/avalanchego/releases/download/v1.13.0/avalanchego-linux-amd64-v1.13.0.tar.gz) | 35.5 MB | | Linux (ARM64) | [avalanchego-linux-arm64-v1.13.0.tar.gz](https://github.com/ava-labs/avalanchego/releases/download/v1.13.0/avalanchego-linux-arm64-v1.13.0.tar.gz) | 33.4 MB | | macOS | [avalanchego-macos-v1.13.0.zip](https://github.com/ava-labs/avalanchego/releases/download/v1.13.0/avalanchego-macos-v1.13.0.zip) | 36.0 MB | ### v1.13.0-fuji - Fortuna - C-Chain Fee Overhaul - Fuji Pre-Release (Pre-release) **Released:** March 6, 2025 | [View on GitHub](https://github.com/ava-labs/avalanchego/releases/tag/v1.13.0-fuji) **ACPs Included:** ACP-176 | Platform | File | Size | |----------|------|------| | Linux (AMD64) | [avalanchego-linux-amd64-v1.13.0-fuji.tar.gz](https://github.com/ava-labs/avalanchego/releases/download/v1.13.0-fuji/avalanchego-linux-amd64-v1.13.0-fuji.tar.gz) | 35.5 MB | | Linux (ARM64) | [avalanchego-linux-arm64-v1.13.0-fuji.tar.gz](https://github.com/ava-labs/avalanchego/releases/download/v1.13.0-fuji/avalanchego-linux-arm64-v1.13.0-fuji.tar.gz) | 33.3 MB | | macOS | [avalanchego-macos-v1.13.0-fuji.zip](https://github.com/ava-labs/avalanchego/releases/download/v1.13.0-fuji/avalanchego-macos-v1.13.0-fuji.zip) | 36.0 MB | ## All Releases For a complete list of all AvalancheGo releases with full changelogs, visit the official [GitHub Releases page](https://github.com/ava-labs/avalanchego/releases). ## Staying Updated ### Avalanche Notify Service Subscribe to the [Avalanche Notify service](/docs/nodes/maintain/enroll-in-avalanche-notify) to receive email notifications about new releases. ### GitHub Notifications 1. Go to the [AvalancheGo repository](https://github.com/ava-labs/avalanchego) 2. 
Click **Watch** → **Custom** → Check **Releases** → **Apply** ## Related Resources - [Upgrade Your Node](/docs/nodes/maintain/upgrade) - Step-by-step upgrade instructions - [Installing AvalancheGo](/docs/nodes/run-a-node/using-install-script/installing-avalanche-go) - Initial installation guide - [Backup and Restore](/docs/nodes/maintain/backup-restore) - Backup your node before upgrading # System Requirements (/docs/nodes/system-requirements) --- title: System Requirements description: Hardware, storage, and networking requirements for running Avalanche nodes on the Primary Network and Avalanche L1s. --- ## Primary Network Validators Running a Primary Network validator requires careful consideration of your stake weight. Validators with higher stake receive more traffic and must process more data, requiring better hardware. ### Storage Requirements You **must** use a local NVMe SSD attached directly to your hardware with **minimum 3000 IOPS**. Cloud block storage (AWS EBS, GCP Persistent Disk, Azure Managed Disks) introduces latency that causes poor performance, missed blocks, and potential benching. If running in the cloud, use instance types with local NVMe storage (e.g., AWS i3/i4i instances, GCP N2 with local SSD). New validators should use **state sync** to bootstrap. While full sync from genesis is still possible, state sync is significantly faster—downloading only the active state (~500 GB) rather than replaying all historical blocks. | Storage Type | Initial Size | Description | |--------------|--------------|-------------| | Active State | ~500 GB | Current state required to validate. Downloaded via state sync. | | Full Archive | ~12.5 TB | Complete historical state. Only needed for archive nodes or block explorers. | Even with state sync, your node's storage usage will grow over time as new blocks are added and old state accumulates. A node starting at 500 GB can grow to 1 TB+ over months of operation. 
Plan for this growth when provisioning storage, or schedule periodic maintenance using [state management strategies](/docs/nodes/maintain/chain-state-management). ### Hardware Requirements Resource requirements scale with your stake weight. Higher stake means more validator duties and network traffic. | Component | Low Stake Validators | High Stake Validators | |-----------|---------------------|----------------------| | **Use Case** | Validators with modest stake delegations who want reliable operation without over-provisioning | Validators with significant stake who handle proportionally more network traffic and validation duties | | **CPU** | 4 cores / 8 threads (e.g., AMD Ryzen 5, Intel i5) | 8+ cores / 16 threads (e.g., AMD Ryzen 7/9, Intel i7/i9) | | **RAM** | 16 GB | 32 GB | | **Storage** | 1 TB NVMe SSD (local, not network-attached) | 2 TB NVMe SSD (local, not network-attached) | | **Network** | 100 Mbps symmetric, stable connection | 1 Gbps symmetric, low-latency connection | | **OS** | Ubuntu 22.04 LTS or macOS ≥ 12 | Ubuntu 22.04 LTS or macOS ≥ 12 | If you're unsure which tier applies to you: start with low-stake specs and monitor performance. If you see high CPU usage, memory pressure, or network saturation, upgrade accordingly. --- ## Avalanche L1 Validators L1 validators run your own blockchain with custom parameters. Hardware requirements depend on your chain's transaction throughput and state size. 
| Component | Low Throughput | Medium Throughput | High Throughput | |-----------|----------------|-------------------|-----------------| | **Use Case** | Testnets, development chains, or production L1s with minimal traffic (< 10 TPS) | Production L1s with moderate activity (10–100 TPS), gaming chains, or DeFi applications | High-performance L1s with heavy transaction volume (100+ TPS), large state, or complex smart contracts | | **CPU** | 2 cores | 4 cores | 8+ cores | | **RAM** | 4 GB | 8 GB | 16 GB+ | | **Storage** | 100 GB (SSD optional) | 500 GB SSD | 1 TB+ NVMe SSD | | **Network** | 25 Mbps | 100 Mbps | 1 Gbps | | **OS** | Ubuntu 22.04 LTS or macOS ≥ 12 | Ubuntu 22.04 LTS or macOS ≥ 12 | Ubuntu 22.04 LTS or macOS ≥ 12 | L1 validators sync the P-Chain to track validator sets and cross-chain messages. This adds minimal overhead to the requirements above. --- ## Networking AvalancheGo requires inbound connections on port `9651`. Before installation, ensure your networking environment is properly configured. ### IPv4 and IPv6 Support AvalancheGo supports both IPv4 and IPv6: - **IPv4**: Fully supported and most common - **IPv6**: Fully supported - your node can operate exclusively on IPv6 or dual-stack - **Dual-stack**: You can run both IPv4 and IPv6 simultaneously If using IPv6, ensure your firewall and network configuration properly allow inbound IPv6 connections on port `9651`. ### Cloud Providers Cloud instances usually keep a stable public IP while running; attach a static (elastic) IP if your provider reassigns addresses on restart. Ensure your security group or firewall allows: - **Inbound**: TCP port 9651 (IPv4 and/or IPv6) - **Outbound**: All traffic ### Home Connections Residential connections typically have dynamic IPs. You'll need to: 1. Configure port forwarding for port `9651` on your router 2. Consider a dynamic DNS service if your IP changes frequently A fully connected Avalanche node maintains thousands of live TCP connections. Under-powered home routers may struggle with this load, causing lag on other devices or node synchronization issues.
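A quick way to confirm the staking port is actually reachable is to probe it with `nc` (netcat). The probes below are commented out because they require a live node and its public address:

```shell
PORT=9651

# From a different machine, replace NODE_IP with your node's public IP:
# nc -z -w 5 NODE_IP "$PORT" && echo "reachable" || echo "blocked"

# On the node itself, confirm AvalancheGo is listening:
# ss -ltn | grep ":$PORT"

echo "staking port: $PORT"
```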
--- ## Monitoring Thresholds Set up monitoring and alerts to catch resource issues before they impact your validator: | Resource | Warning Threshold | Critical Threshold | Action Required | |----------|------------------|-------------------|-----------------| | **Disk Usage** | 80% | 90% | Run [offline pruning](/docs/nodes/maintain/reduce-disk-usage) or [state sync](/docs/nodes/maintain/chain-state-management) | | **CPU Usage** | 70% sustained | 90% sustained | Upgrade to higher-tier instance or optimize workload | | **Memory Usage** | 80% | 90% | Upgrade RAM or investigate memory leaks | | **Network Bandwidth** | 80% of capacity | 95% of capacity | Upgrade network tier or reduce other network traffic | | **Disk IOPS** | 80% of available | 95% of available | Upgrade to higher IOPS storage | **Disk usage** is the most common issue for validators. Consider setting up automated alerts at 80% to give yourself time to plan maintenance before your node runs out of space. --- ## Next Steps - Learn about [Active State vs Archive State](/docs/nodes/maintain/chain-state-management) to understand storage requirements - Set up [node monitoring](/docs/nodes/maintain/monitoring) to track resource usage and configure alerts # RPC APIs (/docs/rpcs) --- title: RPC APIs description: AvalancheGo RPC API References for interacting with Avalanche nodes --- This section contains comprehensive documentation for all RPC (Remote Procedure Call) APIs available in the Avalanche ecosystem. ## Chain-Specific APIs ### C-Chain (Contract Chain) The C-Chain is an instance of the Ethereum Virtual Machine (EVM). Documentation for C-Chain RPC methods and transaction formats. ### P-Chain (Platform Chain) The P-Chain manages validators, staking, and subnets. Documentation for P-Chain RPC methods and transaction formats. ### X-Chain (Exchange Chain) The X-Chain is responsible for asset creation and trading. Documentation for X-Chain RPC methods and transaction formats.
### Subnet-EVM The Subnet-EVM is an instance of the EVM for Subnet / Layer 1 chains. Documentation for Subnet-EVM RPC methods and transaction formats. ## Other APIs Additional RPC APIs for node administration, health monitoring, indexing, metrics, and more. # CLI Commands (/docs/tooling/cli-commands) --- title: "CLI Commands" description: "Complete list of Avalanche CLI commands and their usage." edit_url: https://github.com/ava-labs/avalanche-cli/edit/main/cmd/commands.md --- ## avalanche blockchain The blockchain command suite provides a collection of tools for developing and deploying Blockchains. To get started, use the blockchain create command wizard to walk through the configuration of your very first Blockchain. Then, go ahead and deploy it with the blockchain deploy command. You can use the rest of the commands to manage your Blockchain configurations and live deployments. **Usage:** ```bash avalanche blockchain [subcommand] [flags] ``` **Subcommands:** - [`addValidator`](#avalanche-blockchain-addvalidator): The blockchain addValidator command adds a node as a validator to an L1 of the user provided deployed network. If the network is proof of authority, the owner of the validator manager contract must sign the transaction. If the network is proof of stake, the node must stake the L1's staking token. Both processes will issue a RegisterL1ValidatorTx on the P-Chain. This command currently only works on Blockchains deployed to either the Fuji Testnet or Mainnet. - [`changeOwner`](#avalanche-blockchain-changeowner): The blockchain changeOwner changes the owner of the deployed Blockchain. - [`changeWeight`](#avalanche-blockchain-changeweight): The blockchain changeWeight command changes the weight of a L1 Validator. The L1 has to be a Proof of Authority L1. - [`configure`](#avalanche-blockchain-configure): AvalancheGo nodes support several different configuration files. 
Each network (a Subnet or an L1) has its own config which applies to all blockchains/VMs in the network (see https://build.avax.network/docs/nodes/configure/avalanche-l1-configs). Each blockchain within the network can have its own chain config (see https://build.avax.network/docs/nodes/chain-configs/c-chain and https://github.com/ava-labs/subnet-evm/blob/master/plugin/evm/config/config.go for subnet-evm options). A chain can also have special requirements for the AvalancheGo node configuration itself (see https://build.avax.network/docs/nodes/configure/configs-flags). This command allows you to set all those files. - [`create`](#avalanche-blockchain-create): The blockchain create command builds a new genesis file to configure your Blockchain. By default, the command runs an interactive wizard. It walks you through all the steps you need to create your first Blockchain. The tool supports deploying Subnet-EVM and custom VMs. You can create a custom, user-generated genesis with a custom VM by providing the path to your genesis and VM binaries with the --genesis and --vm flags. By default, running the command with a blockchainName that already exists causes the command to fail. If you'd like to overwrite an existing configuration, pass the -f flag. - [`delete`](#avalanche-blockchain-delete): The blockchain delete command deletes an existing blockchain configuration. - [`deploy`](#avalanche-blockchain-deploy): The blockchain deploy command deploys your Blockchain configuration locally, to Fuji Testnet, or to Mainnet. At the end of the call, the command prints the RPC URL you can use to interact with the Subnet. Avalanche-CLI only supports deploying an individual Blockchain once per network. Subsequent attempts to deploy the same Blockchain to the same network (local, Fuji, Mainnet) aren't allowed. If you'd like to redeploy a Blockchain locally for testing, you must first call avalanche network clean to reset all deployed chain state.
Subsequent local deploys redeploy the chain with fresh state. You can deploy the same Blockchain to multiple networks, so you can take your locally tested Blockchain and deploy it on Fuji or Mainnet. - [`describe`](#avalanche-blockchain-describe): The blockchain describe command prints the details of a Blockchain configuration to the console. By default, the command prints a summary of the configuration. By providing the --genesis flag, the command instead prints out the raw genesis file. - [`export`](#avalanche-blockchain-export): The blockchain export command writes the details of an existing Blockchain deploy to a file. The command prompts for an output path. You can also provide one with the --output flag. - [`import`](#avalanche-blockchain-import): Import blockchain configurations into avalanche-cli. This command suite supports importing from a file created on another computer, or importing from blockchains running on public networks (e.g. created manually or with the deprecated subnet-cli). - [`join`](#avalanche-blockchain-join): The blockchain join command configures your validator node to begin validating a new Blockchain. To complete this process, you must have access to the machine running your validator. If the CLI is running on the same machine as your validator, it can generate or update your node's config file automatically. Alternatively, the command can print the necessary instructions to update your node manually. To complete the validation process, the Blockchain's admins must add the NodeID of your validator to the Blockchain's allow list by calling addValidator with your NodeID. After you update your validator's config, you need to restart your validator manually. If you provide the --avalanchego-config flag, this command attempts to edit the config file at that path. This command currently only supports Blockchains deployed on the Fuji Testnet and Mainnet.
- [`list`](#avalanche-blockchain-list): The blockchain list command prints the names of all created Blockchain configurations. Without any flags, it prints some general, static information about the Blockchain. With the --deployed flag, the command shows additional information including the VMID, BlockchainID and SubnetID. - [`publish`](#avalanche-blockchain-publish): The blockchain publish command publishes the Blockchain's VM to a repository. - [`removeValidator`](#avalanche-blockchain-removevalidator): The blockchain removeValidator command stops a whitelisted blockchain network validator from validating your deployed Blockchain. To remove the validator from the Subnet's allow list, provide the validator's unique NodeID. You can bypass these prompts by providing the values with flags. - [`stats`](#avalanche-blockchain-stats): The blockchain stats command prints validator statistics for the given Blockchain. - [`upgrade`](#avalanche-blockchain-upgrade): The blockchain upgrade command suite provides a collection of tools for updating your developmental and deployed Blockchains. - [`validators`](#avalanche-blockchain-validators): The blockchain validators command lists the validators of a blockchain and provides several statistics about them. - [`vmid`](#avalanche-blockchain-vmid): The blockchain vmid command prints the virtual machine ID (VMID) for the given Blockchain. **Flags:** ```bash -h, --help help for blockchain --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### addValidator The blockchain addValidator command adds a node as a validator to an L1 of the user provided deployed network. If the network is proof of authority, the owner of the validator manager contract must sign the transaction. If the network is proof of stake, the node must stake the L1's staking token. 
Both processes will issue a RegisterL1ValidatorTx on the P-Chain. This command currently only works on Blockchains deployed to either the Fuji Testnet or Mainnet. **Usage:** ```bash avalanche blockchain addValidator [subcommand] [flags] ``` **Flags:** ```bash --aggregator-allow-private-peers allow the signature aggregator to connect to peers with private IP (default true) --aggregator-extra-endpoints strings endpoints for extra nodes that are needed in signature aggregation --aggregator-log-level string log level to use with signature aggregator (default "Debug") --aggregator-log-to-stdout use stdout for signature aggregator logs --balance float set the AVAX balance of the validator that will be used for continuous fee on P-Chain --blockchain-genesis-key use genesis allocated key to pay fees for completing the validator's registration (blockchain gas token) --blockchain-key string CLI stored key to use to pay fees for completing the validator's registration (blockchain gas token) --blockchain-private-key string private key to use to pay fees for completing the validator's registration (blockchain gas token) --bls-proof-of-possession string set the BLS proof of possession of the validator to add --bls-public-key string set the BLS public key of the validator to add --cluster string operate on the given cluster --create-local-validator create additional local validator and add it to existing running local node --default-duration (for Subnets, not L1s) set duration so as to validate until primary validator ends its period --default-start-time (for Subnets, not L1s) use default start time for subnet validator (5 minutes later for fuji & mainnet, 30 seconds later for devnet) --default-validator-params (for Subnets, not L1s) use default weight/start/duration params for subnet validator --delegation-fee uint16 (PoS only) delegation fee (in bips) (default 100) --devnet operate on a devnet network --disable-owner string P-Chain address that will able to disable the 
validator with a P-Chain transaction --endpoint string use the given endpoint for network operations -e, --ewoq use ewoq key [fuji/devnet only] -f, --fuji testnet operate on fuji (alias to testnet -h, --help help for addValidator -k, --key string select the key to use [fuji/devnet only] -g, --ledger use ledger instead of key (always true on mainnet, defaults to false on fuji/devnet) --ledger-addrs strings use the given ledger addresses -l, --local operate on a local network -m, --mainnet operate on mainnet --node-endpoint string gather node id/bls from publicly available avalanchego apis on the given endpoint --node-id string node-id of the validator to add --output-tx-path string (for Subnets, not L1s) file path of the add validator tx --partial-sync set primary network partial sync for new validators (default true) --remaining-balance-owner string P-Chain address that will receive any leftover AVAX from the validator when it is removed from Subnet --rpc string connect to validator manager at the given rpc endpoint --stake-amount uint (PoS only) amount of tokens to stake --staking-period duration how long this validator will be staking --start-time string (for Subnets, not L1s) UTC start time when this validator starts validating, in 'YYYY-MM-DD HH:MM:SS' format --subnet-auth-keys strings (for Subnets, not L1s) control keys that will be used to authenticate add validator tx -t, --testnet fuji operate on testnet (alias to fuji) --wait-for-tx-acceptance (for Subnets, not L1s) just issue the add validator tx, without waiting for its acceptance (default true) --weight uint set the staking weight of the validator to add (default 20) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### changeOwner The blockchain changeOwner changes the owner of the deployed Blockchain. 
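As an illustrative sketch (the blockchain name `myBlockchain` and the stored key name `ownerKey` are hypothetical placeholders, not values from this document):

```bash
# Transfer ownership of a Fuji-deployed blockchain, paying with a CLI-stored key;
# remaining details (new control keys, threshold) are prompted interactively
avalanche blockchain changeOwner myBlockchain --fuji --key ownerKey
```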
**Usage:**

```bash
avalanche blockchain changeOwner [subcommand] [flags]
```

**Flags:**

```bash
--auth-keys strings    control keys that will be used to authenticate transfer blockchain ownership tx
--cluster string    operate on the given cluster
--control-keys strings    addresses that may make blockchain changes
--devnet    operate on a devnet network
--endpoint string    use the given endpoint for network operations
-e, --ewoq    use ewoq key [fuji/devnet]
-f, --fuji testnet    operate on fuji (alias to testnet)
-h, --help    help for changeOwner
-k, --key string    select the key to use [fuji/devnet]
-g, --ledger    use ledger instead of key (always true on mainnet, defaults to false on fuji/devnet)
--ledger-addrs strings    use the given ledger addresses
-l, --local    operate on a local network
-m, --mainnet    operate on mainnet
--output-tx-path string    file path of the transfer blockchain ownership tx
-s, --same-control-key    use the fee-paying key as control key
-t, --testnet fuji    operate on testnet (alias to fuji)
--threshold uint32    required number of control key signatures to make blockchain changes
--config string    config file (default is $HOME/.avalanche-cli/config.json)
--log-level string    log level for the application (default "ERROR")
--skip-update-check    skip check for new versions
```

### changeWeight

The blockchain changeWeight command changes the weight of an L1 Validator. The L1 has to be a Proof of Authority L1.
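A hypothetical invocation might look like the following (the L1 name `myL1` is a placeholder, and `$NODE_ID` stands in for the validator's real NodeID):

```bash
# NODE_ID must be set to the target validator's NodeID before running
# Change the validator's weight to 30 on a Fuji-deployed PoA L1
avalanche blockchain changeWeight myL1 --fuji --node-id "$NODE_ID" --weight 30
```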
**Usage:**

```bash
avalanche blockchain changeWeight [subcommand] [flags]
```

**Flags:**

```bash
--cluster string    operate on the given cluster
--devnet    operate on a devnet network
--endpoint string    use the given endpoint for network operations
-e, --ewoq    use ewoq key [fuji/devnet only]
-f, --fuji testnet    operate on fuji (alias to testnet)
-h, --help    help for changeWeight
-k, --key string    select the key to use [fuji/devnet only]
-g, --ledger    use ledger instead of key (always true on mainnet, defaults to false on fuji/devnet)
--ledger-addrs strings    use the given ledger addresses
-l, --local    operate on a local network
-m, --mainnet    operate on mainnet
--node-endpoint string    gather node id/bls from publicly available avalanchego apis on the given endpoint
--node-id string    node-id of the validator
-t, --testnet fuji    operate on testnet (alias to fuji)
--weight uint    set the new staking weight of the validator
--config string    config file (default is $HOME/.avalanche-cli/config.json)
--log-level string    log level for the application (default "ERROR")
--skip-update-check    skip check for new versions
```

### configure

AvalancheGo nodes support several different configuration files. Each network (a Subnet or an L1) has its own config, which applies to all blockchains/VMs in the network (see https://build.avax.network/docs/nodes/configure/avalanche-l1-configs). Each blockchain within the network can have its own chain config (see https://build.avax.network/docs/nodes/chain-configs/c-chain and https://github.com/ava-labs/subnet-evm/blob/master/plugin/evm/config/config.go for subnet-evm options). A chain can also have special requirements for the AvalancheGo node configuration itself (see https://build.avax.network/docs/nodes/configure/configs-flags). This command allows you to set all those files.
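For instance, assuming you have prepared config files locally (the blockchain name and the JSON file paths below are hypothetical):

```bash
# Register per-chain, per-subnet, and node-level config files with the CLI
avalanche blockchain configure myBlockchain \
  --chain-config ./chain.json \
  --subnet-config ./subnet.json \
  --node-config ./node.json
```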
**Usage:**

```bash
avalanche blockchain configure [subcommand] [flags]
```

**Flags:**

```bash
--chain-config string    path to the chain configuration
-h, --help    help for configure
--node-config string    path to avalanchego node configuration
--per-node-chain-config string    path to per node chain configuration for local network
--subnet-config string    path to the subnet configuration
--config string    config file (default is $HOME/.avalanche-cli/config.json)
--log-level string    log level for the application (default "ERROR")
--skip-update-check    skip check for new versions
```

### create

The blockchain create command builds a new genesis file to configure your Blockchain. By default, the command runs an interactive wizard that walks you through all the steps you need to create your first Blockchain.

The tool supports deploying Subnet-EVM and custom VMs. You can create a custom, user-generated genesis with a custom VM by providing the path to your genesis and VM binaries with the --genesis and --vm flags.

By default, running the command with a blockchainName that already exists causes the command to fail. If you'd like to overwrite an existing configuration, pass the -f flag.
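A sketch of a Subnet-EVM creation (the name, chain ID, and token symbol are hypothetical; the wizard may still prompt for anything not covered by the flags you pass):

```bash
# Create a Subnet-EVM based blockchain configuration with PoA validator
# management and default test settings
avalanche blockchain create myBlockchain \
  --evm \
  --evm-chain-id 12345 \
  --evm-token MYTKN \
  --proof-of-authority \
  --test-defaults
```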
**Usage:**

```bash
avalanche blockchain create [subcommand] [flags]
```

**Flags:**

```bash
--custom    use a custom VM template
--custom-vm-branch string    custom vm branch or commit
--custom-vm-build-script string    custom vm build-script
--custom-vm-path string    file path of custom vm to use
--custom-vm-repo-url string    custom vm repository url
--debug    enable blockchain debugging (default true)
--evm    use the Subnet-EVM as the base template
--evm-chain-id uint    chain ID to use with Subnet-EVM
--evm-defaults    deprecation notice: use '--production-defaults'
--evm-token string    token symbol to use with Subnet-EVM
--external-gas-token    use a gas token from another blockchain
-f, --force    overwrite the existing configuration if one exists
--from-github-repo    generate custom VM binary from github repository
--genesis string    file path of genesis to use
-h, --help    help for create
--icm    interoperate with other blockchains using ICM
--icm-registry-at-genesis    setup ICM registry smart contract on genesis [experimental]
--latest    use latest Subnet-EVM released version, takes precedence over --vm-version
--pre-release    use latest Subnet-EVM pre-released version, takes precedence over --vm-version
--production-defaults    use default production settings for your blockchain
--proof-of-authority    use proof of authority (PoA) for validator management
--proof-of-stake    use proof of stake (PoS) for validator management
--proxy-contract-owner string    EVM address that controls ProxyAdmin for TransparentProxy of ValidatorManager contract
--reward-basis-points uint    (PoS only) reward basis points for PoS Reward Calculator (default 100)
--sovereign    set to false if creating non-sovereign blockchain (default true)
--teleporter    interoperate with other blockchains using ICM
--test-defaults    use default test settings for your blockchain
--validator-manager-owner string    EVM address that controls Validator Manager Owner
--vm string    file path of custom vm to use (alias to custom-vm-path)
--vm-version string    version of Subnet-EVM template to use
--warp    generate a vm with warp support (needed for ICM) (default true)
--config string    config file (default is $HOME/.avalanche-cli/config.json)
--log-level string    log level for the application (default "ERROR")
--skip-update-check    skip check for new versions
```

### delete

The blockchain delete command deletes an existing blockchain configuration.

**Usage:**

```bash
avalanche blockchain delete [subcommand] [flags]
```

**Flags:**

```bash
-h, --help    help for delete
--config string    config file (default is $HOME/.avalanche-cli/config.json)
--log-level string    log level for the application (default "ERROR")
--skip-update-check    skip check for new versions
```

### deploy

The blockchain deploy command deploys your Blockchain configuration to a Local Network, Fuji Testnet, a DevNet, or Mainnet. At the end of the call, the command prints the RPC URL you can use to interact with the L1 / Subnet.

When deploying an L1, Avalanche-CLI lets you use your local machine as a bootstrap validator, so you don't need to run separate Avalanche nodes. This is controlled by the --use-local-machine flag (enabled by default on Local Network).

If --use-local-machine is set to true:

- Avalanche-CLI will call CreateSubnetTx, CreateChainTx, and ConvertSubnetToL1Tx, then sync the local machine bootstrap validator to the L1 and initialize the Validator Manager contract on the L1.

If using your own Avalanche nodes as bootstrap validators:

- Avalanche-CLI will call CreateSubnetTx, CreateChainTx, and ConvertSubnetToL1Tx.
- You will have to sync your bootstrap validators to the L1.
- Next, initialize the Validator Manager contract on the L1 using avalanche contract initValidatorManager [L1_Name].

Avalanche-CLI only supports deploying an individual Blockchain once per network. Subsequent attempts to deploy the same Blockchain to the same network (Local Network, Fuji, Mainnet) aren't allowed. If you'd like to redeploy a Blockchain locally for testing, you must first call avalanche network clean to reset all deployed chain state. Subsequent local deploys redeploy the chain with fresh state.

You can deploy the same Blockchain to multiple networks, so you can take your locally tested Blockchain and deploy it on Fuji or Mainnet.

**Usage:**

```bash
avalanche blockchain deploy [subcommand] [flags]
```

**Flags:**

```bash
--convert-only    avoid node track, restart and poa manager setup
-e, --ewoq    use ewoq key [local/devnet deploy only]
-h, --help    help for deploy
-k, --key string    select the key to use [fuji/devnet deploy only]
-g, --ledger    use ledger instead of key
--ledger-addrs strings    use the given ledger addresses
--mainnet-chain-id uint32    use different ChainID for mainnet deployment
--output-tx-path string    file path of the blockchain creation tx (for multi-sig signing)
-u, --subnet-id string    do not create a subnet, deploy the blockchain into the given subnet id
--subnet-only    command stops after CreateSubnetTx and returns SubnetID

Network Flags (Select One):
--cluster string    operate on the given cluster
--devnet    operate on a devnet network
--endpoint string    use the given endpoint for network operations
--fuji    operate on fuji (alias to `testnet`)
--local    operate on a local network
--mainnet    operate on mainnet
--testnet    operate on testnet (alias to `fuji`)

Bootstrap Validators Flags:
--balance float64    set the AVAX balance of each bootstrap validator that will be used for continuous fee on P-Chain (setting balance=1 equals to 1 AVAX for each bootstrap validator)
--bootstrap-endpoints stringSlice    take validator node info from the given endpoints
--bootstrap-filepath string    JSON file path that provides details about bootstrap validators
--change-owner-address string    address that will receive change if node is no longer L1 validator
--generate-node-id    set to true to generate Node IDs for bootstrap validators when none are set up; use these Node IDs to set up your Avalanche Nodes
--num-bootstrap-validators int    number of bootstrap validators to set up in sovereign L1

Local Machine Flags (Use Local Machine as Bootstrap Validator):
--avalanchego-path string    use this avalanchego binary path
--avalanchego-version string    use this version of avalanchego (ex: v1.17.12)
--http-port uintSlice    http port for node(s)
--partial-sync    set primary network partial sync for new validators
--staking-cert-key-path stringSlice    path to provided staking cert key for node(s)
--staking-port uintSlice    staking port for node(s)
--staking-signer-key-path stringSlice    path to provided staking signer key for node(s)
--staking-tls-key-path stringSlice    path to provided staking TLS key for node(s)
--use-local-machine    use local machine as a blockchain validator

Local Network Flags:
--avalanchego-path string    use this avalanchego binary path
--avalanchego-version string    use this version of avalanchego (ex: v1.17.12)
--num-nodes uint32    number of nodes to be created on local network deploy

Non Subnet-Only-Validators (Non-SOV) Flags:
--auth-keys stringSlice    control keys that will be used to authenticate chain creation
--control-keys stringSlice    addresses that may make blockchain changes
--same-control-key    use the fee-paying key as control key
--threshold uint32    required number of control key signatures to make blockchain changes

ICM Flags:
--cchain-funding-key string    key to be used to fund relayer account on cchain
--cchain-icm-key string    key to be used to pay for ICM deploys on C-Chain
--icm-key string    key to be used to pay for ICM deploys
--icm-version string    ICM version to deploy
--relay-cchain    relay C-Chain as source and destination
--relayer-allow-private-ips    allow relayer to connect to private ips
--relayer-amount float64    automatically fund relayer fee payments with the given amount
--relayer-key string    key to be used by default both for rewards and to pay fees
--relayer-log-level string    log level to be used for relayer logs
--relayer-path string    relayer binary to use
--relayer-version string    relayer version to deploy
--skip-icm-deploy    skip automatic ICM deploy
--skip-relayer    skip relayer deploy
--teleporter-messenger-contract-address-path string    path to an ICM Messenger contract address file
--teleporter-messenger-deployer-address-path string    path to an ICM Messenger deployer address file
--teleporter-messenger-deployer-tx-path string    path to an ICM Messenger deployer tx file
--teleporter-registry-bytecode-path string    path to an ICM Registry bytecode file

Proof Of Stake Flags:
--pos-maximum-stake-amount uint64    maximum stake amount
--pos-maximum-stake-multiplier uint8    maximum stake multiplier
--pos-minimum-delegation-fee uint16    minimum delegation fee
--pos-minimum-stake-amount uint64    minimum stake amount
--pos-minimum-stake-duration uint64    minimum stake duration (in seconds)
--pos-weight-to-value-factor uint64    weight to value factor

Signature Aggregator Flags:
--aggregator-log-level string    log level to use with signature aggregator
--aggregator-log-to-stdout    use stdout for signature aggregator logs
```

### describe

The blockchain describe command prints the details of a Blockchain configuration to the console. By default, the command prints a summary of the configuration. By providing the --genesis flag, the command instead prints out the raw genesis file.

**Usage:**

```bash
avalanche blockchain describe [subcommand] [flags]
```

**Flags:**

```bash
-g, --genesis    Print the genesis to the console directly instead of the summary
-h, --help    help for describe
--config string    config file (default is $HOME/.avalanche-cli/config.json)
--log-level string    log level for the application (default "ERROR")
--skip-update-check    skip check for new versions
```

### export

The blockchain export command writes the details of an existing Blockchain deploy to a file. The command prompts for an output path. You can also provide one with the --output flag.
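For instance (the blockchain name and output path below are hypothetical placeholders):

```bash
# Write the deploy details of an existing blockchain configuration to a file
avalanche blockchain export myBlockchain -o ./myBlockchain-export.json
```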
**Usage:**

```bash
avalanche blockchain export [subcommand] [flags]
```

**Flags:**

```bash
--custom-vm-branch string    custom vm branch
--custom-vm-build-script string    custom vm build-script
--custom-vm-repo-url string    custom vm repository url
-h, --help    help for export
-o, --output string    write the export data to the provided file path
--config string    config file (default is $HOME/.avalanche-cli/config.json)
--log-level string    log level for the application (default "ERROR")
--skip-update-check    skip check for new versions
```

### import

Import blockchain configurations into avalanche-cli. This command suite supports importing from a file created on another computer, or importing from blockchains running on public networks (e.g. created manually or with the deprecated subnet-cli).

**Usage:**

```bash
avalanche blockchain import [subcommand] [flags]
```

**Subcommands:**

- [`file`](#avalanche-blockchain-import-file): The blockchain import file command will import a blockchain configuration from a file or a git repository. To import from a file, you can optionally provide the path as a command-line argument. Alternatively, running the command without any arguments triggers an interactive wizard. To import from a repository, go through the wizard. By default, an imported Blockchain doesn't overwrite an existing Blockchain with the same name. To allow overwrites, provide the --force flag.
- [`public`](#avalanche-blockchain-import-public): The blockchain import public command imports a Blockchain configuration from a running network. By default, an imported Blockchain doesn't overwrite an existing Blockchain with the same name. To allow overwrites, provide the --force flag.

**Flags:**

```bash
-h, --help    help for import
--config string    config file (default is $HOME/.avalanche-cli/config.json)
--log-level string    log level for the application (default "ERROR")
--skip-update-check    skip check for new versions
```

#### import file

The blockchain import file command will import a blockchain configuration from a file or a git repository.

To import from a file, you can optionally provide the path as a command-line argument. Alternatively, running the command without any arguments triggers an interactive wizard. To import from a repository, go through the wizard.

By default, an imported Blockchain doesn't overwrite an existing Blockchain with the same name. To allow overwrites, provide the --force flag.

**Usage:**

```bash
avalanche blockchain import file [subcommand] [flags]
```

**Flags:**

```bash
--blockchain string    the blockchain configuration to import from the provided repo
--branch string    the repo branch to use if downloading a new repo
-f, --force    overwrite the existing configuration if one exists
-h, --help    help for file
--repo string    the repo to import (ex: ava-labs/avalanche-plugins-core) or url to download the repo from
--config string    config file (default is $HOME/.avalanche-cli/config.json)
--log-level string    log level for the application (default "ERROR")
--skip-update-check    skip check for new versions
```

#### import public

The blockchain import public command imports a Blockchain configuration from a running network. By default, an imported Blockchain doesn't overwrite an existing Blockchain with the same name. To allow overwrites, provide the --force flag.
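A hypothetical invocation (`$BLOCKCHAIN_ID` is a placeholder for the real blockchain ID of the live deployment):

```bash
# BLOCKCHAIN_ID must hold the target blockchain's ID before running
# Import a Subnet-EVM blockchain that is already live on Fuji
avalanche blockchain import public --fuji --evm --blockchain-id "$BLOCKCHAIN_ID"
```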
**Usage:**

```bash
avalanche blockchain import public [subcommand] [flags]
```

**Flags:**

```bash
--blockchain-id string    the blockchain ID
--cluster string    operate on the given cluster
--custom    use a custom VM template
--devnet    operate on a devnet network
--endpoint string    use the given endpoint for network operations
--evm    import a subnet-evm
--force    overwrite the existing configuration if one exists
-f, --fuji testnet    operate on fuji (alias to testnet)
-h, --help    help for public
-l, --local    operate on a local network
-m, --mainnet    operate on mainnet
--node-url string    [optional] URL of an already running validator
-t, --testnet fuji    operate on testnet (alias to fuji)
--config string    config file (default is $HOME/.avalanche-cli/config.json)
--log-level string    log level for the application (default "ERROR")
--skip-update-check    skip check for new versions
```

### join

The blockchain join command configures your validator node to begin validating a new Blockchain. To complete this process, you must have access to the machine running your validator. If the CLI is running on the same machine as your validator, it can generate or update your node's config file automatically. Alternatively, the command can print the necessary instructions to update your node manually.

To complete the validation process, the Blockchain's admins must add the NodeID of your validator to the Blockchain's allow list by calling addValidator with your NodeID. After you update your validator's config, you need to restart your validator manually. If you provide the --avalanchego-config flag, this command attempts to edit the config file at that path.

This command currently only supports Blockchains deployed on the Fuji Testnet and Mainnet.
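A sketch of joining from the validator machine itself (the blockchain name, file paths, and `$NODE_ID` are placeholders for your node's actual values):

```bash
# NODE_ID must hold this validator's NodeID before running
# Update the node's config so it starts tracking the blockchain on Fuji
avalanche blockchain join myBlockchain --fuji \
  --avalanchego-config "$HOME/.avalanchego/config.json" \
  --plugin-dir "$HOME/.avalanchego/plugins" \
  --node-id "$NODE_ID"
```

Remember to restart the validator afterwards, as noted above.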
**Usage:**

```bash
avalanche blockchain join [subcommand] [flags]
```

**Flags:**

```bash
--avalanchego-config string    file path of the avalanchego config file
--cluster string    operate on the given cluster
--data-dir string    path of avalanchego's data dir directory
--devnet    operate on a devnet network
--endpoint string    use the given endpoint for network operations
--force-write    if true, skip the prompt to overwrite the config file
-f, --fuji testnet    operate on fuji (alias to testnet)
-h, --help    help for join
-k, --key string    select the key to use [fuji only]
-g, --ledger    use ledger instead of key (always true on mainnet, defaults to false on fuji)
--ledger-addrs strings    use the given ledger addresses
-l, --local    operate on a local network
-m, --mainnet    operate on mainnet
--node-id string    set the NodeID of the validator to check
--plugin-dir string    file path of avalanchego's plugin directory
--print    if true, print the manual config without prompting
--stake-amount uint    amount of tokens to stake on validator
--staking-period duration    how long validator validates for after start time
--start-time string    start time that validator starts validating
-t, --testnet fuji    operate on testnet (alias to fuji)
--config string    config file (default is $HOME/.avalanche-cli/config.json)
--log-level string    log level for the application (default "ERROR")
--skip-update-check    skip check for new versions
```

### list

The blockchain list command prints the names of all created Blockchain configurations. Without any flags, it prints some general, static information about the Blockchain. With the --deployed flag, the command shows additional information including the VMID, BlockchainID and SubnetID.
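For example:

```bash
# Show all created configurations, including VMID, BlockchainID and SubnetID
# for the deployed ones
avalanche blockchain list --deployed
```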
**Usage:**

```bash
avalanche blockchain list [subcommand] [flags]
```

**Flags:**

```bash
--deployed    show additional deploy information
-h, --help    help for list
--config string    config file (default is $HOME/.avalanche-cli/config.json)
--log-level string    log level for the application (default "ERROR")
--skip-update-check    skip check for new versions
```

### publish

The blockchain publish command publishes the Blockchain's VM to a repository.

**Usage:**

```bash
avalanche blockchain publish [subcommand] [flags]
```

**Flags:**

```bash
--alias string    We publish to a remote repo, but identify the repo locally under a user-provided alias (e.g. myrepo).
--force    If true, ignores if the blockchain has been published in the past, and attempts a forced publish.
-h, --help    help for publish
--no-repo-path string    Do not let the tool manage file publishing, but have it only generate the files and put them in the location given by this flag.
--repo-url string    The URL of the repo where we are publishing
--subnet-file-path string    Path to the Blockchain description file. If not given, a prompting sequence will be initiated.
--vm-file-path string    Path to the VM description file. If not given, a prompting sequence will be initiated.
--config string    config file (default is $HOME/.avalanche-cli/config.json)
--log-level string    log level for the application (default "ERROR")
--skip-update-check    skip check for new versions
```

### removeValidator

The blockchain removeValidator command stops a whitelisted blockchain network validator from validating your deployed Blockchain. To remove the validator from the Subnet's allow list, provide the validator's unique NodeID. You can bypass these prompts by providing the values with flags.
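A hypothetical non-interactive removal (`$NODE_ID`, the blockchain name, and the key name are placeholders):

```bash
# NODE_ID must hold the NodeID of the validator being removed
# Remove a validator from a Fuji-deployed blockchain without prompts
avalanche blockchain removeValidator myBlockchain --fuji \
  --node-id "$NODE_ID" \
  --key myKey
```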
**Usage:** ```bash avalanche blockchain removeValidator [subcommand] [flags] ``` **Flags:** ```bash --aggregator-allow-private-peers allow the signature aggregator to connect to peers with private IP (default true) --aggregator-extra-endpoints strings endpoints for extra nodes that are needed in signature aggregation --aggregator-log-level string log level to use with signature aggregator (default "Debug") --aggregator-log-to-stdout use stdout for signature aggregator logs --auth-keys strings (for non-SOV blockchain only) control keys that will be used to authenticate the removeValidator tx --blockchain-genesis-key use genesis allocated key to pay fees for completing the validator's removal (blockchain gas token) --blockchain-key string CLI stored key to use to pay fees for completing the validator's removal (blockchain gas token) --blockchain-private-key string private key to use to pay fees for completing the validator's removal (blockchain gas token) --cluster string operate on the given cluster --devnet operate on a devnet network --endpoint string use the given endpoint for network operations --force force validator removal even if it's not getting rewarded -f, --fuji testnet operate on fuji (alias to testnet -h, --help help for removeValidator -k, --key string select the key to use [fuji deploy only] -g, --ledger use ledger instead of key (always true on mainnet, defaults to false on fuji) --ledger-addrs strings use the given ledger addresses -l, --local operate on a local network -m, --mainnet operate on mainnet --node-endpoint string remove validator that responds to the given endpoint --node-id string node-id of the validator --output-tx-path string (for non-SOV blockchain only) file path of the removeValidator tx --rpc string connect to validator manager at the given rpc endpoint -t, --testnet fuji operate on testnet (alias to fuji) --uptime uint validator's uptime in seconds. 
If not provided, it will be automatically calculated --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### stats The blockchain stats command prints validator statistics for the given Blockchain. **Usage:** ```bash avalanche blockchain stats [subcommand] [flags] ``` **Flags:** ```bash --cluster string operate on the given cluster --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet -h, --help help for stats -l, --local operate on a local network -m, --mainnet operate on mainnet -t, --testnet fuji operate on testnet (alias to fuji) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### upgrade The blockchain upgrade command suite provides a collection of tools for updating your developmental and deployed Blockchains. **Usage:** ```bash avalanche blockchain upgrade [subcommand] [flags] ``` **Subcommands:** - [`apply`](#avalanche-blockchain-upgrade-apply): Apply generated upgrade bytes to running Blockchain nodes to trigger a network upgrade. For public networks (Fuji Testnet or Mainnet), to complete this process, you must have access to the machine running your validator. If the CLI is running on the same machine as your validator, it can manipulate your node's configuration automatically. Alternatively, the command can print the necessary instructions to upgrade your node manually. After you update your validator's configuration, you need to restart your validator manually. If you provide the --avalanchego-chain-config-dir flag, this command attempts to write the upgrade file at that path. 
  Refer to https://docs.avax.network/nodes/maintain/chain-config-flags#subnet-chain-configs for related documentation.
- [`export`](#avalanche-blockchain-upgrade-export): Export the upgrade bytes file to a location of choice on disk.
- [`generate`](#avalanche-blockchain-upgrade-generate): The blockchain upgrade generate command builds a new upgrade.json file to customize your Blockchain. It guides the user through the process using an interactive wizard.
- [`import`](#avalanche-blockchain-upgrade-import): Import the upgrade bytes file into the local environment.
- [`print`](#avalanche-blockchain-upgrade-print): Print the upgrade.json file content.
- [`vm`](#avalanche-blockchain-upgrade-vm): The blockchain upgrade vm command enables the user to upgrade their Blockchain's VM binary. The command can upgrade both local Blockchains and publicly deployed Blockchains on Fuji and Mainnet. The command walks the user through an interactive wizard. The user can skip the wizard by providing command line flags.

**Flags:**
```bash
-h, --help  help for upgrade
--config string  config file (default is $HOME/.avalanche-cli/config.json)
--log-level string  log level for the application (default "ERROR")
--skip-update-check  skip check for new versions
```

#### upgrade apply

Apply generated upgrade bytes to running Blockchain nodes to trigger a network upgrade. For public networks (Fuji Testnet or Mainnet), you must have access to the machine running your validator to complete this process. If the CLI is running on the same machine as your validator, it can update your node's configuration automatically. Alternatively, the command can print the necessary instructions for upgrading your node manually. After you update your validator's configuration, you need to restart your validator manually. If you provide the `--avalanchego-chain-config-dir` flag, this command attempts to write the upgrade file at that path. Refer to https://docs.avax.network/nodes/maintain/chain-config-flags#subnet-chain-configs for related documentation.

**Usage:**
```bash
avalanche blockchain upgrade apply [subcommand] [flags]
```

**Flags:**
```bash
--avalanchego-chain-config-dir string  avalanchego's chain config file directory (default "/home/runner/.avalanchego/chains")
--config  create upgrade config for future subnet deployments (same as generate)
--force  if true, don't prompt for confirmation of timestamps in the past
--fuji fuji  apply upgrade to an existing fuji deployment (alias for `testnet`)
-h, --help  help for apply
--local local  apply upgrade to an existing local deployment
--mainnet mainnet  apply upgrade to an existing mainnet deployment
--print  if true, print the manual config without prompting (for public networks only)
--testnet testnet  apply upgrade to an existing testnet deployment (alias for `fuji`)
--log-level string  log level for the application (default "ERROR")
--skip-update-check  skip check for new versions
```

#### upgrade export

Export the upgrade bytes file to a location of choice on disk.

**Usage:**
```bash
avalanche blockchain upgrade export [subcommand] [flags]
```

**Flags:**
```bash
--force  if true, overwrite a possibly existing file without prompting
-h, --help  help for export
--upgrade-filepath string  export upgrade bytes file to location of choice on disk
--config string  config file (default is $HOME/.avalanche-cli/config.json)
--log-level string  log level for the application (default "ERROR")
--skip-update-check  skip check for new versions
```

#### upgrade generate

The blockchain upgrade generate command builds a new upgrade.json file to customize your Blockchain. It guides the user through the process using an interactive wizard.
**Usage:**
```bash
avalanche blockchain upgrade generate [subcommand] [flags]
```

**Flags:**
```bash
-h, --help  help for generate
--config string  config file (default is $HOME/.avalanche-cli/config.json)
--log-level string  log level for the application (default "ERROR")
--skip-update-check  skip check for new versions
```

#### upgrade import

Import the upgrade bytes file into the local environment.

**Usage:**
```bash
avalanche blockchain upgrade import [subcommand] [flags]
```

**Flags:**
```bash
-h, --help  help for import
--upgrade-filepath string  import upgrade bytes file into local environment
--config string  config file (default is $HOME/.avalanche-cli/config.json)
--log-level string  log level for the application (default "ERROR")
--skip-update-check  skip check for new versions
```

#### upgrade print

Print the upgrade.json file content.

**Usage:**
```bash
avalanche blockchain upgrade print [subcommand] [flags]
```

**Flags:**
```bash
-h, --help  help for print
--config string  config file (default is $HOME/.avalanche-cli/config.json)
--log-level string  log level for the application (default "ERROR")
--skip-update-check  skip check for new versions
```

#### upgrade vm

The blockchain upgrade vm command enables the user to upgrade their Blockchain's VM binary. The command can upgrade both local Blockchains and publicly deployed Blockchains on Fuji and Mainnet. The command walks the user through an interactive wizard. The user can skip the wizard by providing command line flags.
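As an illustrative sketch of skipping the wizard with flags (the blockchain name `myblockchain` is a placeholder):

```bash
# Hypothetical example: upgrade the VM of a locally deployed blockchain
# named "myblockchain" to the latest available version, skipping the wizard.
avalanche blockchain upgrade vm myblockchain --local --latest
```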
**Usage:**
```bash
avalanche blockchain upgrade vm [subcommand] [flags]
```

**Flags:**
```bash
--binary string  upgrade to custom binary
--config  upgrade config for future subnet deployments
--fuji fuji  upgrade an existing fuji deployment (alias for `testnet`)
-h, --help  help for vm
--latest  upgrade to latest version
--local local  upgrade an existing local deployment
--mainnet mainnet  upgrade an existing mainnet deployment
--plugin-dir string  plugin directory to automatically upgrade VM
--print  print instructions for upgrading
--testnet testnet  upgrade an existing testnet deployment (alias for `fuji`)
--version string  upgrade to custom version
--log-level string  log level for the application (default "ERROR")
--skip-update-check  skip check for new versions
```

### validators

The blockchain validators command lists the validators of a blockchain and provides several statistics about them.

**Usage:**
```bash
avalanche blockchain validators [subcommand] [flags]
```

**Flags:**
```bash
--cluster string  operate on the given cluster
--devnet  operate on a devnet network
--endpoint string  use the given endpoint for network operations
-f, --fuji testnet  operate on fuji (alias to testnet)
-h, --help  help for validators
-l, --local  operate on a local network
-m, --mainnet  operate on mainnet
-t, --testnet fuji  operate on testnet (alias to fuji)
--config string  config file (default is $HOME/.avalanche-cli/config.json)
--log-level string  log level for the application (default "ERROR")
--skip-update-check  skip check for new versions
```

### vmid

The blockchain vmid command prints the virtual machine ID (VMID) for the given Blockchain.
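For example, assuming a CLI-managed blockchain named `myblockchain` (a placeholder), its VMID could be printed with:

```bash
# Hypothetical example: print the virtual machine ID for "myblockchain".
avalanche blockchain vmid myblockchain
```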
**Usage:**
```bash
avalanche blockchain vmid [subcommand] [flags]
```

**Flags:**
```bash
-h, --help  help for vmid
--config string  config file (default is $HOME/.avalanche-cli/config.json)
--log-level string  log level for the application (default "ERROR")
--skip-update-check  skip check for new versions
```

## avalanche config

Customize configuration for Avalanche-CLI.

**Usage:**
```bash
avalanche config [subcommand] [flags]
```

**Subcommands:**

- [`authorize-cloud-access`](#avalanche-config-authorize-cloud-access): Set preferences to authorize access to cloud resources.
- [`metrics`](#avalanche-config-metrics): Set user metrics collection preferences.
- [`migrate`](#avalanche-config-migrate): Migrate the old ~/.avalanche-cli.json and ~/.avalanche-cli/config to $HOME/.avalanche-cli/config.json.
- [`snapshotsAutoSave`](#avalanche-config-snapshotsautosave): Set user preference for automatically saving local network snapshots.
- [`update`](#avalanche-config-update): Set user preference for update checks.

**Flags:**
```bash
-h, --help  help for config
--config string  config file (default is $HOME/.avalanche-cli/config.json)
--log-level string  log level for the application (default "ERROR")
--skip-update-check  skip check for new versions
```

### authorize-cloud-access

Set preferences to authorize access to cloud resources.

**Usage:**
```bash
avalanche config authorize-cloud-access [subcommand] [flags]
```

**Flags:**
```bash
-h, --help  help for authorize-cloud-access
--config string  config file (default is $HOME/.avalanche-cli/config.json)
--log-level string  log level for the application (default "ERROR")
--skip-update-check  skip check for new versions
```

### metrics

Set user metrics collection preferences.

**Usage:**
```bash
avalanche config metrics [subcommand] [flags]
```

**Flags:**
```bash
-h, --help  help for metrics
--config string  config file (default is $HOME/.avalanche-cli/config.json)
--log-level string  log level for the application (default "ERROR")
--skip-update-check  skip check for new versions
```

### migrate

The migrate command migrates the old ~/.avalanche-cli.json and ~/.avalanche-cli/config to $HOME/.avalanche-cli/config.json.

**Usage:**
```bash
avalanche config migrate [subcommand] [flags]
```

**Flags:**
```bash
-h, --help  help for migrate
--config string  config file (default is $HOME/.avalanche-cli/config.json)
--log-level string  log level for the application (default "ERROR")
--skip-update-check  skip check for new versions
```

### snapshotsAutoSave

Set user preference for automatically saving local network snapshots.

**Usage:**
```bash
avalanche config snapshotsAutoSave [subcommand] [flags]
```

**Flags:**
```bash
-h, --help  help for snapshotsAutoSave
--config string  config file (default is $HOME/.avalanche-cli/config.json)
--log-level string  log level for the application (default "ERROR")
--skip-update-check  skip check for new versions
```

### update

Set user preference for update checks.

**Usage:**
```bash
avalanche config update [subcommand] [flags]
```

**Flags:**
```bash
-h, --help  help for update
--config string  config file (default is $HOME/.avalanche-cli/config.json)
--log-level string  log level for the application (default "ERROR")
--skip-update-check  skip check for new versions
```

## avalanche contract

The contract command suite provides a collection of tools for deploying and interacting with smart contracts.

**Usage:**
```bash
avalanche contract [subcommand] [flags]
```

**Subcommands:**

- [`deploy`](#avalanche-contract-deploy): The contract command suite provides a collection of tools for deploying smart contracts.
- [`initValidatorManager`](#avalanche-contract-initvalidatormanager): Initializes a Proof of Authority (PoA) or Proof of Stake (PoS) Validator Manager contract on a Blockchain and sets up the initial validator set on the Blockchain.
  For more info on Validator Manager, please head to https://github.com/ava-labs/icm-contracts/tree/main/contracts/validator-manager

**Flags:**
```bash
-h, --help  help for contract
--config string  config file (default is $HOME/.avalanche-cli/config.json)
--log-level string  log level for the application (default "ERROR")
--skip-update-check  skip check for new versions
```

### deploy

The contract command suite provides a collection of tools for deploying smart contracts.

**Usage:**
```bash
avalanche contract deploy [subcommand] [flags]
```

**Subcommands:**

- [`erc20`](#avalanche-contract-deploy-erc20): Deploy an ERC20 token into a given Network and Blockchain.

**Flags:**
```bash
-h, --help  help for deploy
--config string  config file (default is $HOME/.avalanche-cli/config.json)
--log-level string  log level for the application (default "ERROR")
--skip-update-check  skip check for new versions
```

#### deploy erc20

Deploy an ERC20 token into a given Network and Blockchain.

**Usage:**
```bash
avalanche contract deploy erc20 [subcommand] [flags]
```

**Flags:**
```bash
--blockchain string  deploy the ERC20 contract into the given CLI blockchain
--blockchain-id string  deploy the ERC20 contract into the given blockchain ID/Alias
--c-chain  deploy the ERC20 contract into C-Chain
--cluster string  operate on the given cluster
--devnet  operate on a devnet network
--endpoint string  use the given endpoint for network operations
-f, --fuji testnet  operate on fuji (alias to testnet)
--funded string  set the funded address
--genesis-key  use genesis allocated key as contract deployer
-h, --help  help for erc20
--key string  CLI stored key to use as contract deployer
-l, --local  operate on a local network
-m, --mainnet  operate on mainnet
--private-key string  private key to use as contract deployer
--rpc string  deploy the contract into the given rpc endpoint
--supply uint  set the token supply
--symbol string  set the token symbol
-t, --testnet fuji  operate on testnet (alias to fuji)
--config string  config file (default is $HOME/.avalanche-cli/config.json)
--log-level string  log level for the application (default "ERROR")
--skip-update-check  skip check for new versions
```

### initValidatorManager

Initializes a Proof of Authority (PoA) or Proof of Stake (PoS) Validator Manager contract on a Blockchain and sets up the initial validator set on the Blockchain. For more info on Validator Manager, please head to https://github.com/ava-labs/icm-contracts/tree/main/contracts/validator-manager

**Usage:**
```bash
avalanche contract initValidatorManager [subcommand] [flags]
```

**Flags:**
```bash
--aggregator-allow-private-peers  allow the signature aggregator to connect to peers with private IP (default true)
--aggregator-extra-endpoints strings  endpoints for extra nodes that are needed in signature aggregation
--aggregator-log-level string  log level to use with signature aggregator (default "Debug")
--aggregator-log-to-stdout  dump signature aggregator logs to stdout
--cluster string  operate on the given cluster
--devnet  operate on a devnet network
--endpoint string  use the given endpoint for network operations
-f, --fuji testnet  operate on fuji (alias to testnet)
--genesis-key  use genesis allocated key as contract deployer
-h, --help  help for initValidatorManager
--key string  CLI stored key to use as contract deployer
-l, --local  operate on a local network
-m, --mainnet  operate on mainnet
--pos-maximum-stake-amount uint  (PoS only) maximum stake amount (default 1000)
--pos-maximum-stake-multiplier uint8  (PoS only) maximum stake multiplier (default 1)
--pos-minimum-delegation-fee uint16  (PoS only) minimum delegation fee (default 1)
--pos-minimum-stake-amount uint  (PoS only) minimum stake amount (default 1)
--pos-minimum-stake-duration uint  (PoS only) minimum stake duration (in seconds) (default 100)
--pos-reward-calculator-address string  (PoS only) initialize the ValidatorManager with reward calculator address
--pos-weight-to-value-factor uint  (PoS only) weight to value factor (default 1)
--private-key string  private key to use as contract deployer
--rpc string  deploy the contract into the given rpc endpoint
-t, --testnet fuji  operate on testnet (alias to fuji)
--config string  config file (default is $HOME/.avalanche-cli/config.json)
--log-level string  log level for the application (default "ERROR")
--skip-update-check  skip check for new versions
```

## avalanche help

Help provides help for any command in the application. Simply type avalanche help [path to command] for full details.

**Usage:**
```bash
avalanche help [subcommand] [flags]
```

**Flags:**
```bash
-h, --help  help for help
--config string  config file (default is $HOME/.avalanche-cli/config.json)
--log-level string  log level for the application (default "ERROR")
--skip-update-check  skip check for new versions
```

## avalanche icm

The messenger command suite provides a collection of tools for interacting with ICM messenger contracts.

**Usage:**
```bash
avalanche icm [subcommand] [flags]
```

**Subcommands:**

- [`deploy`](#avalanche-icm-deploy): Deploys ICM Messenger and Registry into a given L1.
- [`sendMsg`](#avalanche-icm-sendmsg): Sends an ICM message between two blockchains and waits for its reception.

**Flags:**
```bash
-h, --help  help for icm
--config string  config file (default is $HOME/.avalanche-cli/config.json)
--log-level string  log level for the application (default "ERROR")
--skip-update-check  skip check for new versions
```

### deploy

Deploys ICM Messenger and Registry into a given L1. For Local Networks, it also deploys into C-Chain.
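A minimal sketch, assuming a running local network and a CLI-managed blockchain named `myblockchain` (a placeholder):

```bash
# Hypothetical example: deploy the ICM Messenger and Registry (both enabled
# by default) to "myblockchain" on a local network.
avalanche icm deploy --local --blockchain myblockchain
```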
**Usage:**
```bash
avalanche icm deploy [subcommand] [flags]
```

**Flags:**
```bash
--blockchain string  deploy ICM into the given CLI blockchain
--blockchain-id string  deploy ICM into the given blockchain ID/Alias
--c-chain  deploy ICM into C-Chain
--cchain-key string  key to be used to pay fees to deploy ICM to C-Chain
--cluster string  operate on the given cluster
--deploy-messenger  deploy ICM Messenger (default true)
--deploy-registry  deploy ICM Registry (default true)
--devnet  operate on a devnet network
--endpoint string  use the given endpoint for network operations
--force-registry-deploy  deploy ICM Registry even if Messenger has already been deployed
-f, --fuji testnet  operate on fuji (alias to testnet)
--genesis-key  use genesis allocated key to fund ICM deploy
-h, --help  help for deploy
--include-cchain  deploy ICM also to C-Chain
--key string  CLI stored key to use to fund ICM deploy
-l, --local  operate on a local network
-m, --mainnet  operate on mainnet
--messenger-contract-address-path string  path to a messenger contract address file
--messenger-deployer-address-path string  path to a messenger deployer address file
--messenger-deployer-tx-path string  path to a messenger deployer tx file
--private-key string  private key to use to fund ICM deploy
--registry-bytecode-path string  path to a registry bytecode file
--rpc-url string  use the given RPC URL to connect to the subnet
-t, --testnet fuji  operate on testnet (alias to fuji)
--version string  version to deploy (default "latest")
--config string  config file (default is $HOME/.avalanche-cli/config.json)
--log-level string  log level for the application (default "ERROR")
--skip-update-check  skip check for new versions
```

### sendMsg

Sends an ICM message between two blockchains and waits for its reception.
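A hedged sketch, assuming sendMsg accepts the source blockchain, destination blockchain, and message as positional arguments (names and key are placeholders):

```bash
# Hypothetical example: send "hello" from blockchain1 to blockchain2 on a
# local network, using a CLI stored key to originate the message and pay fees.
avalanche icm sendMsg --local --key mytestkey blockchain1 blockchain2 "hello"
```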
**Usage:**
```bash
avalanche icm sendMsg [subcommand] [flags]
```

**Flags:**
```bash
--cluster string  operate on the given cluster
--dest-rpc string  use the given destination blockchain rpc endpoint
--destination-address string  deliver the message to the given contract destination address
--devnet  operate on a devnet network
--endpoint string  use the given endpoint for network operations
-f, --fuji testnet  operate on fuji (alias to testnet)
--genesis-key  use genesis allocated key as message originator and to pay source blockchain fees
-h, --help  help for sendMsg
--hex-encoded  given message is hex encoded
--key string  CLI stored key to use as message originator and to pay source blockchain fees
-l, --local  operate on a local network
-m, --mainnet  operate on mainnet
--private-key string  private key to use as message originator and to pay source blockchain fees
--source-rpc string  use the given source blockchain rpc endpoint
-t, --testnet fuji  operate on testnet (alias to fuji)
--config string  config file (default is $HOME/.avalanche-cli/config.json)
--log-level string  log level for the application (default "ERROR")
--skip-update-check  skip check for new versions
```

## avalanche ictt

The ictt command suite provides tools to deploy and manage Interchain Token Transferrers.
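As an unverified sketch, a native-token Transferrer with its Home on the C-Chain and its Remote on a CLI blockchain (`myblockchain` is a placeholder) might be deployed with:

```bash
# Hypothetical example: deploy a Transferrer Home for the C-Chain's native
# token and a corresponding Remote on "myblockchain", on a local network.
avalanche ictt deploy --local \
  --c-chain-home --deploy-native-home \
  --remote-blockchain myblockchain --deploy-native-remote
```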
**Usage:**
```bash
avalanche ictt [subcommand] [flags]
```

**Subcommands:**

- [`deploy`](#avalanche-ictt-deploy): Deploys a Token Transferrer into a given Network and Subnets.

**Flags:**
```bash
-h, --help  help for ictt
--config string  config file (default is $HOME/.avalanche-cli/config.json)
--log-level string  log level for the application (default "ERROR")
--skip-update-check  skip check for new versions
```

### deploy

Deploys a Token Transferrer into a given Network and Subnets.

**Usage:**
```bash
avalanche ictt deploy [subcommand] [flags]
```

**Flags:**
```bash
--c-chain-home  set the Transferrer's Home Chain into C-Chain
--c-chain-remote  set the Transferrer's Remote Chain into C-Chain
--cluster string  operate on the given cluster
--deploy-erc20-home string  deploy a Transferrer Home for the given Chain's ERC20 Token
--deploy-native-home  deploy a Transferrer Home for the Chain's Native Token
--deploy-native-remote  deploy a Transferrer Remote for the Chain's Native Token
--devnet  operate on a devnet network
--endpoint string  use the given endpoint for network operations
-f, --fuji testnet  operate on fuji (alias to testnet)
-h, --help  help for deploy
--home-blockchain string  set the Transferrer's Home Chain into the given CLI blockchain
--home-genesis-key  use genesis allocated key to deploy Transferrer Home
--home-key string  CLI stored key to use to deploy Transferrer Home
--home-private-key string  private key to use to deploy Transferrer Home
--home-rpc string  use the given RPC URL to connect to the home blockchain
-l, --local  operate on a local network
-m, --mainnet  operate on mainnet
--remote-blockchain string  set the Transferrer's Remote Chain into the given CLI blockchain
--remote-genesis-key  use genesis allocated key to deploy Transferrer Remote
--remote-key string  CLI stored key to use to deploy Transferrer Remote
--remote-private-key string  private key to use to deploy Transferrer Remote
--remote-rpc string  use the given RPC URL to connect to the remote blockchain
--remote-token-decimals uint8  use the given number of token decimals for the Transferrer Remote [defaults to token home's decimals (18 for a new wrapped native home token)]
--remove-minter-admin  remove the native minter precompile admin found on remote blockchain genesis
-t, --testnet fuji  operate on testnet (alias to fuji)
--use-home string  use the given Transferrer's Home Address
--version string  tag/branch/commit of Avalanche Interchain Token Transfer (ICTT) to be used (defaults to main branch)
--config string  config file (default is $HOME/.avalanche-cli/config.json)
--log-level string  log level for the application (default "ERROR")
--skip-update-check  skip check for new versions
```

## avalanche interchain

The interchain command suite provides a collection of tools to set up and manage interoperability between blockchains.

**Usage:**
```bash
avalanche interchain [subcommand] [flags]
```

**Subcommands:**

- [`messenger`](#avalanche-interchain-messenger): The messenger command suite provides a collection of tools for interacting with ICM messenger contracts.
- [`relayer`](#avalanche-interchain-relayer): The relayer command suite provides a collection of tools for deploying and configuring ICM relayers.
- [`tokenTransferrer`](#avalanche-interchain-tokentransferrer): The tokenTransferrer command suite provides tools to deploy and manage Token Transferrers.

**Flags:**
```bash
-h, --help  help for interchain
--config string  config file (default is $HOME/.avalanche-cli/config.json)
--log-level string  log level for the application (default "ERROR")
--skip-update-check  skip check for new versions
```

### messenger

The messenger command suite provides a collection of tools for interacting with ICM messenger contracts.

**Usage:**
```bash
avalanche interchain messenger [subcommand] [flags]
```

**Subcommands:**

- [`deploy`](#avalanche-interchain-messenger-deploy): Deploys ICM Messenger and Registry into a given L1.
- [`sendMsg`](#avalanche-interchain-messenger-sendmsg): Sends an ICM message between two blockchains and waits for its reception.

**Flags:**
```bash
-h, --help  help for messenger
--config string  config file (default is $HOME/.avalanche-cli/config.json)
--log-level string  log level for the application (default "ERROR")
--skip-update-check  skip check for new versions
```

#### messenger deploy

Deploys ICM Messenger and Registry into a given L1. For Local Networks, it also deploys into C-Chain.

**Usage:**
```bash
avalanche interchain messenger deploy [subcommand] [flags]
```

**Flags:**
```bash
--blockchain string  deploy ICM into the given CLI blockchain
--blockchain-id string  deploy ICM into the given blockchain ID/Alias
--c-chain  deploy ICM into C-Chain
--cchain-key string  key to be used to pay fees to deploy ICM to C-Chain
--cluster string  operate on the given cluster
--deploy-messenger  deploy ICM Messenger (default true)
--deploy-registry  deploy ICM Registry (default true)
--devnet  operate on a devnet network
--endpoint string  use the given endpoint for network operations
--force-registry-deploy  deploy ICM Registry even if Messenger has already been deployed
-f, --fuji testnet  operate on fuji (alias to testnet)
--genesis-key  use genesis allocated key to fund ICM deploy
-h, --help  help for deploy
--include-cchain  deploy ICM also to C-Chain
--key string  CLI stored key to use to fund ICM deploy
-l, --local  operate on a local network
-m, --mainnet  operate on mainnet
--messenger-contract-address-path string  path to a messenger contract address file
--messenger-deployer-address-path string  path to a messenger deployer address file
--messenger-deployer-tx-path string  path to a messenger deployer tx file
--private-key string  private key to use to fund ICM deploy
--registry-bytecode-path string  path to a registry bytecode file
--rpc-url string  use the given RPC URL to connect to the subnet
-t, --testnet fuji  operate on testnet (alias to fuji)
--version string  version to deploy (default "latest")
--config string  config file (default is $HOME/.avalanche-cli/config.json)
--log-level string  log level for the application (default "ERROR")
--skip-update-check  skip check for new versions
```

#### messenger sendMsg

Sends an ICM message between two blockchains and waits for its reception.

**Usage:**
```bash
avalanche interchain messenger sendMsg [subcommand] [flags]
```

**Flags:**
```bash
--cluster string  operate on the given cluster
--dest-rpc string  use the given destination blockchain rpc endpoint
--destination-address string  deliver the message to the given contract destination address
--devnet  operate on a devnet network
--endpoint string  use the given endpoint for network operations
-f, --fuji testnet  operate on fuji (alias to testnet)
--genesis-key  use genesis allocated key as message originator and to pay source blockchain fees
-h, --help  help for sendMsg
--hex-encoded  given message is hex encoded
--key string  CLI stored key to use as message originator and to pay source blockchain fees
-l, --local  operate on a local network
-m, --mainnet  operate on mainnet
--private-key string  private key to use as message originator and to pay source blockchain fees
--source-rpc string  use the given source blockchain rpc endpoint
-t, --testnet fuji  operate on testnet (alias to fuji)
--config string  config file (default is $HOME/.avalanche-cli/config.json)
--log-level string  log level for the application (default "ERROR")
--skip-update-check  skip check for new versions
```

### relayer

The relayer command suite provides a collection of tools for deploying and configuring ICM relayers.

**Usage:**
```bash
avalanche interchain relayer [subcommand] [flags]
```

**Subcommands:**

- [`deploy`](#avalanche-interchain-relayer-deploy): Deploys an ICM Relayer for the given Network.
- [`logs`](#avalanche-interchain-relayer-logs): Shows pretty formatted AWM relayer logs.
- [`start`](#avalanche-interchain-relayer-start): Starts the AWM relayer on the specified network (currently only for local networks).
- [`stop`](#avalanche-interchain-relayer-stop): Stops the AWM relayer on the specified network (currently only for local networks and clusters).

**Flags:**
```bash
-h, --help  help for relayer
--config string  config file (default is $HOME/.avalanche-cli/config.json)
--log-level string  log level for the application (default "ERROR")
--skip-update-check  skip check for new versions
```

#### relayer deploy

Deploys an ICM Relayer for the given Network.

**Usage:**
```bash
avalanche interchain relayer deploy [subcommand] [flags]
```

**Flags:**
```bash
--allow-private-ips  allow relayer to connect to private IPs (default true)
--amount float  automatically fund l1s fee payments with the given amount
--bin-path string  use the given relayer binary
--blockchain-funding-key string  key to be used to fund relayer account on all l1s
--blockchains strings  blockchains to relay as source and destination
--cchain  relay C-Chain as source and destination
--cchain-amount float  automatically fund cchain fee payments with the given amount
--cchain-funding-key string  key to be used to fund relayer account on cchain
--cluster string  operate on the given cluster
--devnet  operate on a devnet network
--endpoint string  use the given endpoint for network operations
-f, --fuji testnet  operate on fuji (alias to testnet)
-h, --help  help for deploy
--key string  key to be used by default both for rewards and to pay fees
-l, --local  operate on a local network
--log-level string  log level to use for relayer logs
-t, --testnet fuji  operate on testnet (alias to fuji)
--version string  version to deploy (default "latest-prerelease")
--config string  config file (default is $HOME/.avalanche-cli/config.json)
--skip-update-check  skip check for new versions
```

#### relayer logs

Shows pretty formatted AWM relayer logs.

**Usage:**
```bash
avalanche interchain relayer logs [subcommand] [flags]
```

**Flags:**
```bash
--endpoint string  use the given endpoint for network operations
--first uint  output first N log lines
-f, --fuji testnet  operate on fuji (alias to testnet)
-h, --help  help for logs
--last uint  output last N log lines
-l, --local  operate on a local network
--raw  raw logs output
-t, --testnet fuji  operate on testnet (alias to fuji)
--config string  config file (default is $HOME/.avalanche-cli/config.json)
--log-level string  log level for the application (default "ERROR")
--skip-update-check  skip check for new versions
```

#### relayer start

Starts the AWM relayer on the specified network (currently only for local networks).

**Usage:**
```bash
avalanche interchain relayer start [subcommand] [flags]
```

**Flags:**
```bash
--bin-path string  use the given relayer binary
--cluster string  operate on the given cluster
--endpoint string  use the given endpoint for network operations
-f, --fuji testnet  operate on fuji (alias to testnet)
-h, --help  help for start
-l, --local  operate on a local network
-t, --testnet fuji  operate on testnet (alias to fuji)
--version string  version to use (default "latest-prerelease")
--config string  config file (default is $HOME/.avalanche-cli/config.json)
--log-level string  log level for the application (default "ERROR")
--skip-update-check  skip check for new versions
```

#### relayer stop

Stops the AWM relayer on the specified network (currently only for local networks and clusters).
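Since start and stop currently target local networks, a typical restart might look like this sketch:

```bash
# Hypothetical example: restart the AWM relayer on a local network.
avalanche interchain relayer stop --local
avalanche interchain relayer start --local
```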
**Usage:** ```bash avalanche interchain relayer stop [subcommand] [flags] ``` **Flags:** ```bash --cluster string operate on the given cluster --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet -h, --help help for stop -l, --local operate on a local network -t, --testnet fuji operate on testnet (alias to fuji) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### tokenTransferrer The tokenTransfer command suite provides tools to deploy and manage Token Transferrers. **Usage:** ```bash avalanche interchain tokenTransferrer [subcommand] [flags] ``` **Subcommands:** - [`deploy`](#avalanche-interchain-tokentransferrer-deploy): Deploys a Token Transferrer into a given Network and Subnets **Flags:** ```bash -h, --help help for tokenTransferrer --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### tokenTransferrer deploy Deploys a Token Transferrer into a given Network and Subnets **Usage:** ```bash avalanche interchain tokenTransferrer deploy [subcommand] [flags] ``` **Flags:** ```bash --c-chain-home set the Transferrer's Home Chain into C-Chain --c-chain-remote set the Transferrer's Remote Chain into C-Chain --cluster string operate on the given cluster --deploy-erc20-home string deploy a Transferrer Home for the given Chain's ERC20 Token --deploy-native-home deploy a Transferrer Home for the Chain's Native Token --deploy-native-remote deploy a Transferrer Remote for the Chain's Native Token --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet -h, --help help for deploy --home-blockchain string set the Transferrer's Home 
Chain into the given CLI blockchain --home-genesis-key use genesis allocated key to deploy Transferrer Home --home-key string CLI stored key to use to deploy Transferrer Home --home-private-key string private key to use to deploy Transferrer Home --home-rpc string use the given RPC URL to connect to the home blockchain -l, --local operate on a local network -m, --mainnet operate on mainnet --remote-blockchain string set the Transferrer's Remote Chain into the given CLI blockchain --remote-genesis-key use genesis allocated key to deploy Transferrer Remote --remote-key string CLI stored key to use to deploy Transferrer Remote --remote-private-key string private key to use to deploy Transferrer Remote --remote-rpc string use the given RPC URL to connect to the remote blockchain --remote-token-decimals uint8 use the given number of token decimals for the Transferrer Remote [defaults to token home's decimals (18 for a new wrapped native home token)] --remove-minter-admin remove the native minter precompile admin found on remote blockchain genesis -t, --testnet fuji operate on testnet (alias to fuji) --use-home string use the given Transferrer's Home Address --version string tag/branch/commit of Avalanche Interchain Token Transfer (ICTT) to be used (defaults to main branch) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ## avalanche key The key command suite provides a collection of tools for creating and managing signing keys. You can use these keys to deploy Subnets to the Fuji Testnet, but these keys are NOT suitable to use in production environments. DO NOT use these keys on Mainnet. To get started, use the key create command. 
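For example, a minimal first run might look like this sketch (the key names and file path are hypothetical; the command descriptions in this suite suggest the keyName is passed as an argument):

```shell
# Generate a new test signing key stored under the name "mykey"
avalanche key create mykey

# Import an existing key from a file instead of generating one
avalanche key create mykey2 --file ./mykey2.pk
```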
**Usage:** ```bash avalanche key [subcommand] [flags] ``` **Subcommands:** - [`create`](#avalanche-key-create): The key create command generates a new private key to use for creating and controlling test Subnets. Keys generated by this command are NOT cryptographically secure enough to use in production environments. DO NOT use these keys on Mainnet. The command works by generating a secp256k1 key and storing it with the provided keyName. You can use this key in other commands by providing this keyName. If you'd like to import an existing key instead of generating one from scratch, provide the --file flag. - [`delete`](#avalanche-key-delete): The key delete command deletes an existing signing key. To delete a key, provide the keyName. The command prompts for confirmation before deleting the key. To skip the confirmation, provide the --force flag. - [`export`](#avalanche-key-export): The key export command exports a created signing key. You can use an exported key in other applications or import it into another instance of Avalanche-CLI. By default, the tool writes the hex encoded key to stdout. If you provide the --output flag, the command writes the key to a file of your choosing. - [`list`](#avalanche-key-list): The key list command prints information for all stored signing keys or for the ledger addresses associated with certain indices. - [`transfer`](#avalanche-key-transfer): The key transfer command allows you to transfer funds between stored keys or ledger addresses. **Flags:** ```bash -h, --help help for key --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### create The key create command generates a new private key to use for creating and controlling test Subnets. Keys generated by this command are NOT cryptographically secure enough to use in production environments. DO NOT use these keys on Mainnet.
The command works by generating a secp256k1 key and storing it with the provided keyName. You can use this key in other commands by providing this keyName. If you'd like to import an existing key instead of generating one from scratch, provide the --file flag. **Usage:** ```bash avalanche key create [subcommand] [flags] ``` **Flags:** ```bash --file string import the key from an existing key file -f, --force overwrite an existing key with the same name -h, --help help for create --skip-balances do not query public network balances for an imported key --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### delete The key delete command deletes an existing signing key. To delete a key, provide the keyName. The command prompts for confirmation before deleting the key. To skip the confirmation, provide the --force flag. **Usage:** ```bash avalanche key delete [subcommand] [flags] ``` **Flags:** ```bash -f, --force delete the key without confirmation -h, --help help for delete --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### export The key export command exports a created signing key. You can use an exported key in other applications or import it into another instance of Avalanche-CLI. By default, the tool writes the hex encoded key to stdout. If you provide the --output flag, the command writes the key to a file of your choosing.
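For instance (the key name and output path here are hypothetical):

```shell
# Print the hex-encoded private key for "mykey" to stdout
avalanche key export mykey

# Write the key to a file instead of stdout
avalanche key export mykey -o ./mykey.pk
```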
**Usage:** ```bash avalanche key export [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for export -o, --output string write the key to the provided file path --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### list The key list command prints information for all stored signing keys or for the ledger addresses associated with certain indices. **Usage:** ```bash avalanche key list [subcommand] [flags] ``` **Flags:** ```bash -a, --all-networks list all network addresses --blockchains strings blockchains to show information about (p=p-chain, x=x-chain, c=c-chain, and blockchain names) (default p,x,c) -c, --cchain list C-Chain addresses (default true) --cluster string operate on the given cluster --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet) -h, --help help for list --keys strings list addresses for the given keys -g, --ledger uints list ledger addresses for the given indices (default []) -l, --local operate on a local network -m, --mainnet operate on mainnet --pchain list P-Chain addresses (default true) --subnets strings subnets to show information about (p=p-chain, x=x-chain, c=c-chain, and blockchain names) (default p,x,c) -t, --testnet fuji operate on testnet (alias to fuji) --tokens strings provide balance information for the given token contract addresses (Evm only) (default [Native]) --use-gwei use gwei for EVM balances -n, --use-nano-avax use nano Avax for balances --xchain list X-Chain addresses (default true) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### transfer The key transfer command allows you to transfer funds between stored keys or ledger addresses.
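As a sketch (the key names are hypothetical; the flags are the ones documented for this command — here funds move between two stored keys on a local network's P-Chain):

```shell
# Transfer 1 AVAX between two stored keys on a local network's P-Chain
avalanche key transfer --local \
  --p-chain-sender --p-chain-receiver \
  --key mykey --destination-key otherkey \
  --amount 1
```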
**Usage:** ```bash avalanche key transfer [subcommand] [flags] ``` **Flags:** ```bash -o, --amount float amount to send or receive (AVAX or TOKEN units) --c-chain-receiver receive at C-Chain --c-chain-sender send from C-Chain --cluster string operate on the given cluster -a, --destination-addr string destination address --destination-key string key associated to a destination address --destination-subnet string subnet where the funds will be sent (token transferrer experimental) --destination-transferrer-address string token transferrer address at the destination subnet (token transferrer experimental) --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet -h, --help help for transfer -k, --key string key associated to the sender or receiver address -i, --ledger uint32 ledger index associated to the sender or receiver address (default 32768) -l, --local operate on a local network -m, --mainnet operate on mainnet --origin-subnet string subnet where the funds belong (token transferrer experimental) --origin-transferrer-address string token transferrer address at the origin subnet (token transferrer experimental) --p-chain-receiver receive at P-Chain --p-chain-sender send from P-Chain --receiver-blockchain string receive at the given CLI blockchain --receiver-blockchain-id string receive at the given blockchain ID/Alias --sender-blockchain string send from the given CLI blockchain --sender-blockchain-id string send from the given blockchain ID/Alias -t, --testnet fuji operate on testnet (alias to fuji) --x-chain-receiver receive at X-Chain --x-chain-sender send from X-Chain --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ## avalanche network The network command suite provides a collection of tools for managing local Blockchain 
deployments. When you deploy a Blockchain locally, it runs on a local, multi-node Avalanche network. The blockchain deploy command starts this network in the background. This command suite allows you to shut down, restart, and clear that network. This network currently supports multiple, concurrently deployed Blockchains. **Usage:** ```bash avalanche network [subcommand] [flags] ``` **Subcommands:** - [`clean`](#avalanche-network-clean): The network clean command shuts down your local, multi-node network. All deployed Subnets shut down and delete their state. You can restart the network by deploying a new Subnet configuration. - [`start`](#avalanche-network-start): The network start command starts a local, multi-node Avalanche network on your machine. By default, the command loads the default snapshot. If you provide the --snapshot-name flag, the network loads that snapshot instead. The command fails if the local network is already running. - [`status`](#avalanche-network-status): The network status command prints whether or not a local Avalanche network is running and some basic stats about the network. - [`stop`](#avalanche-network-stop): The network stop command shuts down your local, multi-node network. All deployed Subnets shut down gracefully and save their state. If you provide the --snapshot-name flag, the network saves its state under this named snapshot. You can reload this snapshot with network start --snapshot-name `snapshotName`. Otherwise, the network saves to the default snapshot, overwriting any existing state. You can reload the default snapshot with network start. **Flags:** ```bash -h, --help help for network --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### clean The network clean command shuts down your local, multi-node network. All deployed Subnets shut down and delete their state.
You can restart the network by deploying a new Subnet configuration. **Usage:** ```bash avalanche network clean [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for clean --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### start The network start command starts a local, multi-node Avalanche network on your machine. By default, the command loads the default snapshot. If you provide the --snapshot-name flag, the network loads that snapshot instead. The command fails if the local network is already running. **Usage:** ```bash avalanche network start [subcommand] [flags] ``` **Flags:** ```bash --avalanchego-path string use this avalanchego binary path --avalanchego-version string use this version of avalanchego (ex: v1.17.12) (default "latest-prerelease") -h, --help help for start --num-nodes uint32 number of nodes to be created on local network (default 2) --relayer-path string use this relayer binary path --relayer-version string use this relayer version (default "latest-prerelease") --snapshot-name string name of snapshot to use to start the network from (default "default") --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### status The network status command prints whether or not a local Avalanche network is running and some basic stats about the network. **Usage:** ```bash avalanche network status [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for status --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### stop The network stop command shuts down your local, multi-node network. 
All deployed Subnets shut down gracefully and save their state. If you provide the --snapshot-name flag, the network saves its state under this named snapshot. You can reload this snapshot with network start --snapshot-name `snapshotName`. Otherwise, the network saves to the default snapshot, overwriting any existing state. You can reload the default snapshot with network start. **Usage:** ```bash avalanche network stop [subcommand] [flags] ``` **Flags:** ```bash --dont-save do not save snapshot, just stop the network -h, --help help for stop --snapshot-name string name of snapshot to use to save network state into (default "default") --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ## avalanche node The node command suite provides a collection of tools for creating and maintaining validators on the Avalanche Network. To get started, use the node create command wizard to walk through the configuration to make your node a primary validator on the Avalanche public network. You can use the rest of the commands to maintain your node and make your node a Subnet Validator. **Usage:** ```bash avalanche node [subcommand] [flags] ``` **Subcommands:** - [`addDashboard`](#avalanche-node-adddashboard): (ALPHA Warning) This command is currently in experimental mode. The node addDashboard command adds a custom dashboard to the Grafana monitoring dashboard for the cluster. - [`create`](#avalanche-node-create): (ALPHA Warning) This command is currently in experimental mode. The node create command sets up a validator on a cloud server of your choice. The validator will be validating the Avalanche Primary Network and Subnet of your choice. By default, the command runs an interactive wizard. It walks you through all the steps you need to set up a validator.
Once this command is completed, you will have to wait for the validator to finish bootstrapping on the primary network before running further commands on it, e.g. validating a Subnet. You can check the bootstrapping status by running avalanche node status. The created node will be part of a group of validators called `clusterName`, and users can call node commands with `clusterName` so that the command will apply to all nodes in the cluster. - [`destroy`](#avalanche-node-destroy): (ALPHA Warning) This command is currently in experimental mode. The node destroy command terminates all running nodes in the cloud server and deletes all storage disks. If there is a static IP address attached, it will be released. - [`devnet`](#avalanche-node-devnet): (ALPHA Warning) This command is currently in experimental mode. The node devnet command suite provides a collection of commands related to devnets. You can check the updated status by calling avalanche node status `clusterName` - [`export`](#avalanche-node-export): (ALPHA Warning) This command is currently in experimental mode. The node export command exports cluster configuration and its nodes config to a text file. If no file is specified, the configuration is printed to stdout. Use --include-secrets to include keys in the export. In this case, please keep the file secure as it contains sensitive information. Exported cluster configuration without secrets can be imported by another user using the node import command. - [`import`](#avalanche-node-import): (ALPHA Warning) This command is currently in experimental mode. The node import command imports cluster configuration and its nodes configuration from a text file created by the node export command. Prior to calling this command, call the node whitelist command to have your SSH public key and IP whitelisted by the cluster owner. This will enable you to use avalanche-cli commands to manage the imported cluster.
Please note that this imported cluster will be considered EXTERNAL by avalanche-cli, so some commands affecting cloud nodes, like node create or node destroy, will not be applicable to it. - [`list`](#avalanche-node-list): (ALPHA Warning) This command is currently in experimental mode. The node list command lists all clusters together with their nodes. - [`loadtest`](#avalanche-node-loadtest): (ALPHA Warning) This command is currently in experimental mode. The node loadtest command suite starts and stops a load test for an existing devnet cluster. - [`local`](#avalanche-node-local): The node local command suite provides a collection of commands related to local nodes - [`refresh-ips`](#avalanche-node-refresh-ips): (ALPHA Warning) This command is currently in experimental mode. The node refresh-ips command obtains the current IP for all nodes with dynamic IPs in the cluster, and updates the local node information used by CLI commands. - [`resize`](#avalanche-node-resize): (ALPHA Warning) This command is currently in experimental mode. The node resize command can change the amount of CPU, memory and disk space available for the cluster nodes. - [`scp`](#avalanche-node-scp): (ALPHA Warning) This command is currently in experimental mode. The node scp command securely copies files to and from nodes. A remote source or destination can be specified using the following format: [clusterName|nodeID|instanceID|IP]:/path/to/file. Regular expressions are supported for source files, like /tmp/*.txt. File transfers to the nodes are parallelized. If the source or destination is a cluster, the other should be a local file path. If both destinations are remote, they must be nodes for the same cluster and not clusters themselves.
For example: $ avalanche node scp [cluster1|node1]:/tmp/file.txt /tmp/file.txt $ avalanche node scp /tmp/file.txt [cluster1|NodeID-XXXX]:/tmp/file.txt $ avalanche node scp node1:/tmp/file.txt NodeID-XXXX:/tmp/file.txt - [`ssh`](#avalanche-node-ssh): (ALPHA Warning) This command is currently in experimental mode. The node ssh command executes a given command [cmd] using ssh on all nodes in the cluster if ClusterName is given. If no command is given, it just prints the ssh command to be used to connect to each node in the cluster. For a provided NodeID or InstanceID or IP, the command [cmd] will be executed on that node. If no [cmd] is provided for the node, it will open an ssh shell there. - [`status`](#avalanche-node-status): (ALPHA Warning) This command is currently in experimental mode. The node status command gets the bootstrap status of all nodes in a cluster with the Primary Network. If no cluster is given, defaults to node list behaviour. To get the bootstrap status of a node with a Blockchain, use the --blockchain flag - [`sync`](#avalanche-node-sync): (ALPHA Warning) This command is currently in experimental mode. The node sync command enables all nodes in a cluster to be bootstrapped to a Blockchain. You can check the blockchain bootstrap status by calling avalanche node status `clusterName` --blockchain `blockchainName` - [`update`](#avalanche-node-update): (ALPHA Warning) This command is currently in experimental mode. The node update command suite provides a collection of commands for nodes to update their avalanchego or VM config. You can check the status after update by calling avalanche node status - [`upgrade`](#avalanche-node-upgrade): (ALPHA Warning) This command is currently in experimental mode. The node upgrade command suite provides a collection of commands for nodes to update their avalanchego or VM version.
You can check the status after upgrade by calling avalanche node status - [`validate`](#avalanche-node-validate): (ALPHA Warning) This command is currently in experimental mode. The node validate command suite provides a collection of commands for nodes to join the Primary Network and Subnets as validators. If any of the commands is run before the nodes are bootstrapped on the Primary Network, the command will fail. You can check the bootstrap status by calling avalanche node status `clusterName` - [`whitelist`](#avalanche-node-whitelist): (ALPHA Warning) The whitelist command suite provides a collection of tools for granting access to the cluster. If the --ip param is provided, the command adds the IP to the cloud security access rules, allowing it to access all nodes in the cluster via ssh or http. If the --ssh param is provided, the command also adds the SSH public key to all nodes in the cluster. If no params are provided, it detects the current user IP automatically and whitelists it. **Flags:** ```bash -h, --help help for node --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### addDashboard (ALPHA Warning) This command is currently in experimental mode. The node addDashboard command adds a custom dashboard to the Grafana monitoring dashboard for the cluster. **Usage:** ```bash avalanche node addDashboard [subcommand] [flags] ``` **Flags:** ```bash --add-grafana-dashboard string path to additional grafana dashboard json file -h, --help help for addDashboard --subnet string subnet that the dashboard is intended for (if any) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### create (ALPHA Warning) This command is currently in experimental mode. The node create command sets up a validator on a cloud server of your choice.
The validator will be validating the Avalanche Primary Network and Subnet of your choice. By default, the command runs an interactive wizard. It walks you through all the steps you need to set up a validator. Once this command is completed, you will have to wait for the validator to finish bootstrapping on the primary network before running further commands on it, e.g. validating a Subnet. You can check the bootstrapping status by running avalanche node status The created node will be part of group of validators called `clusterName` and users can call node commands with `clusterName` so that the command will apply to all nodes in the cluster **Usage:** ```bash avalanche node create [subcommand] [flags] ``` **Flags:** ```bash --add-grafana-dashboard string path to additional grafana dashboard json file --alternative-key-pair-name string key pair name to use if default one generates conflicts --authorize-access authorize CLI to create cloud resources --auto-replace-keypair automatically replaces key pair to access node if previous key pair is not found --avalanchego-version-from-subnet string install latest avalanchego version, that is compatible with the given subnet, on node/s --aws create node/s in AWS cloud --aws-profile string aws profile to use (default "default") --aws-volume-iops int AWS iops (for gp3, io1, and io2 volume types only) (default 3000) --aws-volume-size int AWS volume size in GB (default 1000) --aws-volume-throughput int AWS throughput in MiB/s (for gp3 volume type only) (default 125) --aws-volume-type string AWS volume type (default "gp3") --bootstrap-ids stringArray nodeIDs of bootstrap nodes --bootstrap-ips stringArray IP:port pairs of bootstrap nodes --cluster string operate on the given cluster --custom-avalanchego-version string install given avalanchego version on node/s --devnet operate on a devnet network --enable-monitoring set up Prometheus monitoring for created nodes. 
This option creates a separate monitoring cloud instance and incurs additional cost --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet) --gcp create node/s in GCP cloud --gcp-credentials string use given GCP credentials --gcp-project string use given GCP project --genesis string path to genesis file --grafana-pkg string use grafana pkg instead of apt repo (by default), for example https://dl.grafana.com/oss/release/grafana_10.4.1_amd64.deb -h, --help help for create --latest-avalanchego-pre-release-version install latest avalanchego pre-release version on node/s --latest-avalanchego-version install latest avalanchego release version on node/s -m, --mainnet operate on mainnet --node-type string cloud instance type. Use 'default' to use recommended default instance type --num-apis ints number of API nodes (nodes without stake) to create in the new Devnet --num-validators ints number of nodes to create per region(s). Use comma to separate multiple numbers for each region in the same order as --region flag --partial-sync primary network partial sync (default true) --public-http-port allow public access to avalanchego HTTP port --region strings create node(s) in given region(s). Use comma to separate multiple regions --ssh-agent-identity string use given ssh identity (only for ssh agent). If not set, default will be used -t, --testnet fuji operate on testnet (alias to fuji) --upgrade string path to upgrade file --use-ssh-agent use ssh agent (ex: Yubikey) for ssh auth --use-static-ip attach static Public IP on cloud servers (default true) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### destroy (ALPHA Warning) This command is currently in experimental mode. The node destroy command terminates all running nodes in the cloud server and deletes all storage disks.
If there is a static IP address attached, it will be released. **Usage:** ```bash avalanche node destroy [subcommand] [flags] ``` **Flags:** ```bash --all destroy all existing clusters created by Avalanche CLI --authorize-access authorize CLI to release cloud resources -y, --authorize-all authorize all CLI requests --authorize-remove authorize CLI to remove all local files related to cloud nodes --aws-profile string aws profile to use (default "default") -h, --help help for destroy --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### devnet (ALPHA Warning) This command is currently in experimental mode. The node devnet command suite provides a collection of commands related to devnets. You can check the updated status by calling avalanche node status `clusterName` **Usage:** ```bash avalanche node devnet [subcommand] [flags] ``` **Subcommands:** - [`deploy`](#avalanche-node-devnet-deploy): (ALPHA Warning) This command is currently in experimental mode. The node devnet deploy command deploys a subnet into a devnet cluster, creating subnet and blockchain txs for it. It saves the deploy info both locally and remotely. - [`wiz`](#avalanche-node-devnet-wiz): (ALPHA Warning) This command is currently in experimental mode. The node wiz command creates a devnet and deploys, syncs, and validates a subnet into it. It creates the subnet if needed. **Flags:** ```bash -h, --help help for devnet --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### devnet deploy (ALPHA Warning) This command is currently in experimental mode. The node devnet deploy command deploys a subnet into a devnet cluster, creating subnet and blockchain txs for it. It saves the deploy info both locally and remotely.
**Usage:** ```bash avalanche node devnet deploy [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for deploy --no-checks do not check for healthy status or rpc compatibility of nodes against subnet --subnet-aliases strings additional subnet aliases to be used for RPC calls in addition to subnet blockchain name --subnet-only only create a subnet --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### devnet wiz (ALPHA Warning) This command is currently in experimental mode. The node wiz command creates a devnet and deploys, sync and validate a subnet into it. It creates the subnet if so needed. **Usage:** ```bash avalanche node devnet wiz [subcommand] [flags] ``` **Flags:** ```bash --add-grafana-dashboard string path to additional grafana dashboard json file --alternative-key-pair-name string key pair name to use if default one generates conflicts --authorize-access authorize CLI to create cloud resources --auto-replace-keypair automatically replaces key pair to access node if previous key pair is not found --aws create node/s in AWS cloud --aws-profile string aws profile to use (default "default") --aws-volume-iops int AWS iops (for gp3, io1, and io2 volume types only) (default 3000) --aws-volume-size int AWS volume size in GB (default 1000) --aws-volume-throughput int AWS throughput in MiB/s (for gp3 volume type only) (default 125) --aws-volume-type string AWS volume type (default "gp3") --chain-config string path to the chain configuration for subnet --custom-avalanchego-version string install given avalanchego version on node/s --custom-subnet use a custom VM as the subnet virtual machine --custom-vm-branch string custom vm branch or commit --custom-vm-build-script string custom vm build-script --custom-vm-repo-url string custom vm repository url --default-validator-params use default weight/start/duration params 
for subnet validator --deploy-icm-messenger deploy Interchain Messenger (default true) --deploy-icm-registry deploy Interchain Registry (default true) --deploy-teleporter-messenger deploy Interchain Messenger (default true) --deploy-teleporter-registry deploy Interchain Registry (default true) --enable-monitoring set up Prometheus monitoring for created nodes. Please note that this option creates a separate monitoring instance and incurs additional cost --evm-chain-id uint chain ID to use with Subnet-EVM --evm-defaults use default production settings with Subnet-EVM --evm-production-defaults use default production settings for your blockchain --evm-subnet use Subnet-EVM as the subnet virtual machine --evm-test-defaults use default test settings for your blockchain --evm-token string token name to use with Subnet-EVM --evm-version string version of Subnet-EVM to use --force-subnet-create overwrite the existing subnet configuration if one exists --gcp create node/s in GCP cloud --gcp-credentials string use given GCP credentials --gcp-project string use given GCP project --grafana-pkg string use grafana pkg instead of apt repo (by default), for example https://dl.grafana.com/oss/release/grafana_10.4.1_amd64.deb -h, --help help for wiz --icm generate an icm-ready vm --icm-messenger-contract-address-path string path to an icm messenger contract address file --icm-messenger-deployer-address-path string path to an icm messenger deployer address file --icm-messenger-deployer-tx-path string path to an icm messenger deployer tx file --icm-registry-bytecode-path string path to an icm registry bytecode file --icm-version string icm version to deploy (default "latest") --latest-avalanchego-pre-release-version install latest avalanchego pre-release version on node/s --latest-avalanchego-version install latest avalanchego release version on node/s --latest-evm-version use latest Subnet-EVM released version --latest-pre-released-evm-version use latest Subnet-EVM pre-released
version --node-config string path to avalanchego node configuration for subnet --node-type string cloud instance type. Use 'default' to use recommended default instance type --num-apis ints number of API nodes(nodes without stake) to create in the new Devnet --num-validators ints number of nodes to create per region(s). Use comma to separate multiple numbers for each region in the same order as --region flag --public-http-port allow public access to avalanchego HTTP port --region strings create node/s in given region(s). Use comma to separate multiple regions --relayer run AWM relayer when deploying the vm --ssh-agent-identity string use given ssh identity(only for ssh agent). If not set, default will be used. --subnet-aliases strings additional subnet aliases to be used for RPC calls in addition to subnet blockchain name --subnet-config string path to the subnet configuration for subnet --subnet-genesis string file path of the subnet genesis --teleporter generate an icm-ready vm --teleporter-messenger-contract-address-path string path to an icm messenger contract address file --teleporter-messenger-deployer-address-path string path to an icm messenger deployer address file --teleporter-messenger-deployer-tx-path string path to an icm messenger deployer tx file --teleporter-registry-bytecode-path string path to an icm registry bytecode file --teleporter-version string icm version to deploy (default "latest") --use-ssh-agent use ssh agent for ssh --use-static-ip attach static Public IP on cloud servers (default true) --validators strings deploy subnet into given comma separated list of validators. defaults to all cluster nodes --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### export (ALPHA Warning) This command is currently in experimental mode. 
The node export command exports cluster configuration and its nodes config to a text file. If no file is specified, the configuration is printed to stdout. Use --include-secrets to include keys in the export. In this case, please keep the file secure, as it contains sensitive information. Exported cluster configuration without secrets can be imported by another user using the node import command. **Usage:** ```bash avalanche node export [subcommand] [flags] ``` **Flags:** ```bash --file string specify the file to export the cluster configuration to --force overwrite the file if it exists -h, --help help for export --include-secrets include keys in the export --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### import (ALPHA Warning) This command is currently in experimental mode. The node import command imports cluster configuration and its nodes configuration from a text file created by the node export command. Prior to calling this command, call the node whitelist command to have your SSH public key and IP whitelisted by the cluster owner. This will enable you to use avalanche-cli commands to manage the imported cluster. Please note that this imported cluster will be considered EXTERNAL by avalanche-cli, so some commands affecting cloud nodes, like node create or node destroy, will not be applicable to it. **Usage:** ```bash avalanche node import [subcommand] [flags] ``` **Flags:** ```bash --file string specify the file to import the cluster configuration from -h, --help help for import --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### list (ALPHA Warning) This command is currently in experimental mode. The node list command lists all clusters together with their nodes.
**Usage:** ```bash avalanche node list [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for list --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### loadtest (ALPHA Warning) This command is currently in experimental mode. The node loadtest command suite starts and stops a load test for an existing devnet cluster. **Usage:** ```bash avalanche node loadtest [subcommand] [flags] ``` **Subcommands:** - [`start`](#avalanche-node-loadtest-start): (ALPHA Warning) This command is currently in experimental mode. The node loadtest command starts load testing for an existing devnet cluster. If the cluster does not have an existing load test host, the command creates a separate cloud server and builds the load test binary based on the provided load test Git Repo URL and load test binary build command. The command will then run the load test binary based on the provided load test run command. - [`stop`](#avalanche-node-loadtest-stop): (ALPHA Warning) This command is currently in experimental mode. The node loadtest stop command stops load testing for an existing devnet cluster and terminates the separate cloud server created to host the load test. **Flags:** ```bash -h, --help help for loadtest --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### loadtest start (ALPHA Warning) This command is currently in experimental mode. The node loadtest command starts load testing for an existing devnet cluster. If the cluster does not have an existing load test host, the command creates a separate cloud server and builds the load test binary based on the provided load test Git Repo URL and load test binary build command. 
The command will then run the load test binary based on the provided load test run command. **Usage:** ```bash avalanche node loadtest start [subcommand] [flags] ``` **Flags:** ```bash --authorize-access authorize CLI to create cloud resources --aws create loadtest node in AWS cloud --aws-profile string aws profile to use (default "default") --gcp create loadtest in GCP cloud -h, --help help for start --load-test-branch string load test branch or commit --load-test-build-cmd string command to build load test binary --load-test-cmd string command to run load test --load-test-repo string load test repo url to use --node-type string cloud instance type for loadtest script --region string create load test node in a given region --ssh-agent-identity string use given ssh identity(only for ssh agent). If not set, default will be used --use-ssh-agent use ssh agent(ex: Yubikey) for ssh auth --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### loadtest stop (ALPHA Warning) This command is currently in experimental mode. The node loadtest stop command stops load testing for an existing devnet cluster and terminates the separate cloud server created to host the load test. **Usage:** ```bash avalanche node loadtest stop [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for stop --load-test strings stop specified load test node(s). Use comma to separate multiple load test instance names --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### local The node local command suite provides a collection of commands related to local nodes **Usage:** ```bash avalanche node local [subcommand] [flags] ``` **Subcommands:** - [`destroy`](#avalanche-node-local-destroy): Cleanup local node. 
- [`start`](#avalanche-node-local-start): The node local start command creates Avalanche nodes on the local machine. Once this command completes, you will have to wait for the Avalanche node to finish bootstrapping on the primary network before running further commands on it, e.g. validating a Subnet. You can check the bootstrapping status by running avalanche node status local. - [`status`](#avalanche-node-local-status): Get status of local node. - [`stop`](#avalanche-node-local-stop): Stop local node. - [`track`](#avalanche-node-local-track): Track specified blockchain with local node - [`validate`](#avalanche-node-local-validate): Use an Avalanche node set up on the local machine to validate a specified L1 by providing the L1's RPC URL. This command can only be used to validate Proof of Stake L1s. **Flags:** ```bash -h, --help help for local --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### local destroy Cleanup local node. **Usage:** ```bash avalanche node local destroy [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for destroy --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### local start The node local start command creates Avalanche nodes on the local machine. Once this command completes, you will have to wait for the Avalanche node to finish bootstrapping on the primary network before running further commands on it, e.g. validating a Subnet. You can check the bootstrapping status by running avalanche node status local.
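For instance, a minimal sketch of this workflow, using only flags documented below (node count and version selection are illustrative choices, not required values):

```shell
# Sketch: start one local node running the latest avalanchego release
avalanche node local start --num-nodes 1 --latest-avalanchego-version

# Poll bootstrap progress before issuing further commands against the node
avalanche node local status
```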
**Usage:** ```bash avalanche node local start [subcommand] [flags] ``` **Flags:** ```bash --avalanchego-path string use this avalanchego binary path --bootstrap-id stringArray nodeIDs of bootstrap nodes --bootstrap-ip stringArray IP:port pairs of bootstrap nodes --cluster string operate on the given cluster --custom-avalanchego-version string install given avalanchego version on node/s --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet) --genesis string path to genesis file -h, --help help for start --latest-avalanchego-pre-release-version install latest avalanchego pre-release version on node/s (default true) --latest-avalanchego-version install latest avalanchego release version on node/s -l, --local operate on a local network -m, --mainnet operate on mainnet --node-config string path to common avalanchego config settings for all nodes --num-nodes uint32 number of Avalanche nodes to create on local machine (default 1) --partial-sync primary network partial sync (default true) --staking-cert-key-path string path to provided staking cert key for node --staking-signer-key-path string path to provided staking signer key for node --staking-tls-key-path string path to provided staking tls key for node -t, --testnet fuji operate on testnet (alias to fuji) --upgrade string path to upgrade file --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### local status Get status of local node.
**Usage:** ```bash avalanche node local status [subcommand] [flags] ``` **Flags:** ```bash --blockchain string specify the blockchain the node is syncing with -h, --help help for status --l1 string specify the blockchain the node is syncing with --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### local stop Stop local node. **Usage:** ```bash avalanche node local stop [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for stop --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### local track Track specified blockchain with local node **Usage:** ```bash avalanche node local track [subcommand] [flags] ``` **Flags:** ```bash --avalanchego-path string use this avalanchego binary path --custom-avalanchego-version string install given avalanchego version on node/s -h, --help help for track --latest-avalanchego-pre-release-version install latest avalanchego pre-release version on node/s (default true) --latest-avalanchego-version install latest avalanchego release version on node/s --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### local validate Use an Avalanche node set up on the local machine to validate a specified L1 by providing the L1's RPC URL. This command can only be used to validate Proof of Stake L1s.
**Usage:** ```bash avalanche node local validate [subcommand] [flags] ``` **Flags:** ```bash --aggregator-log-level string log level to use with signature aggregator (default "Debug") --aggregator-log-to-stdout use stdout for signature aggregator logs --balance float amount of AVAX to increase validator's balance by --blockchain string specify the blockchain the node is syncing with --delegation-fee uint16 delegation fee (in bips) (default 100) --disable-owner string P-Chain address that will be able to disable the validator with a P-Chain transaction -h, --help help for validate --l1 string specify the blockchain the node is syncing with --minimum-stake-duration uint minimum stake duration (in seconds) (default 100) --remaining-balance-owner string P-Chain address that will receive any leftover AVAX from the validator when it is removed from the Subnet --rpc string connect to validator manager at the given rpc endpoint --stake-amount uint amount of tokens to stake --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### refresh-ips (ALPHA Warning) This command is currently in experimental mode. The node refresh-ips command obtains the current IP for all nodes with dynamic IPs in the cluster, and updates the local node information used by CLI commands. **Usage:** ```bash avalanche node refresh-ips [subcommand] [flags] ``` **Flags:** ```bash --aws-profile string aws profile to use (default "default") -h, --help help for refresh-ips --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### resize (ALPHA Warning) This command is currently in experimental mode. The node resize command can change the amount of CPU, memory and disk space available for the cluster nodes.
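As an illustration, a hedged invocation sketch — the cluster name `myCluster` is a placeholder, and the flag values are examples drawn from the flag list below:

```shell
# Sketch: move each cluster node to a larger instance type with a 1000Gb disk
avalanche node resize myCluster --node-type t3.2xlarge --disk-size 1000Gb
```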
**Usage:** ```bash avalanche node resize [subcommand] [flags] ``` **Flags:** ```bash --aws-profile string aws profile to use (default "default") --disk-size string Disk size to resize in Gb (e.g. 1000Gb) -h, --help help for resize --node-type string Node type to resize (e.g. t3.2xlarge) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### scp (ALPHA Warning) This command is currently in experimental mode. The node scp command securely copies files to and from nodes. A remote source or destination can be specified using the following format: [clusterName|nodeID|instanceID|IP]:/path/to/file. Regular expressions are supported for the source files, like /tmp/*.txt. File transfers to the nodes are parallelized. If the source or destination is a cluster, the other should be a local file path. If both destinations are remote, they must be nodes of the same cluster and not clusters themselves. For example: $ avalanche node scp [cluster1|node1]:/tmp/file.txt /tmp/file.txt $ avalanche node scp /tmp/file.txt [cluster1|NodeID-XXXX]:/tmp/file.txt $ avalanche node scp node1:/tmp/file.txt NodeID-XXXX:/tmp/file.txt **Usage:** ```bash avalanche node scp [subcommand] [flags] ``` **Flags:** ```bash --compress use compression for ssh -h, --help help for scp --recursive copy directories recursively --with-loadtest include loadtest node for scp cluster operations --with-monitor include monitoring node for scp cluster operations --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### ssh (ALPHA Warning) This command is currently in experimental mode. The node ssh command executes a given command [cmd] using ssh on all nodes in the cluster if ClusterName is given.
If no command is given, it just prints the ssh command to be used to connect to each node in the cluster. For a provided NodeID, InstanceID, or IP, the command [cmd] will be executed on that node. If no [cmd] is provided for the node, it will open an ssh shell there. **Usage:** ```bash avalanche node ssh [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for ssh --parallel run ssh command on all nodes in parallel --with-loadtest include loadtest node for ssh cluster operations --with-monitor include monitoring node for ssh cluster operations --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### status (ALPHA Warning) This command is currently in experimental mode. The node status command gets the bootstrap status of all nodes in a cluster with the Primary Network. If no cluster is given, defaults to node list behaviour. To get the bootstrap status of a node with a Blockchain, use the --blockchain flag **Usage:** ```bash avalanche node status [subcommand] [flags] ``` **Flags:** ```bash --blockchain string specify the blockchain the node is syncing with -h, --help help for status --subnet string specify the blockchain the node is syncing with --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### sync (ALPHA Warning) This command is currently in experimental mode. The node sync command enables all nodes in a cluster to be bootstrapped to a Blockchain.
You can check the blockchain bootstrap status by calling avalanche node status `clusterName` --blockchain `blockchainName` **Usage:** ```bash avalanche node sync [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for sync --no-checks do not check for bootstrapped/healthy status or rpc compatibility of nodes against subnet --subnet-aliases strings subnet alias to be used for RPC calls. defaults to subnet blockchain ID --validators strings sync subnet into given comma separated list of validators. defaults to all cluster nodes --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### update (ALPHA Warning) This command is currently in experimental mode. The node update command suite provides a collection of commands for nodes to update their avalanchego or VM config. You can check the status after update by calling avalanche node status **Usage:** ```bash avalanche node update [subcommand] [flags] ``` **Subcommands:** - [`subnet`](#avalanche-node-update-subnet): (ALPHA Warning) This command is currently in experimental mode. The node update subnet command updates all nodes in a cluster with latest Subnet configuration and VM for custom VM. You can check the updated subnet bootstrap status by calling avalanche node status `clusterName` --subnet `subnetName` **Flags:** ```bash -h, --help help for update --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### update subnet (ALPHA Warning) This command is currently in experimental mode. The node update subnet command updates all nodes in a cluster with latest Subnet configuration and VM for custom VM. 
You can check the updated subnet bootstrap status by calling avalanche node status `clusterName` --subnet `subnetName` **Usage:** ```bash avalanche node update subnet [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for subnet --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### upgrade (ALPHA Warning) This command is currently in experimental mode. The node upgrade command suite provides a collection of commands for nodes to upgrade their avalanchego or VM version. You can check the status after upgrade by calling avalanche node status **Usage:** ```bash avalanche node upgrade [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for upgrade --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### validate (ALPHA Warning) This command is currently in experimental mode. The node validate command suite provides a collection of commands for nodes to join the Primary Network and Subnets as validators. If any of the commands is run before the nodes are bootstrapped on the Primary Network, the command will fail. You can check the bootstrap status by calling avalanche node status `clusterName` **Usage:** ```bash avalanche node validate [subcommand] [flags] ``` **Subcommands:** - [`primary`](#avalanche-node-validate-primary): (ALPHA Warning) This command is currently in experimental mode. The node validate primary command enables all nodes in a cluster to be validators of the Primary Network. - [`subnet`](#avalanche-node-validate-subnet): (ALPHA Warning) This command is currently in experimental mode. The node validate subnet command enables all nodes in a cluster to be validators of a Subnet.
If the command is run before the nodes are Primary Network validators, the command will first make the nodes Primary Network validators before making them Subnet validators. If the command is run before the nodes are bootstrapped on the Primary Network, the command will fail. You can check the bootstrap status by calling avalanche node status `clusterName` If the command is run before the nodes are synced to the subnet, the command will fail. You can check the subnet sync status by calling avalanche node status `clusterName` --subnet `subnetName` **Flags:** ```bash -h, --help help for validate --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### validate primary (ALPHA Warning) This command is currently in experimental mode. The node validate primary command enables all nodes in a cluster to be validators of the Primary Network. **Usage:** ```bash avalanche node validate primary [subcommand] [flags] ``` **Flags:** ```bash -e, --ewoq use ewoq key [fuji/devnet only] -h, --help help for primary -k, --key string select the key to use [fuji only] -g, --ledger use ledger instead of key (always true on mainnet, defaults to false on fuji/devnet) --ledger-addrs strings use the given ledger addresses --stake-amount uint how many AVAX to stake in the validator --staking-period duration how long validator validates for after start time --start-time string UTC start time when this validator starts validating, in 'YYYY-MM-DD HH:MM:SS' format --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### validate subnet (ALPHA Warning) This command is currently in experimental mode. The node validate subnet command enables all nodes in a cluster to be validators of a Subnet.
If the command is run before the nodes are Primary Network validators, the command will first make the nodes Primary Network validators before making them Subnet validators. If the command is run before the nodes are bootstrapped on the Primary Network, the command will fail. You can check the bootstrap status by calling avalanche node status `clusterName` If the command is run before the nodes are synced to the subnet, the command will fail. You can check the subnet sync status by calling avalanche node status `clusterName` --subnet `subnetName` **Usage:** ```bash avalanche node validate subnet [subcommand] [flags] ``` **Flags:** ```bash --default-validator-params use default weight/start/duration params for subnet validator -e, --ewoq use ewoq key [fuji/devnet only] -h, --help help for subnet -k, --key string select the key to use [fuji/devnet only] -g, --ledger use ledger instead of key (always true on mainnet, defaults to false on fuji/devnet) --ledger-addrs strings use the given ledger addresses --no-checks do not check for bootstrapped status or healthy status --no-validation-checks do not check if subnet is already synced or validated (default true) --stake-amount uint how many AVAX to stake in the validator --staking-period duration how long validator validates for after start time --start-time string UTC start time when this validator starts validating, in 'YYYY-MM-DD HH:MM:SS' format --validators strings validate subnet for the given comma separated list of validators. defaults to all cluster nodes --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### whitelist (ALPHA Warning) The whitelist command suite provides a collection of tools for granting access to the cluster. If the --ip parameter is provided, the command adds the IP to the cloud security access rules, allowing it to access all nodes in the cluster via ssh or http.
If the --ssh parameter is provided, it also adds the SSH public key to all nodes in the cluster. If no parameters are provided, it detects the current user's IP automatically and whitelists it. **Usage:** ```bash avalanche node whitelist [subcommand] [flags] ``` **Flags:** ```bash -y, --current-ip whitelist current host ip -h, --help help for whitelist --ip string ip address to whitelist --ssh string ssh public key to whitelist --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ## avalanche primary The primary command suite provides a collection of tools for interacting with the Primary Network **Usage:** ```bash avalanche primary [subcommand] [flags] ``` **Subcommands:** - [`addValidator`](#avalanche-primary-addvalidator): The primary addValidator command adds a node as a validator in the Primary Network - [`describe`](#avalanche-primary-describe): The primary describe command prints details of the primary network configuration to the console.
**Flags:** ```bash -h, --help help for primary --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### addValidator The primary addValidator command adds a node as a validator in the Primary Network **Usage:** ```bash avalanche primary addValidator [subcommand] [flags] ``` **Flags:** ```bash --cluster string operate on the given cluster --delegation-fee uint32 set the delegation fee (20 000 is equivalent to 2%) --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet) -h, --help help for addValidator -k, --key string select the key to use [fuji only] -g, --ledger use ledger instead of key (always true on mainnet, defaults to false on fuji) --ledger-addrs strings use the given ledger addresses -m, --mainnet operate on mainnet --nodeID string set the NodeID of the validator to add --proof-of-possession string set the BLS proof of possession of the validator to add --public-key string set the BLS public key of the validator to add --staking-period duration how long this validator will be staking --start-time string UTC start time when this validator starts validating, in 'YYYY-MM-DD HH:MM:SS' format -t, --testnet fuji operate on testnet (alias to fuji) --weight uint set the staking weight of the validator to add --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### describe The primary describe command prints details of the primary network configuration to the console.
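A minimal hedged sketch, assuming a local network is already running on this machine:

```shell
# Sketch: print the Primary Network configuration of the running local network
avalanche primary describe --local
```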
**Usage:** ```bash avalanche primary describe [subcommand] [flags] ``` **Flags:** ```bash --cluster string operate on the given cluster -h, --help help for describe -l, --local operate on a local network --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ## avalanche transaction The transaction command suite provides all of the utilities required to sign multisig transactions. **Usage:** ```bash avalanche transaction [subcommand] [flags] ``` **Subcommands:** - [`commit`](#avalanche-transaction-commit): The transaction commit command commits a transaction by submitting it to the P-Chain. - [`sign`](#avalanche-transaction-sign): The transaction sign command signs a multisig transaction. **Flags:** ```bash -h, --help help for transaction --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### commit The transaction commit command commits a transaction by submitting it to the P-Chain. **Usage:** ```bash avalanche transaction commit [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for commit --input-tx-filepath string Path to the transaction signed by all signatories --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### sign The transaction sign command signs a multisig transaction. 
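To illustrate the multisig flow, a hedged sketch of the sign-then-commit sequence — the file path and key name are placeholders, and the flags come from the sign and commit flag lists:

```shell
# Sketch: each signatory signs the pending transaction file in turn...
avalanche transaction sign --input-tx-filepath ./pending-tx.json --key mySigningKey

# ...and once all signatures are collected, the tx is submitted to the P-Chain
avalanche transaction commit --input-tx-filepath ./pending-tx.json
```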
**Usage:** ```bash avalanche transaction sign [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for sign --input-tx-filepath string Path to the transaction file for signing -k, --key string select the key to use [fuji only] -g, --ledger use ledger instead of key (always true on mainnet, defaults to false on fuji) --ledger-addrs strings use the given ledger addresses --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ## avalanche update Check if an update is available, and prompt the user to install it **Usage:** ```bash avalanche update [subcommand] [flags] ``` **Flags:** ```bash -c, --confirm Assume yes for installation -h, --help help for update -v, --version version for update --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ## avalanche validator The validator command suite provides a collection of tools for managing validator balance on the P-Chain. A validator's balance is used to pay the continuous fee to the P-Chain.
When this balance reaches 0, the validator will be considered inactive and will no longer participate in validating the L1. **Usage:** ```bash avalanche validator [subcommand] [flags] ``` **Subcommands:** - [`getBalance`](#avalanche-validator-getbalance): This command gets the remaining validator P-Chain balance that is available to pay the P-Chain continuous fee - [`increaseBalance`](#avalanche-validator-increasebalance): This command increases the validator P-Chain balance - [`list`](#avalanche-validator-list): This command gets a list of the validators of the L1 **Flags:** ```bash -h, --help help for validator --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### getBalance This command gets the remaining validator P-Chain balance that is available to pay the P-Chain continuous fee **Usage:** ```bash avalanche validator getBalance [subcommand] [flags] ``` **Flags:** ```bash --cluster string operate on the given cluster --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet) -h, --help help for getBalance --l1 string name of L1 -l, --local operate on a local network -m, --mainnet operate on mainnet --node-id string node ID of the validator -t, --testnet fuji operate on testnet (alias to fuji) --validation-id string validation ID of the validator --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### increaseBalance This command increases the validator P-Chain balance **Usage:** ```bash avalanche validator increaseBalance [subcommand] [flags] ``` **Flags:** ```bash --balance float amount of AVAX to increase validator's balance by --cluster string operate on the given cluster --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet) -h, --help help for increaseBalance -k, --key string select the key to use [fuji/devnet deploy only] --l1 string name of L1 (to increase balance of bootstrap validators only) -l, --local operate on a local network -m, --mainnet operate on mainnet --node-id string node ID of the validator -t, --testnet fuji operate on testnet (alias to fuji) --validation-id string validation ID of the validator --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### list This command gets a list of the validators of the L1 **Usage:** ```bash avalanche validator list [subcommand] [flags] ``` **Flags:** ```bash --cluster string operate on the given cluster --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet) -h, --help help for list -l, --local operate on a local network -m, --mainnet operate on mainnet -t, --testnet fuji operate on testnet (alias to fuji) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` # Deploy a Smart Contract (/docs/avalanche-l1s/add-utility/deploy-smart-contract) --- title: Deploy a Smart Contract description: Deploy a smart contract on your Avalanche L1. --- {/* EVM Version Warning - TEMPORARY Remove this section when Avalanche adds Pectra support (after SAE implementation) Last reviewed: December 2025 */} Avalanche C-Chain and Subnet-EVM currently support the **Cancun** EVM version and do not yet support newer hardforks like **Pectra**.
Since Solidity v0.8.30 changed its default EVM target to Pectra, you must explicitly configure your compiler to target `cancun`.

**In Remix:** Open the Solidity Compiler panel → expand Advanced Configurations → set EVM Version to **cancun**.

For **Hardhat** or **Foundry** configurations, see the [contract verification guide](/docs/primary-network/verify-contract/hardhat).

This tutorial assumes that:

- [An Avalanche L1 and EVM blockchain](/docs/tooling/avalanche-cli/create-deploy-avalanche-l1s/deploy-on-fuji-testnet) have been created.
- Your node is currently validating your target Avalanche L1.
- Your wallet has a balance of the Avalanche L1's native token (specified under _alloc_ in your [Genesis File](/docs/avalanche-l1s/evm-configuration/customize-avalanche-l1#genesis)).

Step 1: Setting up Core
-----------------------

### EVM Avalanche L1 Settings: [(EVM Core Tutorial)](/docs/tooling/avalanche-cli/create-deploy-avalanche-l1s/deploy-on-fuji-testnet#connect-with-core)

- **`Network Name`**: Custom Subnet-EVM
- **`New RPC URL`**: `http://NodeIPAddress:9650/ext/bc/BlockchainID/rpc` (Note: the port number should match your local setting, which can differ from 9650.)
- **`ChainID`**: Subnet-EVM ChainID
- **`Symbol`**: Subnet-EVM Token Symbol
- **`Explorer`**: N/A

You should see a balance of your Avalanche L1's native token in Core.

![balance](/images/smart-contract1.png)

Step 2: Connect Core and Deploy a Smart Contract
------------------------------------------------

### Using Remix

Open [Remix](https://remix.ethereum.org/) -> Select Solidity.
![remix Avalanche L1 evm sc home](/images/smart-contract2.png)

Create the smart contracts that you want to compile and deploy using the Remix file explorer.

### Using GitHub

In Remix Home, _click_ the GitHub button.

![remix Avalanche L1 evm sc load panel](/images/smart-contract3.png)

Paste the [link to the Smart Contract](https://github.com/ava-labs/avalanche-smart-contract-quickstart/blob/main/contracts/NFT.sol) into the popup and _click_ Import.

![remix Avalanche L1 evm sc import](/images/smart-contract4.png)

For this example, we will deploy an ERC721 contract from the [Avalanche Smart Contract Quickstart Repository](https://github.com/ava-labs/avalanche-smart-contract-quickstart).

![remix Avalanche L1 evm sc file explorer](/images/smart-contract5.png)

Navigate to the Deploy tab -> Open the "ENVIRONMENT" drop-down and select Injected Web3 (make sure Core is loaded).

![remix Avalanche L1 evm sc web3](/images/smart-contract6.png)

Once Web3 is injected, go back to the compiler, compile the selected contract, and navigate to the Deploy tab.

![remix Avalanche L1 evm sc compile](/images/smart-contract7.png)

Now the smart contract is compiled, Core is injected, and we are ready to deploy our ERC721. Click "Deploy."

![remix Avalanche L1 evm sc deploy](/images/smart-contract8.png)

Confirm the transaction in the Core pop-up.

![balance](/images/smart-contract9.png)

Our contract is successfully deployed!

![remix Avalanche L1 evm sc deployed](/images/smart-contract10.png)

Now we can expand it by selecting it from the "Deployed Contracts" tab and test it out.

![remix Avalanche L1 evm sc end](/images/smart-contract11.png)

The contract ABI and Bytecode are available on the compiler tab.

![remix Avalanche L1 evm sc ABI](/images/smart-contract12.png)

If you had any difficulties following this tutorial or simply want to discuss Avalanche with us, you can join our community on [Discord](https://chat.avalabs.org/)!
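When scripting against your chain instead of clicking through Core, the Step 1 RPC URL can be assembled programmatically. A minimal TypeScript sketch — the node address, port, and blockchain ID below are placeholders, not real values:

```typescript
// Build a Subnet-EVM RPC URL of the form used in Step 1:
//   http://NodeIPAddress:9650/ext/bc/BlockchainID/rpc
// All sample values here are placeholders for illustration.
function buildRpcUrl(nodeIp: string, blockchainId: string, port = 9650): string {
  return `http://${nodeIp}:${port}/ext/bc/${blockchainId}/rpc`;
}

// 203.0.113.5 is a documentation-only IP; "myBlockchainID" is not a real ID.
console.log(buildRpcUrl("203.0.113.5", "myBlockchainID"));
// → http://203.0.113.5:9650/ext/bc/myBlockchainID/rpc
```

The same URL (with your real node IP and blockchain ID) is what you paste into Core's `New RPC URL` field or into a deployment tool's network config.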
You can use Subnet-EVM just like you use the C-Chain and EVM tools. The only differences are the `chainID` and RPC URL. For example, you can deploy your contracts with [Hardhat](https://hardhat.org/getting-started) by changing `url` and `chainId` in `hardhat.config.ts`.

# Add a Testnet Faucet (/docs/avalanche-l1s/add-utility/testnet-faucet)

---
title: Add a Testnet Faucet
description: This guide will help you add a testnet faucet to your Avalanche L1.
---

There are thousands of networks and chains in the blockchain space, each with its own capabilities and use cases. Each network requires native coins to transact on it, and those coins can have monetary value. They can be acquired through centralized exchanges, token sales, etc., in exchange for monetary assets like USD. But we cannot risk our funds on a network, or on any application hosted on it, without testing first.

So, these networks often have test networks, or testnets, where the native coins have no monetary value and can therefore be obtained freely through faucets. These testnets are often the testbeds for any new native feature of the network itself, or for any dapp or [Avalanche L1](/docs/avalanche-l1s) that is going live on the main network (Mainnet). For example, the [Fuji](/docs/primary-network) network is the testnet for Avalanche's Mainnet.

Besides Fuji Testnet, the [Avalanche Faucet](https://core.app/tools/testnet-faucet/?avalanche-l1=c&token=c) can be used to get free test tokens on testnet Avalanche L1s like:

- [WAGMI Testnet](https://core.app/tools/testnet-faucet/?avalanche-l1=wagmi)
- [DeFi Kingdoms Testnet](https://core.app/tools/testnet-faucet/?avalanche-l1=dfk)
- [Beam Testnet](https://core.app/tools/testnet-faucet/?avalanche-l1=beam&token=beam)

and many more.
You can use this [repository](https://github.com/ava-labs/avalanche-faucet) to deploy your own faucet, or just make a PR with the [configurations](https://github.com/ava-labs/avalanche-faucet/blob/main/config.json) of your Avalanche L1. This faucet comes with many features, such as multiple chain support, custom rate limiting per Avalanche L1, CAPTCHA verification, and concurrent transaction handling.

Summary
-------

A [Faucet](https://core.app/tools/testnet-faucet/) powered by Avalanche for the Fuji Network and other Avalanche L1s. You can:

- Request test coins for the supported Avalanche L1s
- Integrate your EVM Avalanche L1 with the faucet by making a PR with the [chain configurations](https://github.com/ava-labs/avalanche-faucet/blob/main/config.json)
- Fork the [repository](https://github.com/ava-labs/avalanche-faucet) to deploy your own faucet for any EVM chain

Adding a New Avalanche L1
-------------------------

You can also integrate a new Avalanche L1 into the live [faucet](https://core.app/tools/testnet-faucet/) with just a few lines of configuration parameters. All you have to do is make a PR on the [Avalanche Faucet](https://github.com/ava-labs/avalanche-faucet) GitHub repository with the Avalanche L1's information. The following parameters are required:

```json
{
  "ID": string,
  "NAME": string,
  "TOKEN": string,
  "RPC": string,
  "CHAINID": number,
  "EXPLORER": string,
  "IMAGE": string,
  "MAX_PRIORITY_FEE": string,
  "MAX_FEE": string,
  "DRIP_AMOUNT": number,
  "RATELIMIT": {
    "MAX_LIMIT": number,
    "WINDOW_SIZE": number
  }
}
```

- `ID` - Each Avalanche L1 chain should have a unique and relatable ID.
- `NAME` - Name of the Avalanche L1 chain that will appear on the site.
- `RPC` - A valid RPC URL for accessing the chain.
- `CHAINID` - ChainID of the chain.
- `EXPLORER` - Base URL of the standard explorer's site.
- `IMAGE` - URL of the icon of the chain that will be shown in the dropdown.
- `MAX_PRIORITY_FEE` - Maximum tip per faucet drop in **wei** (the 10^-18 unit; for EIP-1559 supported chains).
- `MAX_FEE` - Maximum fee that can be paid for a faucet drop in **wei** (the 10^-18 unit).
- `DRIP_AMOUNT` - Amount of coins to send per request in **gwei** (the 10^-9 unit).
- `RECALIBRATE` _(optional)_ - Number of seconds after which the nonce and balance will recalibrate.
- `RATELIMIT` - Number of requests (`MAX_LIMIT`) to allow per user within `WINDOW_SIZE` (in minutes).

Add the configuration to the `evmchains` array inside the [config.json](https://github.com/ava-labs/avalanche-faucet/blob/main/config.json) file and make a PR.

Building and Deploying a Faucet
-------------------------------

You can also build and deploy your own faucet by using the [Avalanche Faucet](https://github.com/ava-labs/avalanche-faucet) repository.

### Requirements

- [Node](https://nodejs.org/en) >= 17.0 and [npm](https://www.npmjs.com/) >= 8.0
- [Google's reCAPTCHA](https://www.google.com/recaptcha/intro/v3.html) v3 keys
- [Docker](https://www.docker.com/get-started/)

### Installation

Clone this repository at your preferred location:

```bash
git clone https://github.com/ava-labs/avalanche-faucet
```

The cloning method used here is HTTPS, but SSH can be used too:

`git clone git@github.com:ava-labs/avalanche-faucet.git`

You can find out more about SSH and how to use it [here](https://docs.github.com/en/authentication/connecting-to-github-with-ssh/about-ssh).

### Client-Side Configurations

We need to configure our application with the server API endpoints and CAPTCHA site keys.
All the client-side configurations are in the `client/src/config.json` file. Since there are no secrets on the client side, we do not need any environment variables. Update the config file according to your needs.

```json
{
  "banner": "/banner.png",
  "apiBaseEndpointProduction": "/api/",
  "apiBaseEndpointDevelopment": "http://localhost:8000/api/",
  "apiTimeout": 10000,
  "CAPTCHA": {
    "siteKey": "6LcNScYfAAAAAJH8fauA-okTZrmAxYqfF9gOmujf",
    "action": "faucetdrip"
  }
}
```

Put in Google's reCAPTCHA site key, without which the faucet client can't send the necessary CAPTCHA response to the server. This key is not a secret and can be public.

In the above file, there are two base endpoints for the faucet server: `apiBaseEndpointProduction` and `apiBaseEndpointDevelopment`. In production mode, the client side will be served as static content over the server's endpoint, so we do not have to provide the server's IP address or domain. The URL path should be valid and point to where the server's APIs are hosted. If the API endpoints have a leading `/v1/api` and the server is running on localhost at port 3000, then you should use `http://localhost:3000/v1/api` or `/v1/api/`, depending on whether it is production or development.

### Server-Side Configurations

On the server side, we need to configure two files: `.env` for secret keys and `config.json` for chain and API rate-limiting configurations.

#### Setup Environment Variables

Set up the environment variables with your private key and reCAPTCHA secret. Make a `.env` file in your preferred location with the following credentials; this file will not be committed to the repository.

The faucet server can handle multiple EVM chains, and therefore requires private keys for addresses with funds on each of the chains. If you have funds on the same address on every chain, then you can specify them with the single variable `PK`.
But if you have funds on different addresses on different chains, then you can provide each of the private keys against the ID of the chain, as shown below.

```bash
C="C chain private key"
WAGMI="Wagmi chain private key"
PK="Sender Private Key with Funds in it"
CAPTCHA_SECRET="Google reCAPTCHA Secret"
```

`PK` will act as a fallback private key in case the key for any chain is not provided.

#### Setup EVM Chain Configurations

You can create a faucet server for any EVM chain by making changes in the `config.json` file. Add your chain configuration, as shown below, to the `evmchains` object. Configurations for Fuji's C-Chain and the WAGMI chain are shown below as examples.

```json
"evmchains": [
  {
    "ID": "C",
    "NAME": "Fuji (C-Chain)",
    "TOKEN": "AVAX",
    "RPC": "https://api.avax-test.network/ext/C/rpc",
    "CHAINID": 43113,
    "EXPLORER": "https://testnet.snowtrace.io",
    "IMAGE": "/avaxred.png",
    "MAX_PRIORITY_FEE": "2000000000",
    "MAX_FEE": "100000000000",
    "DRIP_AMOUNT": 2000000000,
    "RECALIBRATE": 30,
    "RATELIMIT": {
      "MAX_LIMIT": 1,
      "WINDOW_SIZE": 1440
    }
  },
  {
    "ID": "WAGMI",
    "NAME": "WAGMI Testnet",
    "TOKEN": "WGM",
    "RPC": "https://subnets.avax.network/wagmi/wagmi-chain-testnet/rpc",
    "CHAINID": 11111,
    "EXPLORER": "https://subnets.avax.network/wagmi/wagmi-chain-testnet/explorer",
    "IMAGE": "/wagmi.png",
    "MAX_PRIORITY_FEE": "2000000000",
    "MAX_FEE": "100000000000",
    "DRIP_AMOUNT": 2000000000,
    "RATELIMIT": {
      "MAX_LIMIT": 1,
      "WINDOW_SIZE": 1440
    }
  }
]
```

In the above configuration, the drip amount is in `nAVAX` (`gwei`), whereas the fees are in `wei`. For example, with these configurations, the faucet will send `2 AVAX` per drop, with the maximum fee per gas being `100 nAVAX` and the priority fee `2 nAVAX`. With the `RATELIMIT` settings shown, the per-chain rate limiter will accept only 1 request per user every 1,440 minutes (24 hours) on each chain.
Failed requests are skipped, though, so that users can request tokens again even if there is some internal error in the application. The global rate limiter, on the other hand, will allow 15 requests per minute on every API; there, failed requests are also counted, so that no one can abuse the APIs.

### API Endpoints

This server exposes the following APIs.

#### Health API

The `/health` API will always return a response with a `200` status code. This endpoint can be used to check the health of the server.

```bash
curl http://localhost:8000/health
```

Response:

#### Get Faucet Address

This API is used for fetching the faucet address:

```bash
curl http://localhost:8000/api/faucetAddress?chain=C
```

It will give the following response:

```bash
0x3EA53fA26b41885cB9149B62f0b7c0BAf76C78D4
```

#### Get Faucet Balance

This API is used for fetching the faucet balance:

```bash
curl http://localhost:8000/api/getBalance?chain=C
```

#### Send Token

This API endpoint handles token requests from users. It returns the transaction hash as a receipt of the faucet drip:

```bash
curl -d '{
  "address": "0x3EA53fA26b41885cB9149B62f0b7c0BAf76C78D4",
  "chain": "C"
}' -H 'Content-Type: application/json' http://localhost:8000/api/sendToken
```

The Send Token API requires a CAPTCHA response token that is generated using the CAPTCHA site key on the client side. Since we can't generate and pass this token while making a curl request, we have to disable CAPTCHA verification for testing purposes. You can find the steps to disable it in the next sections.
The response is shown below:

```json
{
  "message": "Transaction successful on Avalanche C Chain!",
  "txHash": "0x3d1f1c3facf59c5cd7d6937b3b727d047a1e664f52834daf20b0555e89fc8317"
}
```

### Rate Limiters

The rate limiters are applied globally (on all endpoints) as well as on the `/api/sendToken` API. These can be configured from the `config.json` file. Rate-limiting parameters for chains are passed in the chain configuration, as shown above.

```json
"GLOBAL_RL": {
  "ID": "GLOBAL",
  "RATELIMIT": {
    "REVERSE_PROXIES": 4,
    "MAX_LIMIT": 40,
    "WINDOW_SIZE": 1,
    "PATH": "/",
    "SKIP_FAILED_REQUESTS": false
  }
}
```

There can be multiple proxies between the server and the client. The server sees the IP address of the proxy adjacent to it, which may not be the client's actual IP. The IPs of all the proxies that the request has hopped through are stuffed into the **x-forwarded-for** header array. But proxies in between can easily manipulate these headers to bypass rate limiters, so we cannot trust all the proxies, and hence all the IPs, inside the header.

The proxies set up by the owner of the server (reverse proxies) are the trusted proxies: we can rely on them to have stuffed the actual IPs of the callers in between. Any proxy that is not set up by the server owner should be considered untrusted. So, we jump to the IP address added by the last proxy that we trust. The number of jumps can be configured in the `config.json` file inside the `GLOBAL_RL` object.

![faucet 5](/images/faucet1.png)

#### Clients Behind the Same Proxy

Consider the diagram below. The server is set up with 2 reverse proxies. If the client is behind proxies, then we cannot get the client's actual IP and will instead consider the proxy's IP as the client's IP.
And if another client is behind the same proxy, those clients will be considered a single entity and might get rate-limited sooner.

![faucet 6](/images/faucet2.png)

Therefore, users are advised to avoid using any proxy for accessing applications that have critical rate limits, like this faucet.

#### Wrong Number of Reverse Proxies

If you want to deploy this faucet and have some reverse proxies in between, then you should configure this inside the `GLOBAL_RL` key of the `config.json` file. If it is not configured properly, users might get rate-limited very frequently, since the server-side proxies' IP addresses would be treated as the client's IP. You can verify this in the code [here](https://github.com/ava-labs/avalanche-faucet/blob/23eb300635b64130bc9ce10d9e894f0a0b3d81ea/middlewares/rateLimiter.ts#L25).

```json
"GLOBAL_RL": {
  "ID": "GLOBAL",
  "RATELIMIT": {
    "REVERSE_PROXIES": 4,
    ...
  }
}
```

![faucet 7](/images/faucet3.png)

It is also quite common to have Cloudflare as the last reverse proxy or the exposed server. Cloudflare provides a **cf-connecting-ip** header, which is the IP of the client that made the request to the faucet, and hence to Cloudflare. We use this as the default.

### CAPTCHA Verification

CAPTCHA is required to prove the user is a human and not a bot. For this purpose, we use [Google's reCAPTCHA](https://www.google.com/recaptcha/intro/v3.html). The server side requires the `CAPTCHA_SECRET`, which should not be exposed. You can set the threshold score for users to pass the CAPTCHA test [here](https://github.com/ava-labs/avalanche-faucet/blob/23eb300635b64130bc9ce10d9e894f0a0b3d81ea/middlewares/verifyCaptcha.ts#L20).

You can disable CAPTCHA verification and the rate limiters for testing purposes by tweaking the `server.ts` file.
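Server-side reCAPTCHA v3 verification boils down to POSTing the client's token to Google's `siteverify` endpoint and comparing the returned score against a threshold. A minimal TypeScript sketch of the idea — the helper names and the `0.3` threshold are illustrative, not the faucet's actual middleware:

```typescript
// Illustrative sketch of reCAPTCHA v3 verification; not the faucet's real code.
// Google's siteverify endpoint returns { success, score, action, ... }.
const SITEVERIFY_URL = "https://www.google.com/recaptcha/api/siteverify";

// Pure decision helper: pass only if verification succeeded, the action
// matches the one configured on the client, and the score (0 = most
// suspicious, 1 = most likely human) clears the threshold.
function passesCaptcha(
  resp: { success: boolean; score: number; action: string },
  expectedAction: string,
  threshold = 0.3, // example value; the real threshold lives in verifyCaptcha.ts
): boolean {
  return resp.success && resp.action === expectedAction && resp.score >= threshold;
}

// Network call (requires Node 18+ for the global fetch API).
async function verifyToken(secret: string, token: string) {
  const body = new URLSearchParams({ secret, response: token });
  const res = await fetch(SITEVERIFY_URL, { method: "POST", body });
  return (await res.json()) as { success: boolean; score: number; action: string };
}
```

In the faucet, the equivalent check runs as Express middleware in front of `/api/sendToken`, which is why removing that middleware (next section) disables CAPTCHA entirely.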
### Disabling Rate Limiters

Comment out or remove these two lines from the `server.ts` file:

```ts title="server.ts"
new RateLimiter(app, [GLOBAL_RL]);
new RateLimiter(app, evmchains);
```

### Disabling CAPTCHA Verification

Remove the `captcha.middleware` from the `sendToken` API.

### Starting the Faucet

Follow the commands below to start your local faucet.

#### Installing Dependencies

This will concurrently install dependencies for both the client and the server. With the default port configuration, the client will start on port 3000 and the server on port 8000 in development mode.

#### Starting in Development Mode

This will concurrently start the server and client in development mode.

#### Building for Production

The following command will build the server and client in the `build/` and `build/client` directories.

#### Starting in Production Mode

This command should only be run after successfully building the client- and server-side code.

### Setting up with Docker

Follow these steps to run the application in a Docker container.

#### Build Docker Image

Docker images serve as built versions of our application that can be deployed in Docker containers:

```bash
docker build . -t faucet-image
```

#### Starting the Application inside a Docker Container

Now we can create any number of containers from the `faucet-image` image above.
We also have to supply the `.env` file, or the environment variables with the secret keys, when creating the container. Once the container is created, these variables and configurations will be persisted, and the container can be started or stopped with a single command.

```bash
docker run -p 3000:8000 --name faucet-container --env-file ../.env faucet-image
```

The server runs on port 8000, and Docker exposes this port to the outside world (it is exposed in the `Dockerfile`). But we cannot interact with the container port directly, so we bind it to a host port; here we have chosen host port 3000. The flag `-p 3000:8000` achieves this.

This will start the faucet application in a Docker container on port 3000 (port 8000 on the container). You can interact with the application by visiting [http://localhost:3000](http://localhost:3000) in your browser.

#### Stopping the Container

You can stop the container using the following command:

```bash
docker stop faucet-container
```

#### Restarting the Container

To restart the container, use the following command:

```bash
docker start faucet-container
```

Using the Faucet
----------------

Using the faucet is quite straightforward, but for the sake of completeness, let's go through the steps to collect your first test coins.

### Visit the Avalanche Faucet Site

Go to [https://core.app/tools/testnet-faucet/](https://core.app/tools/testnet-faucet/). You will see various network parameters like network name, faucet balance, drop amount, drop limit, faucet address, etc.
![faucet 1](/images/faucet4.png)

### Select Network

Use the dropdown to select the network of your choice and get some free coins (each network may have a different drop amount).

![faucet 2](/images/faucet5.png)

### Put Address and Request Coins

If you already have an AVAX balance greater than zero on Mainnet, paste your C-Chain address there and request test tokens. Otherwise, please request a faucet coupon on [Guild](https://guild.xyz/avalanche). Admins and mods on the official [Discord](https://discord.com/invite/RwXY7P6) can provide testnet AVAX if developers are unable to obtain it from the other two options.

Within a second, you will get a **transaction hash** for the processed transaction. The hash is a hyperlink to the Avalanche L1's explorer; you can see the transaction status by clicking on it.

![faucet 3](/images/faucet6.png)

### More Interactions

That's not all. Using the buttons shown below, you can go to the Avalanche L1's explorer or add the Avalanche L1 to browser wallet extensions like Core or MetaMask with a single click.

![faucet 4](/images/faucet7.png)

### Probable Errors and Troubleshooting

Errors are not expected, but if you do encounter one of the errors shown below, try the corresponding troubleshooting steps. If none of them works, reach us through [Discord](https://discord.com/channels/578992315641626624/).

1. **Too many requests. Please try again after X minutes**: This is a rate-limiting message. Every Avalanche L1 can set its own drop limits. The message means that you have reached your drop limit, that is, the number of times you can request coins within a window of X minutes. Try requesting again after X minutes.
If you are facing this problem even when requesting for the first time in the window, you may be behind a proxy, Wi-Fi, or VPN service that is also being used by another user.

2. **CAPTCHA verification failed! Try refreshing**: We use v3 of [Google's reCAPTCHA](https://developers.google.com/recaptcha/docs/v3). This version uses scores between 0 and 1 to rate the interaction of humans with the site, with 0 being the most suspicious. You do not have to solve any puzzle or tick the **I am not a Robot** checkbox; the score is calculated automatically. We require users to score at least 0.3 to use the faucet. This is configurable, and we will update the threshold once we have broader data.

If you are facing this issue, try refreshing your page, disabling ad blockers, or switching off any VPN. You can follow this [guide](https://2captcha.com/blog/google-doesnt-accept-recaptcha-answers) to get rid of this issue.

3. **Internal RPC error! Please try after sometime**: This is an internal error in the Avalanche L1's node, on which we make the RPC call for sending transactions. A regular check updates the RPC's health status every 30 seconds (by default, or whatever is set in the configuration). This happens only in rare scenarios, and there is little you can do but wait.

4. **Timeout of 10000ms exceeded**: There can be many reasons for this message: an internal server error, the request not being received by the server, slow internet, etc. Try again after some time, and if the problem persists, raise the issue on our [Discord](https://discord.com/channels/578992315641626624/) server.

5. **Couldn't see any transaction status on explorer**: The transaction hash that you get for each drop is pre-computed using the expected nonce, amount, and receiver's address. Though transactions on Avalanche are near-instant, the explorer may take time to index them.
You should wait a few more seconds before raising any issue or reaching out to us.

# Getting Started (/docs/api-reference/metrics-api/getting-started)

---
title: Getting Started
description: Getting Started with the Metrics API
icon: Rocket
---

The Metrics API is designed to be simple and accessible, requiring no authentication to get started. Just choose your endpoint, make your query, and instantly access on-chain data and analytics to power your applications.

The following query retrieves the daily count of active addresses on the Avalanche C-Chain (43114) over the course of one month (from August 1, 2024 12:00:00 AM to August 31, 2024 12:00:00 AM), providing insights into user activity on the chain for each day during that period. With this data you can use JavaScript visualization tools like Chart.js, D3.js, Highcharts, Plotly.js, or Recharts to create interactive and insightful visual representations.

```bash
curl --request GET \
  --url 'https://metrics.avax.network/v2/chains/43114/metrics/activeAddresses?startTimestamp=1722470400&endTimestamp=1725062400&timeInterval=day&pageSize=31'
```

Response:

```json
{
  "results": [
    { "value": 37738, "timestamp": 1724976000 },
    { "value": 53934, "timestamp": 1724889600 },
    { "value": 58992, "timestamp": 1724803200 },
    { "value": 73792, "timestamp": 1724716800 },
    { "value": 70057, "timestamp": 1724630400 },
    { "value": 46452, "timestamp": 1724544000 },
    { "value": 46323, "timestamp": 1724457600 },
    { "value": 73399, "timestamp": 1724371200 },
    { "value": 52661, "timestamp": 1724284800 },
    { "value": 52497, "timestamp": 1724198400 },
    { "value": 50574, "timestamp": 1724112000 },
    { "value": 46999, "timestamp": 1724025600 },
    { "value": 45320, "timestamp": 1723939200 },
    { "value": 54964, "timestamp": 1723852800 },
    { "value": 60251, "timestamp": 1723766400 },
    { "value": 48493, "timestamp": 1723680000 },
    { "value": 71091, "timestamp": 1723593600 },
    { "value": 50456, "timestamp": 1723507200 },
    { "value": 46989, "timestamp": 1723420800
},
    { "value": 50984, "timestamp": 1723334400 },
    { "value": 46988, "timestamp": 1723248000 },
    { "value": 66943, "timestamp": 1723161600 },
    { "value": 64209, "timestamp": 1723075200 },
    { "value": 57478, "timestamp": 1722988800 },
    { "value": 80553, "timestamp": 1722902400 },
    { "value": 70472, "timestamp": 1722816000 },
    { "value": 53678, "timestamp": 1722729600 },
    { "value": 70818, "timestamp": 1722643200 },
    { "value": 99842, "timestamp": 1722556800 },
    { "value": 76515, "timestamp": 1722470400 }
  ]
}
```

Congratulations! You’ve successfully made your first query to the Metrics API. 🚀🚀🚀

# Metrics API (/docs/api-reference/metrics-api)

---
title: Metrics API
description: Access real-time and historical metrics for Avalanche networks
icon: ChartLine
---

### What is the Metrics API?

The Metrics API equips web3 developers with a robust suite of tools to access and analyze on-chain activity across Avalanche’s primary network, Avalanche L1s, and other supported EVM chains. This API delivers comprehensive metrics and analytics, enabling you to seamlessly integrate historical data on transactions, gas consumption, throughput, staking, and more into your applications.

The Metrics API, along with the [Data API](/docs/api-reference/data-api), is the driving force behind every graph you see on the [Avalanche Explorer](https://explorer.avax.network/). From transaction trends to staking insights, the visualizations and data presented are all powered by these APIs, offering real-time and historical insights that are essential for building sophisticated, data-driven blockchain products.

### Features

* **Chain Throughput:** Retrieve detailed metrics on gas consumption, Transactions Per Second (TPS), and gas prices, including rolling windows of data for granular analysis.
* **Cumulative Metrics:** Access cumulative data on addresses, contracts, deployers, and transaction counts, providing insights into network growth over time.
* **Staking Information:** Obtain staking-related data, including the number of validators and delegators, along with their respective weights, across different subnets. * **Blockchains and Subnets:** Get information about supported blockchains, including EVM Chain IDs, blockchain IDs, and subnet associations, facilitating multi-chain analytics. * **Composite Queries:** Perform advanced queries by combining different metric types and conditions, enabling detailed and customizable data retrieval. The Metrics API is designed to provide developers with powerful tools to analyze and monitor on-chain activity across Avalanche’s primary network, Avalanche L1s, and other supported EVM chains. Below is an overview of the key features available: ### Chain Throughput Metrics * **Gas Consumption**
Track the average and maximum gas consumption per second, helping to understand network performance and efficiency. * **Transactions Per Second (TPS)**
Monitor the average and peak TPS to assess the network’s capacity and utilization. * **Gas Prices**
Analyze average and maximum gas prices over time to optimize transaction costs and predict fee trends.

### Cumulative Metrics

* **Address Growth**
Access the cumulative number of active addresses on a chain, providing insights into network adoption and user activity. * **Contract Deployment**
Monitor the cumulative number of smart contracts deployed, helping to gauge developer engagement and platform usage. * **Transaction Count**
Track the cumulative number of transactions, offering a clear view of network activity and transaction volume. ### Staking Information * **Validator and Delegator Counts**
Retrieve the number of active validators and delegators for a given L1, crucial for understanding network security and decentralization. * **Staking Weights**
Access the total stake weight of validators and delegators, helping to assess the distribution of staked assets across the network.

### Rolling Window Analytics

* **Short-Term and Long-Term Metrics:** Perform rolling window analysis on various metrics like gas used, TPS, and gas prices, allowing for both short-term and long-term trend analysis.
* **Customizable Time Frames:** Choose from different time intervals (hourly, daily, monthly) to suit your specific analytical needs.

### Blockchain and L1 Information

* **Chain and L1 Mapping:** Get detailed information about EVM chains and their associated L1s, including chain IDs, blockchain IDs, and subnet IDs, facilitating cross-chain analytics.

### Advanced Composite Queries

* **Custom Metrics Combinations**: Combine multiple metrics and apply logical operators to perform sophisticated queries, enabling deep insights and tailored analytics.
* **Paginated Results:** Handle large datasets efficiently with paginated responses, ensuring seamless data retrieval in your applications.

The Metrics API equips developers with the tools needed to build robust analytics, monitoring, and reporting solutions, leveraging the full power of multi-chain data across the Avalanche ecosystem and beyond.

# Rate Limits (/docs/api-reference/metrics-api/rate-limits)

---
title: Rate Limits
description: Rate Limits for the Metrics API
icon: Clock
---

Rate limiting is managed through a weighted scoring system, known as Compute Units (CUs). Each API request consumes a specified number of CUs, determined by the complexity of the request. This system is designed to accommodate basic requests while efficiently handling more computationally intensive operations.
## Rate Limit Tiers The maximum CUs (rate-limiting score) for a user depends on their subscription level and is delineated in the following table: | Subscription Level | Per Minute Limit (CUs) | Per Day Limit (CUs) | | :----------------- | :--------------------- | :------------------ | | Free | 8,000 | 1,200,000 | > We are working on new subscription tiers with higher rate limits to support even greater request volumes. ## Rate Limit Categories The CUs for each category are defined in the following table: | Weight | CU Value | | :----- | :------- | | Free | 1 | | Small | 20 | | Medium | 100 | | Large | 500 | | XL | 1000 | | XXL | 3000 | ## Rate Limits for Metrics Endpoints The CUs for each route are defined in the table below: | Endpoint | Method | Weight | CU Value | | :---------------------------------------------------------- | :----- | :----- | :------- | | `/v2/health-check` | GET | Free | 1 | | `/v2/chains` | GET | Free | 1 | | `/v2/chains/{chainId}` | GET | Free | 1 | | `/v2/chains/{chainId}/metrics/{metric}` | GET | Medium | 100 | | `/v2/chains/{chainId}/teleporterMetrics/{metric}` | GET | Medium | 100 | | `/v2/chains/{chainId}/rollingWindowMetrics/{metric}` | GET | Medium | 100 | | `/v2/networks/{network}/metrics/{metric}` | GET | Medium | 100 | | `/v2/chains/{chainId}/contracts/{address}/nfts:listHolders` | GET | Large | 500 | | `/v2/chains/{chainId}/contracts/{address}/balances` | GET | XL | 1000 | | `/v2/chains/43114/btcb/bridged:getAddresses` | GET | Large | 500 | | `/v2/subnets/{subnetId}/validators:getAddresses` | GET | Large | 500 | | `/v2/lookingGlass/compositeQuery` | POST | XXL | 3000 | All rate limits, weights, and CU values are subject to change. # Usage Guide (/docs/api-reference/metrics-api/usage-guide) --- title: Usage Guide description: Usage Guide for the Metrics API icon: Code --- The Metrics API does not require authentication, making it straightforward to integrate into your applications. 
You can start making API requests without the need for an API key or any authentication headers.

#### Making Requests

You can interact with the Metrics API by sending HTTP GET requests to the provided endpoints. Below is an example of a simple `curl` request:

```bash
curl -H "Content-Type: application/json" "https://metrics.avax.network/v1/avg_tps/{chainId}"
```

In the request above, replace `{chainId}` with the specific chain ID you want to query. For example, to retrieve the average transactions per second (TPS) for chain ID 43114, use the following endpoint:

```bash
curl "https://metrics.avax.network/v1/avg_tps/43114"
```

The API will return a JSON response containing the average TPS for the specified chain over a series of timestamps; `lastRun` is a timestamp indicating when the last data point was updated:

```json
{
  "results": [
    {"timestamp": 1724716800, "value": 1.98},
    {"timestamp": 1724630400, "value": 2.17},
    {"timestamp": 1724544000, "value": 1.57},
    {"timestamp": 1724457600, "value": 1.82},
    // Additional data points...
  ],
  "status": 200,
  "lastRun": 1724780812
}
```

### Rate Limits

Even though the Metrics API does not require authentication, it still enforces rate limits to ensure stability and performance. If you exceed these limits, the server will respond with a `429 Too Many Requests` HTTP response code.

### Error Types

The API generates standard error responses along with error codes based on the provided requests and parameters. Typically, response codes within the `2XX` range signify successful requests, while those within the `4XX` range point to errors originating from the client's side. Response codes within the `5XX` range indicate problems on the server's side.
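A client can treat the `429` response described above as retryable. The sketch below backs off exponentially between attempts; the endpoint path and retry policy are illustrative assumptions, not part of an official client:

```javascript
// Hedged sketch: retry a Metrics API call on 429 Too Many Requests.
// Uses the global fetch available in Node 18+.
const BASE_URL = 'https://metrics.avax.network';

// Exponential backoff: 1s, 2s, 4s, ... capped at 30s.
function backoffDelayMs(attempt) {
  return Math.min(1000 * 2 ** attempt, 30000);
}

async function fetchWithRetry(path, maxRetries = 4, fetchImpl = fetch) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const res = await fetchImpl(`${BASE_URL}${path}`);
    if (res.status !== 429) return res; // success, or a non-rate-limit error
    // Rate limited: wait before the next attempt.
    await new Promise((resolve) => setTimeout(resolve, backoffDelayMs(attempt)));
  }
  throw new Error('Rate limit retries exhausted');
}
```

Capping the delay keeps a long outage from stretching a single request indefinitely; tune `maxRetries` to your latency budget.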
The error response body is formatted like this:

```json
{
  "message": ["Invalid address format"], // route specific error message
  "error": "Bad Request", // error type
  "statusCode": 400 // http response code
}
```

Let's go through every error code that we can respond with:

| Error Code | Error Type            | Description |
| :--------- | :-------------------- | :---------- |
| **400**    | Bad Request           | Bad requests generally mean the client has passed invalid or malformed parameters. Error messages in the response could help in evaluating the error. |
| **401**    | Unauthorized          | When a client attempts to access resources that require authorization credentials but lacks proper authentication in the request, the server responds with 401. |
| **403**    | Forbidden             | When a client attempts to access resources with valid credentials but doesn't have the privilege to perform that action, the server responds with 403. |
| **404**    | Not Found             | The 404 error is mostly returned when the client requests a mistyped URL, or the requested resource has been moved, deleted, or never existed. |
| **500**    | Internal Server Error | The 500 error is a generic server-side error returned for any uncaught and unexpected issues on the server side. This should be very rare; reach out to us if the problem persists. |
| **502**    | Bad Gateway           | This is an internal error indicating that an invalid response was received by the client-facing proxy or gateway from the upstream server. |
| **503**    | Service Unavailable   | The 503 error is returned for certain routes on a particular Subnet. This indicates an internal problem with our Subnet node, and may not necessarily mean the Subnet is down or affected. |

### Pagination

For endpoints that return large datasets, the Metrics API employs pagination to manage the results. When querying for lists of data, you may receive a `nextPageToken` in the response, which can be used to request the next page of data.

Example response with pagination:

```json
{
  "results": [...],
  "nextPageToken": "3d22deea-ea64-4d30-8a1e-c2a353b67e90"
}
```

To retrieve the next set of results, include the `nextPageToken` in your subsequent request:

```bash
curl -H "Content-Type: application/json" \
  "https://metrics.avax.network/v1/avg_tps/{chainId}?pageToken=3d22deea-ea64-4d30-8a1e-c2a353b67e90"
```

### Pagination Details

#### Page Token Structure

The `nextPageToken` is a UUID-based token provided in the response when additional pages of data are available. This token serves as a pointer to the next set of data.

* **UUID Generation**: The `nextPageToken` is generated uniquely for each pagination scenario, ensuring it cannot be guessed or predicted.
* **Expiration**: The token is valid for 24 hours from the time it is generated. After this period, the token will expire, and a new request starting from the initial page will be required.
* **Presence**: The token is only included in the response when there is additional data available. If no more data exists, the token will not be present.

#### Integration and Usage

To use the pagination system effectively:

* Check if the `nextPageToken` is present in the response.
* If present, include this token in the subsequent request to fetch the next page of results.
* Ensure that the follow-up request is made within the 24-hour window after the token was generated to avoid token expiration.

By utilizing the pagination mechanism, you can efficiently manage and navigate through large datasets, ensuring a smooth data retrieval process.
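The steps above can be sketched as a small loop that follows `nextPageToken` until it disappears. `fetchPage` is an assumed stand-in for whatever HTTP call your application makes (appending `?pageToken=...` as in the curl example above):

```javascript
// Hedged sketch: drain a paginated endpoint by following nextPageToken.
// `fetchPage(token)` is a hypothetical helper that performs the HTTP GET,
// passing `token` as the pageToken query parameter when defined.
async function fetchAllPages(fetchPage) {
  const results = [];
  let pageToken; // undefined on the first request
  do {
    const page = await fetchPage(pageToken);
    results.push(...page.results);
    pageToken = page.nextPageToken; // absent on the last page
  } while (pageToken);
  return results;
}
```

Remember that tokens expire after 24 hours, so a long-running job should restart from the first page rather than reuse a stale token.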
### Swagger API Reference

You can explore the full API definitions and interact with the endpoints in the Swagger documentation at: [https://metrics.avax.network/api](https://metrics.avax.network/api)

# Webhooks API (/docs/api-reference/webhook-api)

---
title: Webhooks API
description: Real-time notifications for blockchain events on Avalanche networks
icon: Webhook
---

### What is the Webhooks API?

The Webhooks API lets you monitor real-time events on the Avalanche ecosystem, including the C-Chain, L1s, and the Platform Chain (P and X chains). By subscribing to specific events, you can receive instant notifications for on-chain occurrences without continuously polling the network.

### Key Features:

* **Real-time notifications:** Receive immediate updates on specified on-chain activities without polling.
* **Customizable:** Specify the desired event type to listen for, customizing notifications based on your individual requirements.
* **Secure:** Employ shared secrets and signature-based verification to ensure that notifications originate from a trusted source.
* **Broad Coverage:**
  * **C-Chain:** Mainnet and testnet, covering smart contract events, NFT transfers, and wallet-to-wallet transactions.
  * **Platform Chain (P and X chains):** Address and validator events, staking activities, and other platform-level transactions.

By supporting both the C-Chain and the Platform Chain, you can monitor an even wider range of Avalanche activities.

### Use cases

* **NFT marketplace transactions**: Get alerts for NFT minting, transfers, auctions, bids, sales, and other interactions within NFT marketplaces.
* **Wallet notifications**: Receive alerts when an address performs actions such as sending, receiving, swapping, or burning assets.
* **DeFi activities**: Receive notifications for various DeFi activities such as liquidity provisioning, yield farming, borrowing, lending, and liquidations.
* **Staking rewards:** Get real-time notifications when a validator stakes, receives delegation, or earns staking rewards on the P-Chain, enabling seamless monitoring of validator earnings and participation.

## APIs for continuous polling vs. Webhooks for events data

The following example uses the address activity webhook topic to illustrate the difference between polling an API for wallet event data versus subscribing to a webhook topic to receive wallet events.

### Continuous polling

Continuous polling is a method where your application repeatedly sends requests to an API at fixed intervals to check for new data or events. Think of it like checking your mailbox every five minutes to see if new mail has arrived, whether or not anything is there.

* You want to track new transactions for a specific wallet.
* Your application calls an API every few seconds (e.g., every 5 seconds) with a query like, "Are there any new transactions for this wallet since my last check?"
* The API responds with either new transaction data or a confirmation that nothing has changed.

**Downsides of continuous polling**

* **Inefficiency:** Your app makes requests even when no new transactions occur, wasting computational resources, bandwidth, and potentially incurring higher API costs. For example, if no transactions happen for an hour, your app still sends hundreds of unnecessary requests.
* **Delayed updates:** Since polling happens at set intervals, there's a potential delay in detecting events. If a transaction occurs just after a poll, your app won't know until the next check, up to 5 seconds later in our example. This lag can be critical for time-sensitive applications, like trading or notifications.
* **Scalability challenges:** Monitoring one wallet might be manageable, but if you're tracking dozens or hundreds of wallets, the number of requests multiplies quickly.
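The polling pattern described above can be sketched as follows; `getTransactionsSince` is a hypothetical API call, not a real Avalanche endpoint:

```javascript
// Hedged sketch of the polling loop described above: ask every few
// seconds whether a wallet has new transactions since the last check.
function startPolling(getTransactionsSince, onNewTransactions, intervalMs = 5000) {
  let lastChecked = Date.now();
  const timer = setInterval(async () => {
    const txs = await getTransactionsSince(lastChecked); // fires even when idle
    lastChecked = Date.now();
    if (txs.length > 0) onNewTransactions(txs); // most polls return nothing
  }, intervalMs);
  return () => clearInterval(timer); // call the returned function to stop
}
```

At a 5-second interval, even a completely idle wallet costs 720 requests per hour; the webhook model removes that overhead.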
### Webhook subscription

Webhooks are an event-driven alternative where your application subscribes to specific events, and the Avalanche service notifies you instantly when those events occur. It's like signing up for a delivery alert: when the package (event) arrives, you get a text message right away, instead of checking the tracking site repeatedly.

* Your app registers a webhook specifying an endpoint (e.g., `https://your-app.com/webhooks/transactions`) and the event type (e.g., `address_activity`).
* When a new transaction occurs, we send a POST request to your endpoint with the transaction details.
* Your app receives the data only when something happens, with no need to ask repeatedly.

**Benefits of Avalanche webhooks**

* **Real-Time updates:** Notifications arrive the moment a transaction is processed, eliminating delays inherent in polling. This is ideal for applications needing immediate responses, like alerting users or triggering automated actions.
* **Efficiency:** Your app doesn't waste resources making requests when there's no new data. Data flows only when events occur. This reduces server load, bandwidth usage, and API call quotas.
* **Scalability:** You can subscribe to events for multiple wallets or event types (e.g., transactions, smart contract calls) without increasing the number of requests your app makes. We handle the event detection and delivery, so your app scales effortlessly as monitoring needs grow.

## Event payload structure

The Event structure always begins with the following parameters:

```json theme={null}
{
  "webhookId": "6d1bd383-aa8d-47b5-b793-da6d8a115fde",
  "eventType": "address_activity",
  "messageId": "8e4e7284-852a-478b-b425-27631c8d22d2",
  "event": {}
}
```

**Parameters:**

* `webhookId`: Unique identifier for the webhook in your account.
* `eventType`: The event that caused the webhook to be triggered. In the future there will be multiple event types; for the time being, only the `address_activity` event is supported.
The `address_activity` event gets triggered whenever the specified addresses participate in a token or AVAX transaction.
* `messageId`: Unique identifier per event sent.
* `event`: Event payload. It contains details about the transaction, logs, and traces. By default, logs and internal transactions are not included; to include them, use `"includeLogs": true` and `"includeInternalTxs": true`.

### Address Activity webhook

The address activity webhook allows you to track any interaction with any address. Here is an example of this type of event:

```json theme={null}
{
  "webhookId": "263942d1-74a4-4416-aeb4-948b9b9bb7cc",
  "eventType": "address_activity",
  "messageId": "94df1881-5d93-49d1-a1bd-607830608de2",
  "event": {
    "transaction": {
      "blockHash": "0xbd093536009f7dd785e9a5151d80069a93cc322f8b2df63d373865af4f6ee5be",
      "blockNumber": "44568834",
      "from": "0xf73166f0c75a3DF444fAbdFDC7e5EE4a73fA51C7",
      "gas": "651108",
      "gasPrice": "31466275484",
      "maxFeePerGas": "31466275484",
      "maxPriorityFeePerGas": "31466275484",
      "txHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4",
      "txStatus": "1",
      "input":
"0xb80c2f090000000000000000000000000000000000000000000000000000000000000000000000000000000000000000eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee000000000000000000000000b97ef9ef8734c71904d8002f8b6bc66dd9c48a6e000000000000000000000000000000000000000000000000006ca0c737b131f2000000000000000000000000000000000000000000000000000000000011554e000000000000000000000000000000000000000000000000000000006627dadc0000000000000000000000000000000000000000000000000000000000000120000000000000000000000000000000000000000000000000000000000000016000000000000000000000000000000000000000000000000000000000000004600000000000000000000000000000000000000000000000000000000000000001000000000000000000000000000000000000000000000000006ca0c737b131f2000000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000200000000000000000000000000000000000000000000000000000000000000001000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000000a000000000000000000000000000000000000000000000000000000000000000e000000000000000000000000000000000000000000000000000000000000001200000000000000000000000000000000000000000000000000000000000000160000000000000000000000000b31f66aa3c1e785363f0875a1b74e27b85fd66c70000000000000000000000000000000000000000000000000000000000000001000000000000000000000000be882fb094143b59dc5335d32cecb711570ebdd40000000000000000000000000000000000000000000000000000000000000001000000000000000000000000be882fb094143b59dc5335d32cecb711570ebdd400000000000000000000000000000000000000000000000000000000000000010000000000000000000027100e663593657b064e1bae76d28625df5d0ebd44210000000000000000000000000000000000000000000000000000000000000001000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000000000000000000000000000000c00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000004000000
00000000000000000000000000000000000000000000000000000000060000000000000000000000000b31f66aa3c1e785363f0875a1b74e27b85fd66c7000000000000000000000000b97ef9ef8734c71904d8002f8b6bc66dd9c48a6e0000000000000000000000000000000000000000000000000000000000000bb80000000000000000000000000000000000000000000000000000000000000000", "nonce": "4", "to": "0x1dac23e41fc8ce857e86fd8c1ae5b6121c67d96d", "transactionIndex": 0, "value": "30576074978046450", "type": 0, "chainId": "43114", "receiptCumulativeGasUsed": "212125", "receiptGasUsed": "212125", "receiptEffectiveGasPrice": "31466275484", "receiptRoot": "0xf355b81f3e76392e1b4926429d6abf8ec24601cc3d36d0916de3113aa80dd674", "erc20Transfers": [ { "transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4", "type": "ERC20", "from": "0x1daC23e41Fc8ce857E86fD8C1AE5b6121C67D96d", "to": "0xbe882fb094143B59Dc5335D32cEcB711570EbDD4", "value": "30576074978046450", "blockTimestamp": 1713884373, "logIndex": 2, "erc20Token": { "address": "0xB31f66AA3C1e785363F0875A1B74E27b85FD66c7", "name": "Wrapped AVAX", "symbol": "WAVAX", "decimals": 18, "valueWithDecimals": "0.030576074978046448" } }, { "transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4", "type": "ERC20", "from": "0x0E663593657B064e1baE76d28625Df5D0eBd4421", "to": "0xf73166f0c75a3DF444fAbdFDC7e5EE4a73fA51C7", "value": "1195737", "blockTimestamp": 1713884373, "logIndex": 3, "erc20Token": { "address": "0xB97EF9Ef8734C71904D8002F8b6Bc66Dd9c48a6E", "name": "USD Coin", "symbol": "USDC", "decimals": 6, "valueWithDecimals": "1.195737" } }, { "transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4", "type": "ERC20", "from": "0xbe882fb094143B59Dc5335D32cEcB711570EbDD4", "to": "0x0E663593657B064e1baE76d28625Df5D0eBd4421", "value": "30576074978046450", "blockTimestamp": 1713884373, "logIndex": 4, "erc20Token": { "address": "0xB31f66AA3C1e785363F0875A1B74E27b85FD66c7", "name": "Wrapped AVAX", "symbol": "WAVAX", 
"decimals": 18, "valueWithDecimals": "0.030576074978046448" } } ], "erc721Transfers": [], "erc1155Transfers": [], "internalTransactions": [ { "from": "0xf73166f0c75a3DF444fAbdFDC7e5EE4a73fA51C7", "to": "0x1daC23e41Fc8ce857E86fD8C1AE5b6121C67D96d", "internalTxType": "CALL", "value": "30576074978046450", "gasUsed": "212125", "gasLimit": "651108", "transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4" }, { "from": "0x1daC23e41Fc8ce857E86fD8C1AE5b6121C67D96d", "to": "0xF2781Bb34B6f6Bb9a6B5349b24de91487E653119", "internalTxType": "DELEGATECALL", "value": "30576074978046450", "gasUsed": "176417", "gasLimit": "605825", "transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4" }, { "from": "0x1daC23e41Fc8ce857E86fD8C1AE5b6121C67D96d", "to": "0xB97EF9Ef8734C71904D8002F8b6Bc66Dd9c48a6E", "internalTxType": "STATICCALL", "value": "0", "gasUsed": "9750", "gasLimit": "585767", "transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4" }, { "from": "0xB97EF9Ef8734C71904D8002F8b6Bc66Dd9c48a6E", "to": "0x30DFE0469803BcE76F8F62aC24b18d33D3d6FfE6", "internalTxType": "DELEGATECALL", "value": "0", "gasUsed": "2553", "gasLimit": "569571", "transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4" }, { "from": "0x1daC23e41Fc8ce857E86fD8C1AE5b6121C67D96d", "to": "0xB31f66AA3C1e785363F0875A1B74E27b85FD66c7", "internalTxType": "CALL", "value": "30576074978046450", "gasUsed": "23878", "gasLimit": "566542", "transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4" }, { "from": "0x1daC23e41Fc8ce857E86fD8C1AE5b6121C67D96d", "to": "0xB31f66AA3C1e785363F0875A1B74E27b85FD66c7", "internalTxType": "CALL", "value": "0", "gasUsed": "25116", "gasLimit": "540114", "transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4" }, { "from": "0x1daC23e41Fc8ce857E86fD8C1AE5b6121C67D96d", "to": 
"0xbe882fb094143B59Dc5335D32cEcB711570EbDD4", "internalTxType": "CALL", "value": "0", "gasUsed": "81496", "gasLimit": "511279", "transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4" }, { "from": "0xbe882fb094143B59Dc5335D32cEcB711570EbDD4", "to": "0xB31f66AA3C1e785363F0875A1B74E27b85FD66c7", "internalTxType": "STATICCALL", "value": "0", "gasUsed": "491", "gasLimit": "501085", "transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4" }, { "from": "0xbe882fb094143B59Dc5335D32cEcB711570EbDD4", "to": "0x0E663593657B064e1baE76d28625Df5D0eBd4421", "internalTxType": "CALL", "value": "0", "gasUsed": "74900", "gasLimit": "497032", "transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4" }, { "from": "0x0E663593657B064e1baE76d28625Df5D0eBd4421", "to": "0xB97EF9Ef8734C71904D8002F8b6Bc66Dd9c48a6E", "internalTxType": "CALL", "value": "0", "gasUsed": "32063", "gasLimit": "463431", "transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4" }, { "from": "0xB97EF9Ef8734C71904D8002F8b6Bc66Dd9c48a6E", "to": "0x30DFE0469803BcE76F8F62aC24b18d33D3d6FfE6", "internalTxType": "DELEGATECALL", "value": "0", "gasUsed": "31363", "gasLimit": "455542", "transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4" }, { "from": "0x0E663593657B064e1baE76d28625Df5D0eBd4421", "to": "0xB31f66AA3C1e785363F0875A1B74E27b85FD66c7", "internalTxType": "STATICCALL", "value": "0", "gasUsed": "2491", "gasLimit": "430998", "transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4" }, { "from": "0x0E663593657B064e1baE76d28625Df5D0eBd4421", "to": "0xbe882fb094143B59Dc5335D32cEcB711570EbDD4", "internalTxType": "CALL", "value": "0", "gasUsed": "7591", "gasLimit": "427775", "transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4" }, { "from": "0xbe882fb094143B59Dc5335D32cEcB711570EbDD4", "to": 
"0xB31f66AA3C1e785363F0875A1B74E27b85FD66c7", "internalTxType": "CALL", "value": "0", "gasUsed": "6016", "gasLimit": "419746", "transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4" }, { "from": "0x0E663593657B064e1baE76d28625Df5D0eBd4421", "to": "0xB31f66AA3C1e785363F0875A1B74E27b85FD66c7", "internalTxType": "STATICCALL", "value": "0", "gasUsed": "491", "gasLimit": "419670", "transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4" }, { "from": "0x1daC23e41Fc8ce857E86fD8C1AE5b6121C67D96d", "to": "0xB97EF9Ef8734C71904D8002F8b6Bc66Dd9c48a6E", "internalTxType": "STATICCALL", "value": "0", "gasUsed": "3250", "gasLimit": "430493", "transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4" }, { "from": "0xB97EF9Ef8734C71904D8002F8b6Bc66Dd9c48a6E", "to": "0x30DFE0469803BcE76F8F62aC24b18d33D3d6FfE6", "internalTxType": "DELEGATECALL", "value": "0", "gasUsed": "2553", "gasLimit": "423121", "transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4" }, { "from": "0x1daC23e41Fc8ce857E86fD8C1AE5b6121C67D96d", "to": "0xB97EF9Ef8734C71904D8002F8b6Bc66Dd9c48a6E", "internalTxType": "STATICCALL", "value": "0", "gasUsed": "1250", "gasLimit": "426766", "transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4" }, { "from": "0xB97EF9Ef8734C71904D8002F8b6Bc66Dd9c48a6E", "to": "0x30DFE0469803BcE76F8F62aC24b18d33D3d6FfE6", "internalTxType": "DELEGATECALL", "value": "0", "gasUsed": "553", "gasLimit": "419453", "transactionHash": "0xf6a791920652e87ccc91d2f1b20c1505a94452b88f359acdeb5a6fa8205638c4" } ], "blockTimestamp": 1713884373 } } } ``` # Rate Limits (/docs/api-reference/webhook-api/rate-limits) --- title: Rate Limits description: Rate Limits for the Webhooks API icon: Clock --- Rate limiting is managed through a weighted scoring system, known as Compute Units (CUs). 
Each API request consumes a specified number of CUs, determined by the complexity of the request. This system is designed to accommodate basic requests while efficiently handling more computationally intensive operations.

## Rate Limit Tiers

The maximum CUs (rate-limiting score) for a user depends on their subscription level and is delineated in the following table:

| Subscription Level | Per Minute Limit (CUs) | Per Day Limit (CUs) |
| :----------------- | :--------------------- | :------------------ |
| Unauthenticated    | 6,000                  | 1,200,000           |
| Free               | 8,000                  | 2,000,000           |
| Base               | 10,000                 | 3,750,000           |
| Growth             | 14,000                 | 11,200,000          |
| Pro                | 20,000                 | 25,000,000          |

To update your subscription level, use the [AvaCloud Portal](https://app.avacloud.io/).

Note: Rate limits apply collectively across both Webhooks and Data APIs, with usage from each counting toward your total CU limit.

## Rate Limit Categories

The CUs for each category are defined in the following table:

| Weight | CU Value |
| :----- | :------- |
| Free   | 1        |
| Small  | 10       |
| Medium | 20       |
| Large  | 50       |
| XL     | 100      |
| XXL    | 200      |

## Rate Limits for Webhook Endpoints

The CUs for each route are defined in the table below:

| Endpoint                                    | Method | Weight | CU Value |
| :------------------------------------------ | :----- | :----- | :------- |
| `/v1/webhooks`                              | POST   | Medium | 20       |
| `/v1/webhooks`                              | GET    | Small  | 10       |
| `/v1/webhooks/{id}`                         | GET    | Small  | 10       |
| `/v1/webhooks/{id}`                         | DELETE | Medium | 20       |
| `/v1/webhooks/{id}`                         | PATCH  | Medium | 20       |
| `/v1/webhooks:generateOrRotateSharedSecret` | POST   | Medium | 20       |
| `/v1/webhooks:getSharedSecret`              | GET    | Small  | 10       |
| `/v1/webhooks/{id}/addresses`               | PATCH  | Medium | 20       |
| `/v1/webhooks/{id}/addresses`               | DELETE | Medium | 20       |
| `/v1/webhooks/{id}/addresses`               | GET    | Medium | 20       |

All rate limits, weights, and CU values are subject to change.
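As an illustration of how the CU accounting above adds up, the sketch below tracks client-side spend against a per-minute limit before issuing calls. The budget helper is an assumption for illustration; only the CU costs come from the table above:

```javascript
// Hedged sketch: client-side Compute Unit budgeting for webhook calls.
// CU costs mirror the table above; the limiter itself is illustrative.
const CU_COST = {
  'POST /v1/webhooks': 20,
  'GET /v1/webhooks': 10,
  'GET /v1/webhooks/{id}': 10,
  'DELETE /v1/webhooks/{id}': 20,
  'PATCH /v1/webhooks/{id}': 20,
};

function makeCuBudget(perMinuteLimit) {
  let spent = 0;
  return {
    // Returns true and records the spend if the call fits in the budget.
    trySpend(route) {
      const cost = CU_COST[route];
      if (cost === undefined) throw new Error(`Unknown route: ${route}`);
      if (spent + cost > perMinuteLimit) return false;
      spent += cost;
      return true;
    },
    reset() { spent = 0; }, // call at the top of each minute
  };
}
```

A real client would reset the budget on a timer and queue (rather than drop) calls that don't fit in the current minute.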
# Retry mechanism (/docs/api-reference/webhook-api/retries)

---
title: Retry mechanism
description: Retry mechanism for the Webhook API
icon: RotateCcw
---

Our webhook system is designed to ensure you receive all your messages, even if temporary issues prevent immediate delivery. To achieve this, we've implemented a retry mechanism that resends messages if they don't get through on the first attempt. Importantly, **retries are handled on a per-message basis**, meaning each webhook message follows its own independent retry schedule. This ensures that the failure of one message doesn't affect the delivery attempts of others.

This guide will walk you through how the retry mechanism works, the differences between free and paid tier users, and practical steps you can take to ensure your system handles webhooks effectively.

## How it works

When we send a webhook message to your server, we expect a `200` status code within 10 seconds to confirm successful receipt. Your server should return this response immediately and process the message afterward. Processing the message before sending the response can lead to timeouts and trigger unnecessary retries.

* **Attempt 1:** We send the message expecting a response with a `200` status code. If we do not receive a `200` status code within **10 seconds**, the attempt is considered failed. During this window, any non-`2xx` responses are ignored.
* **Attempt 2:** Occurs **10 seconds** after the first attempt, with another 10-second timeout and the same rule for ignoring non-`2xx` responses.
* **Retry Queue After Two Failed Attempts:** If both initial attempts fail, the message enters a **retry queue** with progressively longer intervals between attempts. Each retry attempt still has a 10-second timeout, and non-`2xx` responses are ignored during this window.
The retry schedule is as follows:

| Attempt | Interval |
| ------- | -------- |
| 3       | 1 min    |
| 4       | 5 min    |
| 5       | 10 min   |
| 6       | 30 min   |
| 7       | 2 hours  |
| 8       | 6 hours  |
| 9       | 12 hours |
| 10      | 24 hours |

**Total Retry Duration:** Up to approximately 44.8 hours (about 2,688 minutes) if all retries are exhausted.

**Interval Timing:** Each retry interval starts 10 seconds after the previous attempt is deemed failed. For example, if attempt 2 fails at t=20 seconds, attempt 3 will start at t=90 seconds (20s + 10s + the 1-minute interval). Since retries are per message, multiple messages can be in different stages of their retry schedules simultaneously without interfering with each other.

## Differences Between Free and Paid Tier Users

The behavior of the retry mechanism varies based on your subscription tier:

**Free tier users**

* **Initial attempts limit:** If six messages fail both the first and second attempts, your webhook will be automatically deactivated.
* **Retry queue limit:** Only five messages can enter the retry queue over the lifetime of the subscription. If a sixth message requires retry queuing, or if any message fails all 10 retry attempts, the subscription will be deactivated.

**Paid tier users**

* For paid users, webhooks will be deactivated only if a single message, retried at the 24-hour interval, fails to process successfully.

## What you can do

**Ensure server availability**

* Keep your server running smoothly to receive webhook messages without interruption.
* Implement logging for incoming webhook requests and your server's responses to help identify any issues quickly.

**Design for idempotency**

* Set up your webhook handler so it can safely process the same message multiple times without causing errors or unwanted effects. This way, if retries occur, they won't negatively impact your system.

The webhook retry mechanism is designed to maximize the reliability of message delivery while minimizing the impact of temporary issues.
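The two practices above (fast acknowledgement and idempotent processing) can be sketched as an Express-style handler. `processEvent` and the in-memory `Set` are assumptions for illustration; a production system would deduplicate in durable storage:

```javascript
// Hedged sketch: acknowledge immediately with 200, then deduplicate by
// messageId so a redelivered message is processed at most once.
const seenMessageIds = new Set(); // use durable storage in production

function makeWebhookHandler(processEvent) {
  return (req, res) => {
    res.status(200).json({ received: true }); // ack within the 10s window
    const { messageId } = req.body;
    if (seenMessageIds.has(messageId)) return; // retry of a handled message
    seenMessageIds.add(messageId);
    setImmediate(() => processEvent(req.body)); // process after responding
  };
}
```

Responding before processing keeps slow downstream work from triggering the timeout-based retries described above.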
By understanding how retries work—especially the per-message nature of the system—and following best practices like ensuring server availability and designing for idempotency, you can ensure a seamless experience with webhooks.

## Key Takeaways

* Each message has its own retry schedule, ensuring isolation and reliability.
* Free tier users have hard limits on failed attempts and retry queue entries; paid tier webhooks are deactivated only if a message still fails at the final 24-hour retry.
* Implement logging and idempotency to handle retries effectively and avoid disruptions.

By following this guide, you’ll be well-equipped to manage webhooks and ensure your system remains robust, even in the face of temporary challenges.

# Webhook Signature (/docs/api-reference/webhook-api/webhooks-signature)
---
title: Webhook Signature
description: Webhook Signature for the Webhook API
icon: Signature
---

To make your webhooks extra secure, you can verify that they originated from our side by generating an HMAC SHA-256 hash code using your Authentication Token and request body. You can get the signing secret through the AvaCloud portal or Glacier API.

### Find your signing secret

**Using the portal**\
Navigate to the webhook section and click on Generate Signing Secret. Create the secret and copy it to your code.

**Using Data API**\
The following endpoint retrieves a shared secret:

```bash
curl --location 'https://glacier-api.avax.network/v1/webhooks:getSharedSecret' \
--header 'x-glacier-api-key: '
```

### Validate the signature received

Every outbound request will include an authentication signature in the header. This signature is generated by:

1. **Canonicalizing the JSON Payload**: This means arranging the JSON data in a standard format.
2. **Generating a Hash**: Using the HMAC SHA256 hash algorithm to create a hash of the canonicalized JSON payload.

To verify that the signature is from us, follow these steps:

1. Generate the HMAC SHA256 hash of the received JSON payload.
2. 
Compare this generated hash with the signature in the request header.

This process, known as verifying the digital signature, ensures the authenticity and integrity of the request.

**Example Request Header**

```
Content-Type: application/json;
x-signature: your-hashed-signature
```

### Example Signature Validation Function

This Node.js code sets up an HTTP server using the Express framework. It listens for POST requests sent to the `/callback` endpoint. Upon receiving a request, it validates the signature of the request against a predefined `signingSecret`. If the signature is valid, it logs `match`; otherwise, it logs `no match`. The server responds with a JSON object indicating that the request was received.

### Node (JavaScript)

```javascript
const express = require('express');
const crypto = require('crypto');
const { canonicalize } = require('json-canonicalize');
const app = express();
app.use(express.json({limit: '50mb'}));

const signingSecret = 'c13cc017c4ed63bcc842c8edfb49df37512280326a32826de3b885340b8a3d53';

function isValidSignature(signingSecret, signature, payload) {
  const canonicalizedPayload = canonicalize(payload);
  const hmac = crypto.createHmac('sha256', Buffer.from(signingSecret, 'hex'));
  const digest = hmac.update(canonicalizedPayload).digest('base64');
  console.log("signature: ", signature);
  console.log("digest", digest);
  return signature === digest;
}

app.post('/callback', express.json({ type: 'application/json' }), (request, response) => {
  const { body, headers } = request;
  const signature = headers['x-signature'];

  // Handle the event
  switch (body.eventType) {
    case 'address_activity':
      console.log("*** Address_activity ***");
      console.log(body);
      if (isValidSignature(signingSecret, signature, body)) {
        console.log("match");
      } else {
        console.log("no match");
      }
      break; // ...
handle other event types default: console.log(`Unhandled event type ${body}`); } // Return a response to acknowledge receipt of the event response.json({ received: true }); }); const PORT = 8000; app.listen(PORT, () => console.log(`Running on port ${PORT}`)); ``` ### Python (Flask) ```python from flask import Flask, request, jsonify import hmac import hashlib import base64 import json app = Flask(__name__) SIGNING_SECRET = 'c13cc017c4ed63bcc842c8edfb49df37512280326a32826de3b885340b8a3d53' def canonicalize(payload): """Function to canonicalize JSON payload""" # In Python, canonicalization can be achieved by using sort_keys=True in json.dumps return json.dumps(payload, separators=(',', ':'), sort_keys=True) def is_valid_signature(signing_secret, signature, payload): canonicalized_payload = canonicalize(payload) hmac_obj = hmac.new(bytes.fromhex(signing_secret), canonicalized_payload.encode('utf-8'), hashlib.sha256) digest = base64.b64encode(hmac_obj.digest()).decode('utf-8') print("signature:", signature) print("digest:", digest) return signature == digest @app.route('/callback', methods=['POST']) def callback_handler(): body = request.json signature = request.headers.get('x-signature') # Handle the event if body.get('eventType') == 'address_activity': print("*** Address_activity ***") print(body) if is_valid_signature(SIGNING_SECRET, signature, body): print("match") else: print("no match") else: print(f"Unhandled event type {body}") # Return a response to acknowledge receipt of the event return jsonify({"received": True}) if __name__ == '__main__': PORT = 8000 print(f"Running on port {PORT}") app.run(port=PORT) ``` ### Go (net/http) ```go package main import ( "crypto/hmac" "crypto/sha256" "encoding/base64" "encoding/hex" "encoding/json" "fmt" "net/http" "sort" "strings" ) const signingSecret = "c13cc017c4ed63bcc842c8edfb49df37512280326a32826de3b885340b8a3d53" // Canonicalize function sorts the JSON keys and produces a canonicalized string func Canonicalize(payload 
map[string]interface{}) (string, error) { var sb strings.Builder var keys []string for k := range payload { keys = append(keys, k) } sort.Strings(keys) sb.WriteString("{") for i, k := range keys { v, err := json.Marshal(payload[k]) if err != nil { return "", err } sb.WriteString(fmt.Sprintf("\"%s\":%s", k, v)) if i < len(keys)-1 { sb.WriteString(",") } } sb.WriteString("}") return sb.String(), nil } func isValidSignature(signingSecret, signature string, payload map[string]interface{}) bool { canonicalizedPayload, err := Canonicalize(payload) if err != nil { fmt.Println("Error canonicalizing payload:", err) return false } key, err := hex.DecodeString(signingSecret) if err != nil { fmt.Println("Error decoding signing secret:", err) return false } h := hmac.New(sha256.New, key) h.Write([]byte(canonicalizedPayload)) digest := h.Sum(nil) encodedDigest := base64.StdEncoding.EncodeToString(digest) fmt.Println("signature:", signature) fmt.Println("digest:", encodedDigest) return signature == encodedDigest } func callbackHandler(w http.ResponseWriter, r *http.Request) { var body map[string]interface{} err := json.NewDecoder(r.Body).Decode(&body) if err != nil { fmt.Println("Error decoding body:", err) http.Error(w, "Invalid request body", http.StatusBadRequest) return } signature := r.Header.Get("x-signature") eventType, ok := body["eventType"].(string) if !ok { fmt.Println("Error parsing eventType") http.Error(w, "Invalid event type", http.StatusBadRequest) return } switch eventType { case "address_activity": fmt.Println("*** Address_activity ***") fmt.Println(body) if isValidSignature(signingSecret, signature, body) { fmt.Println("match") } else { fmt.Println("no match") } default: fmt.Printf("Unhandled event type %s\n", eventType) } w.Header().Set("Content-Type", "application/json") json.NewEncoder(w).Encode(map[string]bool{"received": true}) } func main() { http.HandleFunc("/callback", callbackHandler) fmt.Println("Running on port 8000") http.ListenAndServe(":8000", 
nil)
}
```

### Rust (actix-web)

```rust
use actix_web::{web, App, HttpServer, HttpRequest, HttpResponse, Responder, post};
use serde::Deserialize;
use hmac::{Hmac, Mac};
use sha2::Sha256;
use base64::encode;
use std::collections::BTreeMap;

type HmacSha256 = Hmac<Sha256>;

const SIGNING_SECRET: &str = "c13cc017c4ed63bcc842c8edfb49df37512280326a32826de3b885340b8a3d53";

#[derive(Deserialize)]
struct EventPayload {
    eventType: String,
    // Add other fields as necessary
}

// Canonicalize the JSON payload by sorting keys
fn canonicalize(payload: &BTreeMap<String, serde_json::Value>) -> String {
    serde_json::to_string(payload).unwrap()
}

fn is_valid_signature(signing_secret: &str, signature: &str, payload: &BTreeMap<String, serde_json::Value>) -> bool {
    let canonicalized_payload = canonicalize(payload);
    // Decode the hex-encoded secret so the key matches the other examples
    let key = hex::decode(signing_secret).expect("signing secret must be hex");
    let mut mac = HmacSha256::new_from_slice(&key)
        .expect("HMAC can take key of any size");
    mac.update(canonicalized_payload.as_bytes());
    let result = mac.finalize();
    let digest = encode(result.into_bytes());
    println!("signature: {}", signature);
    println!("digest: {}", digest);
    digest == signature
}

#[post("/callback")]
async fn callback(body: web::Json<BTreeMap<String, serde_json::Value>>, req: HttpRequest) -> impl Responder {
    let signature = req.headers().get("x-signature").unwrap().to_str().unwrap();
    if let Some(event_type) = body.get("eventType").and_then(|v| v.as_str()) {
        match event_type {
            "address_activity" => {
                println!("*** Address_activity ***");
                println!("{:?}", body);
                if is_valid_signature(SIGNING_SECRET, signature, &body) {
                    println!("match");
                } else {
                    println!("no match");
                }
            }
            _ => {
                println!("Unhandled event type: {}", event_type);
            }
        }
    } else {
        println!("Error parsing eventType");
        return HttpResponse::BadRequest().finish();
    }
    HttpResponse::Ok().json(serde_json::json!({ "received": true }))
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| {
        App::new()
            .service(callback)
    })
    .bind("0.0.0.0:8000")?
.run()
    .await
}
```

### TypeScript (ChainKit SDK)

```typescript
import { isValidSignature } from '@avalanche-sdk/chainkit/utils';
import express from 'express';

const app = express();
app.use(express.json());

const signingSecret = 'your-signing-secret'; // Replace with your signing secret

app.post('/webhook', (req, res) => {
  const signature = req.headers['x-signature'];
  const payload = req.body;
  if (isValidSignature(signingSecret, signature, payload)) {
    console.log('Valid signature');
    // Process the request
  } else {
    console.log('Invalid signature');
  }
  res.json({ received: true });
});

app.listen(8000, () => console.log('Server running on port 8000'));
```

# WebSockets vs Webhooks (/docs/api-reference/webhook-api/wss-vs-webhooks)
---
title: WebSockets vs Webhooks
description: WebSockets vs Webhooks for the Webhook API
icon: GitCompare
---

Reacting to real-time events from Avalanche smart contracts allows for immediate responses and automation, improving user experience and streamlining application functionality. It ensures that applications stay synchronized with the blockchain state.

There are two primary methods for receiving these on-chain events:

* **WebSockets**, using libraries like Ethers.js or Viem
* **Webhooks**, which send structured event data directly to your app via HTTP POST.

Both approaches enable real-time interactions, but they differ drastically in their reliability, ease of implementation, and long-term maintainability. In this post, we break down why Webhooks are the better, more resilient choice for most Avalanche developers.

## Architecture Overview

The diagram below compares the two models side-by-side:

**WebSockets**

* The app connects to the Avalanche RPC API over WSS to receive raw log data.
* It must decode logs, manage connection state, and store data locally.
* On disconnection, it must re-sync via an external Data API or using standard `eth_*` RPC calls (e.g., `eth_getLogs`, `eth_getBlockByNumber`).
Important: WSS is a transport protocol—not real-time by itself. Real-time capabilities come from the availability of `eth_subscribe`, which requires node support.

**Webhooks**

* The app exposes a simple HTTP endpoint.
* Decoded event data is pushed directly via POST, including token metadata.
* Built-in retries ensure reliable delivery, even during downtime.

Important: Webhooks have a 48-hour retry window. If your app is down for longer, you still need a re-sync strategy using `eth_*` calls to recover older missed events.

***

## Using WebSockets: Real-time but high maintenance

WebSockets allow you to subscribe to events using methods like `eth_subscribe`. These subscriptions notify your app in real-time whenever new logs, blocks, or pending transactions meet your criteria.

```javascript
import { createPublicClient, webSocket, formatUnits } from 'viem';
import { avalancheFuji } from 'viem/chains';
import { usdcAbi } from './usdc-abi.mjs'; // Ensure this includes the Transfer event

// Your wallet address (case-insensitive comparison)
const MY_WALLET = '0x8ae323046633A07FB162043f28Cea39FFc23B50A'.toLowerCase();

async function monitorTransfers() {
  try {
    // USDC.e contract address on Avalanche Fuji
    const usdcAddress = '0x5425890298aed601595a70AB815c96711a31Bc65';

    // Set up the WebSocket client for Avalanche Fuji
    const client = createPublicClient({
      chain: avalancheFuji,
      transport: webSocket('wss://api.avax-test.network/ext/bc/C/ws'),
    });

    // Watch for Transfer events on the USDC contract
    client.watchContractEvent({
      address: usdcAddress,
      abi: usdcAbi,
      eventName: 'Transfer',
      onLogs: (logs) => {
        logs.forEach((log) => {
          const { from, to, value } = log.args;
          const fromLower = from.toLowerCase();
          // Filter for transactions where 'from' matches your wallet
          if (fromLower === MY_WALLET) {
            console.log('*******');
            console.log('Transfer from my wallet:');
            console.log(`From: ${from}`);
            console.log(`To: ${to}`);
            console.log(`Value: ${formatUnits(value, 6)} USDC`); // USDC has 6 decimals
            console.log(`Transaction Hash: ${log.transactionHash}`);
          }
        });
      },
      onError: (error) => {
        console.error('Event watching error:', error.message);
      },
    });

    console.log('Monitoring USDC Transfer events on Fuji...');
  } catch (error) {
    console.error('Error setting up transfer monitoring:', error.message);
  }
}

// Start monitoring
monitorTransfers();
```

The downside? If your connection drops, you lose everything in between. You’ll need to:

* Set up a database to track the latest processed block and log index.
* Handle dropped connections and reconnections by hand, which is challenging to get right.
* Use `eth_getLogs` to re-fetch missed logs.
* Decode and process raw logs yourself to rebuild app state.

This requires extra infrastructure, custom recovery logic, and significant maintenance overhead.

***

## Webhooks: Resilient and developer-friendly

Webhooks eliminate the complexity of managing live connections. Instead, you register an HTTP endpoint to receive blockchain event payloads when they occur.

Webhook payload example:

```json
{
  "eventType": "address_activity",
  "event": {
    "transaction": {
      "txHash": "0x1d8f...",
      "from": "0x3D3B...",
      "to": "0x9702...",
      "erc20Transfers": [
        {
          "valueWithDecimals": "110.56",
          "erc20Token": {
            "symbol": "USDt",
            "decimals": 6
          }
        }
      ]
    }
  }
}
```

You get everything you need:

* Decoded event data
* Token metadata (name, symbol, decimals)
* Full transaction context
* No extra calls. No parsing. No manual re-sync logic.
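As an illustration of how little client-side work the enriched payload requires, the sketch below (a hypothetical helper, not part of the API) reduces a payload shaped like the example above to a flat list of transfer summaries:

```javascript
// Pull a summary out of an enriched webhook payload. No ABI decoding,
// token lookups, or extra RPC calls are needed: the delivery already
// carries decoded values and token metadata.
function summarizeTransfers(payload) {
  const tx = (payload.event && payload.event.transaction) || {};
  return (tx.erc20Transfers || []).map((t) => ({
    token: t.erc20Token.symbol,
    amount: t.valueWithDecimals, // already scaled by the token's decimals
    txHash: tx.txHash,
  }));
}
```

Compare this with the WebSocket path, where the same summary requires decoding raw logs and fetching token metadata yourself.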
***

## Key Advantages of Webhooks

* **Reliable delivery with zero effort:** Built-in retries ensure no missed events during downtime
* **Instant enrichment:** Payloads contain decoded logs, token metadata, and transaction context
* **No extra infrastructure:** No WebSocket connections, no DB, no external APIs
* **Faster development:** Go from idea to production with fewer moving parts
* **Lower operational cost:** Less compute, fewer network calls, smaller surface area to manage

If we compare using a table:

| Feature | WebSockets (Ethers.js/Viem) | Webhooks |
| :----------------------------- | :------------------------------------------------- | :--------------------------------------------------- |
| **Interruption Handling** | Manual; Requires complex custom logic | Automatic; Built-in queues & retries |
| **Data Recovery** | Requires DB + External API for re-sync | Handled by provider; No re-sync logic needed |
| **Dev Complexity** | High; Error-prone custom resilience code | Low; Focus on processing incoming POST data |
| **Infrastructure** | WSS connection + DB + Potential Data API cost | Application API endpoint |
| **Data Integrity** | Risk of gaps if recovery logic fails | High; Ensures eventual delivery |
| **Payload** | Often raw; Requires extra calls for context | Typically enriched and ready-to-use |
| **Multiple addresses** | Manual filtering or separate listeners per address | Supports direct configuration for multiple addresses |
| **Listen to wallet addresses** | Requires manual block/transaction filtering | Can monitor wallet addresses and smart contracts |

## Summary

* WebSockets offer real-time access to Avalanche data, but come with complexity: raw logs, reconnect logic, re-sync handling, and decoding responsibilities.
* Webhooks flip the model: the data comes to you, pre-processed and reliable. You focus on your product logic instead of infrastructure.
* If you want to ship faster, operate more reliably, and reduce overhead, Webhooks are the better path forward for Avalanche event monitoring.

# Background and Requirements (/docs/avalanche-l1s/custom-precompiles/background-requirements)
---
title: Background and Requirements
description: Learn about the background and requirements for customizing the Ethereum Virtual Machine.
---

This is a brief overview of what this tutorial will cover:

- Write a Solidity interface
- Generate the precompile template
- Implement the precompile functions in Golang
- Write and run tests

Stateful precompiles are [alpha software](https://en.wikipedia.org/wiki/Software_release_life_cycle#Alpha). Build at your own risk.

In this tutorial, we used a branch based on Subnet-EVM version `v0.5.2`. You can find the branch [here](https://github.com/ava-labs/subnet-evm/tree/helloworld-official-tutorial-v2). The code in this branch is the same as Subnet-EVM except for the `precompile/contracts/helloworld` directory. The directory contains the code for the `HelloWorld` precompile. We will be using this precompile as an example to learn how to write a stateful precompile.

The code in this branch can become outdated. You should always use the latest version of Subnet-EVM when you develop your own precompile.

## Precompile-EVM

Subnet-EVM precompiles can be registered from an external repo. This allows developers to build their precompiles without maintaining a fork of Subnet-EVM. The precompiles are then registered in Subnet-EVM at build time.

The difference between using Subnet-EVM and Precompile-EVM is that with Subnet-EVM you can change EVM internals to interact with your precompiles, such as changing the fee structure, adding new opcodes, or changing how blocks are built. With Precompile-EVM you can only add new stateful precompiles that interact with the StateDB. Precompiles built with Precompile-EVM are still very powerful because they can directly access and modify the state.
There is a template repository called [Precompile-EVM](https://github.com/ava-labs/precompile-evm) that shows how to build a precompile this way. Both Subnet-EVM and Precompile-EVM share similar directory structures and common code. You can reference the Precompile-EVM PR that adds the Hello World precompile [here](https://github.com/ava-labs/precompile-evm/pull/12).

## Requirements

This tutorial assumes familiarity with Golang and JavaScript. Additionally, users should be deeply familiar with the EVM in order to understand its invariants, since adding a stateful precompile modifies the EVM itself. Here are some recommended resources to learn the ins and outs of the EVM:

- [The Ethereum Virtual Machine](https://github.com/ethereumbook/ethereumbook/blob/develop/13evm.asciidoc)
- [Precompiles in Solidity](https://medium.com/@rbkhmrcr/precompiles-solidity-e5d29bd428c4)
- [Deconstructing a Smart Contract](https://blog.openzeppelin.com/deconstructing-a-solidity-contract-part-i-introduction-832efd2d7737/)
- [Layout of State Variables in Storage](https://docs.soliditylang.org/en/v0.8.10/internals/layout_in_storage.html)
- [Layout in Memory](https://docs.soliditylang.org/en/v0.8.10/internals/layout_in_memory.html)
- [Layout of Call Data](https://docs.soliditylang.org/en/v0.8.10/internals/layout_in_calldata.html)
- [Contract ABI Specification](https://docs.soliditylang.org/en/v0.8.10/abi-spec.html)
- [Customizing the EVM with Stateful Precompiles](https://medium.com/avalancheavax/customizing-the-evm-with-stateful-precompiles-f44a34f39efd)

Please install the following before getting started.

First, install the latest version of Go. Follow the instructions [here](https://go.dev/doc/install). You can verify by running `go version`.

Set the `$GOPATH` environment variable properly for Go to look for Go Workspaces. Please read [this](https://go.dev/doc/gopath_code) for details. You can verify by running `echo $GOPATH`.
See [here](https://github.com/golang/go/wiki/SettingGOPATH) for instructions on setting the GOPATH based on system configurations.

As a few things will be installed into `$GOPATH/bin`, please make sure that `$GOPATH/bin` is in your `$PATH`; otherwise, you may get an error running the commands below. To do that, run the command:

`export PATH=$PATH:$GOROOT/bin:$GOPATH/bin`

Download the following prerequisites into your `$GOPATH`:

- Git clone the repository (Subnet-EVM or Precompile-EVM)
- Git clone the [AvalancheGo](https://github.com/ava-labs/avalanchego) repository
- Install the [Avalanche Network Runner](/docs/tooling/avalanche-cli)
- Install [solc](https://github.com/ethereum/solc-js#usage-on-the-command-line)
- Install [Node.js and NPM](https://nodejs.org/en/download)

For easy copy-paste, use the commands below:

```bash
cd $GOPATH
mkdir -p src/github.com/ava-labs
cd src/github.com/ava-labs
```

Clone the repository:

```bash
git clone git@github.com:ava-labs/subnet-evm.git
```

Then run the following commands:

```bash
git clone git@github.com:ava-labs/avalanchego.git
curl -sSfL https://raw.githubusercontent.com/ava-labs/avalanche-network-runner/main/scripts/install.sh | sh -s
npm install -g solc
```

```bash
git clone git@github.com:ava-labs/precompile-evm.git
```

Alternatively, you can use it as a template repository on GitHub. Then run the following commands:

```bash
git clone git@github.com:ava-labs/avalanchego.git
curl -sSfL https://raw.githubusercontent.com/ava-labs/avalanche-network-runner/main/scripts/install.sh | sh -s
npm install -g solc
```

## Complete Code

You can inspect the example pull requests for the complete code.
[Subnet-EVM Hello World Pull Request](https://github.com/ava-labs/subnet-evm/pull/565/)

[Precompile-EVM Hello World Pull Request](https://github.com/ava-labs/precompile-evm/pull/12/)

For a full-fledged example, you can also check out the [Reward Manager Precompile](https://github.com/ava-labs/subnet-evm/blob/helloworld-official-tutorial-v2/precompile/contracts/rewardmanager/).

# Generating Your Precompile (/docs/avalanche-l1s/custom-precompiles/create-precompile)
---
title: Generating Your Precompile
description: In this section, we will go over the process for automatically generating the template code which you can configure accordingly for your stateful precompile.
---

First, we must create the Solidity interface that we want our precompile to implement. This will be the HelloWorld interface. It will have two simple functions, `sayHello()` and `setGreeting()`, and an event `GreetingChanged`. These two functions demonstrate, respectively, getting and setting a value stored in the precompile's state space.

The `sayHello()` function is a `view` function, meaning it does not modify the state of the precompile and returns a string result. The `setGreeting()` function is a state-changing function, meaning it modifies the state of the precompile. The `IHelloWorld` interface inherits the `IAllowList` interface to use the allow list functionality.

For this tutorial, we will be working in a new branch in the Subnet-EVM/Precompile-EVM repo:

```bash
cd $GOPATH/src/github.com/ava-labs/subnet-evm
```

We will start off in this directory `./contracts/`:

```bash
cd contracts/
```

Create a new file called `IHelloWorld.sol` and copy and paste the below code:

```solidity title="contracts/IHelloWorld.sol"
// (c) 2022-2023, Ava Labs, Inc. All rights reserved.
// See the file LICENSE for licensing terms.
// SPDX-License-Identifier: MIT pragma solidity >=0.8.0; import "./IAllowList.sol"; interface IHelloWorld is IAllowList { event GreetingChanged( address indexed sender, string oldGreeting, string newGreeting ); // sayHello returns the stored greeting string function sayHello() external view returns (string calldata result); // setGreeting stores the greeting string function setGreeting(string calldata response) external; } ``` Now we have an interface that our precompile can implement! Let's create an [ABI](https://docs.soliditylang.org/en/v0.8.13/abi-spec.html#contract-abi-specification) of our Solidity interface. In the same directory, let's run: ```bash solc --abi ./contracts/interfaces/IHelloWorld.sol -o ./abis ``` This generates the ABI code under `./abis/contracts_interfaces_IHelloWorld_sol_IHelloWorld.abi`. ``` [ { "anonymous": false, "inputs": [ { "indexed": true, "internalType": "address", "name": "sender", "type": "address" }, { "indexed": false, "internalType": "string", "name": "oldGreeting", "type": "string" }, { "indexed": false, "internalType": "string", "name": "newGreeting", "type": "string" } ], "name": "GreetingChanged", "type": "event" }, { "anonymous": false, "inputs": [ { "indexed": true, "internalType": "uint256", "name": "role", "type": "uint256" }, { "indexed": true, "internalType": "address", "name": "account", "type": "address" }, { "indexed": true, "internalType": "address", "name": "sender", "type": "address" }, { "indexed": false, "internalType": "uint256", "name": "oldRole", "type": "uint256" } ], "name": "RoleSet", "type": "event" }, { "inputs": [ { "internalType": "address", "name": "addr", "type": "address" } ], "name": "readAllowList", "outputs": [ { "internalType": "uint256", "name": "role", "type": "uint256" } ], "stateMutability": "view", "type": "function" }, { "inputs": [], "name": "sayHello", "outputs": [ { "internalType": "string", "name": "result", "type": "string" } ], "stateMutability": "view", "type": "function" }, { 
"inputs": [ { "internalType": "address", "name": "addr", "type": "address" } ], "name": "setAdmin", "outputs": [], "stateMutability": "nonpayable", "type": "function" }, { "inputs": [ { "internalType": "address", "name": "addr", "type": "address" } ], "name": "setEnabled", "outputs": [], "stateMutability": "nonpayable", "type": "function" }, { "inputs": [ { "internalType": "string", "name": "response", "type": "string" } ], "name": "setGreeting", "outputs": [], "stateMutability": "nonpayable", "type": "function" }, { "inputs": [ { "internalType": "address", "name": "addr", "type": "address" } ], "name": "setManager", "outputs": [], "stateMutability": "nonpayable", "type": "function" }, { "inputs": [ { "internalType": "address", "name": "addr", "type": "address" } ], "name": "setNone", "outputs": [], "stateMutability": "nonpayable", "type": "function" } ] ``` As you can see the ABI also contains the `IAllowList` interface functions. This is because the `IHelloWorld` interface inherits from the `IAllowList` interface. Note: The ABI must have named outputs in order to generate the precompile template. Now that we have an ABI for the precompile gen tool to interact with, we can run the following command to generate our HelloWorld precompile files! Let's go back to the root of the repository and run the PrecompileGen script helper: ```bash cd .. ``` Both of these Subnet-EVM and Precompile-EVM have the same `generate_precompile.sh` script. The one in Precompile-EVM installs the script from Subnet-EVM and runs it. ```bash ./scripts/generate_precompile.sh --help # output Using branch: precompile-tutorial NAME: precompilegen - subnet-evm precompile generator tool USAGE: main [global options] command [command options] [arguments...] 
VERSION: 1.10.26-stable COMMANDS: help, h Shows a list of commands or help for one command GLOBAL OPTIONS: --abi value Path to the contract ABI json to generate, - for STDIN --out value Output folder for the generated precompile files, - for STDOUT (default = ./precompile/contracts/{pkg}). Test files won't be generated if STDOUT is used --pkg value Go package name to generate the precompile into (default = {type}) --type value Struct name for the precompile (default = {abi file name}) MISC --help, -h (default: false) show help --version, -v (default: false) print the version COPYRIGHT: Copyright 2013-2022 The go-ethereum Authors ``` Now let's generate the precompile template files! ```bash cd $GOPATH/src/github.com/ava-labs/precompile-evm ``` We will start off in this directory `./contracts/`: ```bash cd contracts/ ``` For Precompile-EVM, interfaces and other contracts from Subnet-EVM are accessible through the `@avalabs/subnet-evm-contracts` package. This is already added to the `package.json` file. You can install it by running `npm install`. In order to import the `IAllowList` interface, you can use the following import statement: ```solidity import "@avalabs/subnet-evm-contracts/contracts/interfaces/IAllowList.sol"; ``` The full file looks like this: ```solidity // SPDX-License-Identifier: MIT pragma solidity >=0.8.0; import "@avalabs/subnet-evm-contracts/contracts/interfaces/IAllowList.sol"; interface IHelloWorld is IAllowList { event GreetingChanged( address indexed sender, string oldGreeting, string newGreeting ); // sayHello returns the stored greeting string function sayHello() external view returns (string calldata result); // setGreeting stores the greeting string function setGreeting(string calldata response) external; } ``` Now we have an interface that our precompile can implement! Let's create an ABI of our Solidity interface. In Precompile-EVM, we import contracts from the `@avalabs/subnet-evm-contracts` package.
In order to generate the ABI in Precompile-EVM, we need to include the `node_modules` folder so solc can find the imported contracts, using the following flags:

- `--abi`: ABI specification of the contracts.
- `--base-path path`: Use the given path as the root of the source tree instead of the root of the filesystem.
- `--include-path path`: Make an additional source directory available to the default import callback. Use this option if you want to import contracts whose location is not fixed in relation to your main source tree; for example, third-party libraries installed using a package manager. Can be used multiple times. Can only be used if the base path has a non-empty value.
- `--output-dir path`: If given, creates one file per output component and contract/file at the specified directory.
- `--overwrite`: Overwrite existing files (used together with `--output-dir`).

```bash
solc --abi ./contracts/interfaces/IHelloWorld.sol -o ./abis --base-path . --include-path ./node_modules
```

This generates the ABI code under `./abis/contracts_interfaces_IHelloWorld_sol_IHelloWorld.abi`.
``` [ { "anonymous": false, "inputs": [ { "indexed": true, "internalType": "address", "name": "sender", "type": "address" }, { "indexed": false, "internalType": "string", "name": "oldGreeting", "type": "string" }, { "indexed": false, "internalType": "string", "name": "newGreeting", "type": "string" } ], "name": "GreetingChanged", "type": "event" }, { "anonymous": false, "inputs": [ { "indexed": true, "internalType": "uint256", "name": "role", "type": "uint256" }, { "indexed": true, "internalType": "address", "name": "account", "type": "address" }, { "indexed": true, "internalType": "address", "name": "sender", "type": "address" }, { "indexed": false, "internalType": "uint256", "name": "oldRole", "type": "uint256" } ], "name": "RoleSet", "type": "event" }, { "inputs": [ { "internalType": "address", "name": "addr", "type": "address" } ], "name": "readAllowList", "outputs": [ { "internalType": "uint256", "name": "role", "type": "uint256" } ], "stateMutability": "view", "type": "function" }, { "inputs": [], "name": "sayHello", "outputs": [ { "internalType": "string", "name": "result", "type": "string" } ], "stateMutability": "view", "type": "function" }, { "inputs": [ { "internalType": "address", "name": "addr", "type": "address" } ], "name": "setAdmin", "outputs": [], "stateMutability": "nonpayable", "type": "function" }, { "inputs": [ { "internalType": "address", "name": "addr", "type": "address" } ], "name": "setEnabled", "outputs": [], "stateMutability": "nonpayable", "type": "function" }, { "inputs": [ { "internalType": "string", "name": "response", "type": "string" } ], "name": "setGreeting", "outputs": [], "stateMutability": "nonpayable", "type": "function" }, { "inputs": [ { "internalType": "address", "name": "addr", "type": "address" } ], "name": "setManager", "outputs": [], "stateMutability": "nonpayable", "type": "function" }, { "inputs": [ { "internalType": "address", "name": "addr", "type": "address" } ], "name": "setNone", "outputs": [], 
"stateMutability": "nonpayable", "type": "function" } ] ``` As you can see the ABI also contains the `IAllowList` interface functions. This is because the `IHelloWorld` interface inherits from the `IAllowList` interface. Note: The ABI must have named outputs in order to generate the precompile template. Now that we have an ABI for the precompile gen tool to interact with, we can run the following command to generate our HelloWorld precompile files! Let's go back to the root of the repository and run the PrecompileGen script helper: ```bash cd .. ``` Both of these Subnet-EVM and Precompile-EVM have the same generate_precompile.sh script. The one in Precompile-EVM installs the script from Subnet-EVM and runs it. ```bash ./scripts/generate_precompile.sh --help # output Using branch: precompile-tutorial NAME: precompilegen - subnet-evm precompile generator tool USAGE: main [global options] command [command options] [arguments...] VERSION: 1.10.26-stable COMMANDS: help, h Shows a list of commands or help for one command GLOBAL OPTIONS: --abi value Path to the contract ABI json to generate, - for STDIN --out value Output folder for the generated precompile files, - for STDOUT (default = ./precompile/contracts/{pkg}). Test files won't be generated if STDOUT is used --pkg value Go package name to generate the precompile into (default = {type}) --type value Struct name for the precompile (default = {abi file name}) MISC --help, -h (default: false) show help --version, -v (default: false) print the version COPYRIGHT: Copyright 2013-2022 The go-ethereum Authors ``` Now let's generate the precompile template files! In Subnet-EVM precompile implementations reside under the `./precompile/contracts` directory. Let's generate our precompile template in the `./precompile/contracts/helloworld` directory, where `helloworld` is the name of the Go package we want to generate the precompile into. 
```bash
./scripts/generate_precompile.sh --abi ./contracts/abis/contracts_interfaces_IHelloWorld_sol_IHelloWorld.abi --type HelloWorld --pkg helloworld
```

This generates the precompile template files `contract.go`, `contract.abi`, `config.go`, `module.go`, `event.go`, and `README.md`. `README.md` explains general guidelines for precompile development. You should carefully read this file before modifying the precompile template.

```
There are some must-be-done changes waiting in the generated file.
Each area requiring you to add your code is marked with CUSTOM CODE to make them easy to find and modify.
Additionally there are other files you need to edit to activate your precompile.
These areas are highlighted with comments "ADD YOUR PRECOMPILE HERE".
For testing take a look at other precompile tests in contract_test.go and config_test.go in other precompile folders.

General guidelines for precompile development:
1- Set a suitable config key in generated module.go. E.g: "yourPrecompileConfig"
2- Read the comment and set a suitable contract address in generated module.go. E.g:
ContractAddress = common.HexToAddress("ASUITABLEHEXADDRESS")
3- It is recommended to only modify code in the highlighted areas marked with "CUSTOM CODE STARTS HERE". Typically, custom codes are required in only those areas.
Modifying code outside of these areas should be done with caution and with a deep understanding of how these changes may impact the EVM.
4- If you have any event defined in your precompile, review the generated event.go file and set your event gas costs. You should also emit your event in your function in the contract.go file.
5- Set gas costs in generated contract.go
6- Force import your precompile package in precompile/registry/registry.go
7- Add your config unit tests under generated package config_test.go
8- Add your contract unit tests under generated package contract_test.go
9- Additionally you can add a full-fledged VM test for your precompile under plugin/vm/vm_test.go. See existing precompile tests for examples.
10- Add your solidity interface and test contract to contracts/contracts
11- Write solidity contract tests for your precompile in contracts/contracts/test
12- Write TypeScript DS-Test counterparts for your solidity tests in contracts/test
13- Create your genesis with your precompile enabled in tests/precompile/genesis/
14- Create e2e test for your solidity test in tests/precompile/solidity/suites.go
15- Run your e2e precompile Solidity tests with './scripts/run_ginkgo.sh`
```

Let's follow these steps and create our HelloWorld precompile.

For Precompile-EVM we don't need to put files under a deep directory structure. We can generate the precompile template under its own directory via the `--out ./helloworld` flag:

```bash
./scripts/generate_precompile.sh --abi ./contracts/abis/contracts_interfaces_IHelloWorld_sol_IHelloWorld.abi --type HelloWorld --pkg helloworld --out ./helloworld
```

This generates the precompile template files `contract.go`, `contract.abi`, `config.go`, `module.go`, `event.go`, and `README.md`. `README.md` explains general guidelines for precompile development. You should carefully read this file before modifying the precompile template.

```
There are some must-be-done changes waiting in the generated file.
Each area requiring you to add your code is marked with CUSTOM CODE to make them easy to find and modify.
Additionally there are other files you need to edit to activate your precompile.
These areas are highlighted with comments "ADD YOUR PRECOMPILE HERE".
For testing take a look at other precompile tests in contract_test.go and config_test.go in other precompile folders. General guidelines for precompile development: 1- Set a suitable config key in generated module.go. E.g: "yourPrecompileConfig" 2- Read the comment and set a suitable contract address in generated module.go. E.g: ContractAddress = common.HexToAddress("ASUITABLEHEXADDRESS") 3- It is recommended to only modify code in the highlighted areas marked with "CUSTOM CODE STARTS HERE". Typically, custom codes are required in only those areas. Modifying code outside of these areas should be done with caution and with a deep understanding of how these changes may impact the EVM. 4- If you have any event defined in your precompile, review the generated event.go file and set your event gas costs. You should also emit your event in your function in the contract.go file. 5- Set gas costs in generated contract.go 6- Force import your precompile package in precompile/registry/registry.go 7- Add your config unit tests under generated package config_test.go 8- Add your contract unit tests under generated package contract_test.go 9- Additionally you can add a full-fledged VM test for your precompile under plugin/vm/vm_test.go. See existing precompile tests for examples. 10- Add your solidity interface and test contract to contracts/contracts 11- Write solidity contract tests for your precompile in contracts/contracts/test 12- Write TypeScript DS-Test counterparts for your solidity tests in contracts/test 13- Create your genesis with your precompile enabled in tests/precompile/genesis/ 14- Create e2e test for your solidity test in tests/precompile/solidity/suites.go 15- Run your e2e precompile Solidity tests with './scripts/run_ginkgo.sh` ``` Let's follow these steps and create our HelloWorld precompile! 
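Step 6 in the generated guidelines ("Force import your precompile package in precompile/registry/registry.go") relies on Go's init-time registration pattern: importing a package for its side effects causes its `init()` function to run and register the module. The following self-contained sketch illustrates the idea; the `module` struct, `registerModule` function, and `registry` map are simplified stand-ins, not the actual Subnet-EVM API:

```go
package main

import "fmt"

// module is a minimal stand-in for a precompile module; the real
// subnet-evm module carries a config key, an address, and a configurator.
type module struct {
	ConfigKey string
	Address   string
}

// registry maps config keys to registered modules.
var registry = map[string]module{}

// registerModule is called from each precompile package's init()
// function; a blank import (`_ "path/to/helloworld"`) of the package
// is enough to trigger the registration.
func registerModule(m module) error {
	if _, ok := registry[m.ConfigKey]; ok {
		return fmt.Errorf("module %q already registered", m.ConfigKey)
	}
	registry[m.ConfigKey] = m
	return nil
}

func init() {
	// In subnet-evm this registration lives in the precompile package
	// itself and runs when registry.go force-imports that package.
	if err := registerModule(module{
		ConfigKey: "helloWorldConfig",
		Address:   "0x0300000000000000000000000000000000000000",
	}); err != nil {
		panic(err)
	}
}

func main() {
	m := registry["helloWorldConfig"]
	fmt.Println(m.Address) // 0x0300000000000000000000000000000000000000
}
```

This is why a blank import line in `registry.go` is sufficient to activate a precompile package: the registration happens as a side effect of package initialization.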
# Defining Your Precompile (/docs/avalanche-l1s/custom-precompiles/defining-precompile)

---
title: Defining Your Precompile
description: Now that we have autogenerated the template code required for our precompile, let's actually write the logic for the precompile itself.
---

## Setting Config Key

Let's jump to the `helloworld/module.go` file first. This file contains the module definition for our precompile. You can see that `ConfigKey` is set to a default value of `helloWorldConfig`. This key should be unique to the precompile. The config key determines which JSON key to use when reading the precompile's config from the JSON upgrade/genesis file. In this case, the config key is `helloWorldConfig` and the JSON config should look like this:

```json
{
  "helloWorldConfig": {
    "blockTimestamp": 0
    ...
  }
}
```

## Setting Contract Address

In `helloworld/module.go`, you can see that `ContractAddress` is set to a default value. This should be changed to a suitable address for your precompile, and the address should be unique to the precompile. There is a registry of precompile addresses under [`precompile/registry/registry.go`](https://github.com/ava-labs/subnet-evm/blob/helloworld-official-tutorial-v2/precompile/registry/registry.go). A list of addresses is specified in the comments in this file. Modify the default value to be the next available user stateful precompile address. For forks of Subnet-EVM or Precompile-EVM, users should start at `0x0300000000000000000000000000000000000000` to ensure that their own modifications do not conflict with stateful precompiles that may be added to Subnet-EVM in the future. You should pick an address that is not already taken.

```go title="helloworld/module.go"
// This list is kept just for reference. The actual addresses defined in respective packages of precompiles.
// Note: it is important that none of these addresses conflict with each other or any other precompiles
// in core/vm/contracts.go.
// The first stateful precompiles were added in coreth to support nativeAssetCall and nativeAssetBalance. New stateful precompiles // originating in coreth will continue at this prefix, so we reserve this range in subnet-evm so that they can be migrated into // subnet-evm without issue. // These start at the address: 0x0100000000000000000000000000000000000000 and will increment by 1. // Optional precompiles implemented in subnet-evm start at 0x0200000000000000000000000000000000000000 and will increment by 1 // from here to reduce the risk of conflicts. // For forks of subnet-evm, users should start at 0x0300000000000000000000000000000000000000 to ensure // that their own modifications do not conflict with stateful precompiles that may be added to subnet-evm // in the future. // ContractDeployerAllowListAddress = common.HexToAddress("0x0200000000000000000000000000000000000000") // ContractNativeMinterAddress = common.HexToAddress("0x0200000000000000000000000000000000000001") // TxAllowListAddress = common.HexToAddress("0x0200000000000000000000000000000000000002") // FeeManagerAddress = common.HexToAddress("0x0200000000000000000000000000000000000003") // RewardManagerAddress = common.HexToAddress("0x0200000000000000000000000000000000000004") // HelloWorldAddress = common.HexToAddress("0x0300000000000000000000000000000000000000") // ADD YOUR PRECOMPILE HERE // {YourPrecompile}Address = common.HexToAddress("0x03000000000000000000000000000000000000??") ``` Don't forget to update the actual variable `ContractAddress` in `module.go` to the address you chose. It should look like this: ```go title="helloworld/module.go" // ContractAddress is the defined address of the precompile contract. // This should be unique across all precompile contracts. // See params/precompile_modules.go for registered precompile contracts and more information. 
var ContractAddress = common.HexToAddress("0x0300000000000000000000000000000000000000")
```

Now when Subnet-EVM sees `helloworld.ContractAddress` as the call target when executing [`CALL`](https://github.com/ava-labs/subnet-evm/blob/helloworld-official-tutorial-v2/core/vm/evm.go#L251), [`CALLCODE`](https://github.com/ava-labs/subnet-evm/blob/helloworld-official-tutorial-v2/core/vm/evm.go#L341), [`DELEGATECALL`](https://github.com/ava-labs/subnet-evm/blob/helloworld-official-tutorial-v2/core/vm/evm.go#L392), or [`STATICCALL`](https://github.com/ava-labs/subnet-evm/blob/helloworld-official-tutorial-v2/core/vm/evm.go#L435), it can run the precompile if the precompile is enabled.

## Adding Custom Code

Search (`CTRL+F`) throughout the file for `CUSTOM CODE STARTS HERE` to find the areas in the precompile package that you need to modify. You should start with the reference imports code block.

### Module File

The module file contains fundamental information about the precompile: the config key for the precompile, the address of the precompile, and a configurator. This file is located at [`./precompile/helloworld/module.go`](https://github.com/ava-labs/subnet-evm/blob/helloworld-official-tutorial-v2/precompile/contracts/helloworld/module.go) for Subnet-EVM and [./helloworld/module.go](https://github.com/ava-labs/precompile-evm/blob/hello-world-example/helloworld/module.go) for Precompile-EVM.

This file defines the module for the precompile. The module is used to register the precompile to the precompile registry. The precompile registry is used to read configs and enable the precompile. Registration is done in the `init()` function of the module file. `MakeConfig()` is used to create a new instance of the precompile config. This will be used in custom Unmarshal/Marshal logic. You don't need to override these functions.

#### Configure()

The module file contains a `configurator` which implements the `contract.Configurator` interface.
This interface includes a `Configure()` function used to configure the precompile and set its initial state. This function is called when the precompile is enabled. It is typically used to read from a given config in the upgrade/genesis JSON and set the initial state of the precompile accordingly. This function also calls `AllowListConfig.Configure()` to invoke the AllowList configuration as the last step. You should keep it as is if you want to use AllowList. You can modify this function for your custom logic, and you can circle back to it later after you have finalized the implementation of the precompile config.

### Config File

The config file contains the config for the precompile. This file is located at [`./precompile/helloworld/config.go`](https://github.com/ava-labs/subnet-evm/blob/helloworld-official-tutorial-v2/precompile/contracts/helloworld/config.go) for Subnet-EVM and [./helloworld/config.go](https://github.com/ava-labs/precompile-evm/blob/hello-world-example/helloworld/config.go) for Precompile-EVM.

This file contains the `Config` struct, which implements the `precompileconfig.Config` interface. It has some embedded structs like `precompileconfig.Upgrade`. `Upgrade` is used to enable upgrades for the precompile. It contains the `BlockTimestamp` and `Disable` fields to enable or disable upgrades. `BlockTimestamp` is the timestamp of the block when the upgrade will be activated. `Disable` is used to disable the upgrade. If you use `AllowList` for the precompile, there is also `allowlist.AllowListConfig` embedded in the `Config` struct. `AllowListConfig` is used to specify initial roles for specified addresses. If you have any custom fields in your precompile config, you can add them here. These custom fields will be read from the upgrade/genesis JSON and set in the precompile config.

```go title="precompile/helloworld/config.go"
// Config implements the precompileconfig.Config interface and
// adds specific configuration for HelloWorld.
type Config struct {
	allowlist.AllowListConfig
	precompileconfig.Upgrade
}
```

#### Verify()

`Verify()` is called on startup and an error is treated as fatal. The generated code contains a call to `AllowListConfig.Verify()` to verify the `AllowListConfig`. You can leave that as is and start adding your own custom verify code after it. We can leave this function as is for now because there is no invalid custom configuration for the `Config`.

```go title="precompile/helloworld/config.go"
// Verify tries to verify Config and returns an error accordingly.
func (c *Config) Verify() error {
	// Verify AllowList first
	if err := c.AllowListConfig.Verify(); err != nil {
		return err
	}

	// CUSTOM CODE STARTS HERE
	// Add your own custom verify code for Config here
	// and return an error accordingly
	return nil
}
```

#### Equal()

Next is `Equal()`. This function determines if two precompile configs are equal. This is used to determine if the precompile needs to be upgraded. There is some default code generated for checking `Upgrade` and `AllowListConfig` equality.

```go title="precompile/helloworld/config.go"
// Equal returns true if [s] is a [*Config] and it has been configured identical to [c].
func (c *Config) Equal(s precompileconfig.Config) bool {
	// typecast before comparison
	other, ok := (s).(*Config)
	if !ok {
		return false
	}
	// CUSTOM CODE STARTS HERE
	// modify this boolean accordingly with your custom Config, to check if [other] and the current [c] are equal
	// if Config contains only Upgrade and AllowListConfig you can skip modifying it.
	equals := c.Upgrade.Equal(&other.Upgrade) && c.AllowListConfig.Equal(&other.AllowListConfig)
	return equals
}
```

We can leave this function as is since we check `Upgrade` and `AllowListConfig` for equality, which are the only fields the `Config` struct has.

### Modify Configure()

We can now circle back to `Configure()` in `module.go`, now that we have finished implementing the `Config` struct.
This function configures the `state` with the initial configuration at `blockTimestamp` when the precompile is enabled. In the HelloWorld example, we want to set up a default key-value mapping in the state where the key is `storageKey` and the value is `Hello World!`. The `StateDB` allows us to store a key-value mapping of 32-byte hashes. The below code snippet can be copied and pasted to overwrite the default `Configure()` code.

```go title="precompile/helloworld/module.go"
const defaultGreeting = "Hello World!"

// Configure configures [state] with the given [cfg] precompileconfig.
// This function is called by the EVM once per precompile contract activation.
// You can use this function to set up your precompile contract's initial state,
// by using the [cfg] config and [state] stateDB.
func (*configurator) Configure(chainConfig contract.ChainConfig, cfg precompileconfig.Config, state contract.StateDB, _ contract.BlockContext) error {
	config, ok := cfg.(*Config)
	if !ok {
		return fmt.Errorf("incorrect config %T: %v", config, config)
	}
	// CUSTOM CODE STARTS HERE

	// This will be called in the first block where HelloWorld stateful precompile is enabled.
	// 1) If BlockTimestamp is nil, this will not be called
	// 2) If BlockTimestamp is 0, this will be called while setting up the genesis block
	// 3) If BlockTimestamp is 1000, this will be called while processing the first block
	// whose timestamp is >= 1000
	//
	// Set the initial value under [common.BytesToHash([]byte("storageKey")] to "Hello World!"
	StoreGreeting(state, defaultGreeting)
	// AllowList is activated for this precompile. Configuring allowlist addresses here.
	return config.AllowListConfig.Configure(state, ContractAddress)
}
```

### Event File

The event file contains the events that the precompile can emit.
This file is located at [`./precompile/helloworld/event.go`](https://github.com/ava-labs/subnet-evm/blob/helloworld-official-tutorial-v2/precompile/contracts/helloworld/event.go) for Subnet-EVM and [./helloworld/event.go](https://github.com/ava-labs/precompile-evm/blob/hello-world-example/helloworld/event.go) for Precompile-EVM. The file begins with a comment about events and how they can be emitted: ```go title="precompile/helloworld/event.go" /* NOTE: Events can only be emitted in state-changing functions. So you cannot use events in read-only (view) functions. Events are generally emitted at the end of a state-changing function with AddLog method of the StateDB. The AddLog method takes 4 arguments: 1. Address of the contract that emitted the event. 2. Topic hashes of the event. 3. Encoded non-indexed data of the event. 4. Block number at which the event was emitted. The first argument is the address of the contract that emitted the event. Topics can be at most 4 elements, the first topic is the hash of the event signature and the rest are the indexed event arguments. There can be at most 3 indexed arguments. Topics cannot be fully unpacked into their original values since they're 32-bytes hashes. The non-indexed arguments are encoded using the ABI encoding scheme. The non-indexed arguments can be unpacked into their original values. Before packing the event, you need to calculate the gas cost of the event. The gas cost of an event is the base gas cost + the gas cost of the topics + the gas cost of the non-indexed data. See Get{EvetName}EventGasCost functions for more details. 
You can use the following code to emit an event in your state-changing precompile functions (generated packer might be different):*/ topics, data, err := PackMyEvent( topic1, topic2, data1, data2, ) if err != nil { return nil, remainingGas, err } accessibleState.GetStateDB().AddLog(&types.Log{ Address: ContractAddress, Topics: topics, Data: data, BlockNumber: accessibleState.GetBlockContext().Number().Uint64(), }) ``` ```go title="precompile/helloworld/event.go" /* NOTE: Events can only be emitted in state-changing functions. So you cannot use events in read-only (view) functions. Events are generally emitted at the end of a state-changing function with AddLog method of the StateDB. The AddLog method takes 4 arguments: 1. Address of the contract that emitted the event. 2. Topic hashes of the event. 3. Encoded non-indexed data of the event. 4. Block number at which the event was emitted. The first argument is the address of the contract that emitted the event. Topics can be at most 4 elements, the first topic is the hash of the event signature and the rest are the indexed event arguments. There can be at most 3 indexed arguments. Topics cannot be fully unpacked into their original values since they're 32-bytes hashes. The non-indexed arguments are encoded using the ABI encoding scheme. The non-indexed arguments can be unpacked into their original values. Before packing the event, you need to calculate the gas cost of the event. The gas cost of an event is the base gas cost + the gas cost of the topics + the gas cost of the non-indexed data. See Get{EvetName}EventGasCost functions for more details. 
You can use the following code to emit an event in your state-changing precompile functions
(generated packer might be different):*/
topics, data, err := PackMyEvent(
	topic1,
	topic2,
	data1,
	data2,
)
if err != nil {
	return nil, remainingGas, err
}

accessibleState.GetStateDB().AddLog(
	ContractAddress,
	topics,
	data,
	accessibleState.GetBlockContext().Number().Uint64(),
)
```

In this file you should set your event's gas cost and implement the `Get{EventName}EventGasCost` function. This function should take the data you want to emit and calculate its gas cost.

In this example we defined our event as follows, and plan to emit it in the `setGreeting` function:

```solidity
event GreetingChanged(address indexed sender, string oldGreeting, string newGreeting);
```

We used arbitrary strings as non-indexed event data. Keep in mind that each emitted event is stored on chain, so charging the right amount of gas is critical. We calculate the gas cost according to the length of the string to make sure we charge the right amount. If you're sure that you're dealing with fixed-length data, you can use a fixed gas cost for your event.

We will show how events can be emitted under the Contract File section.

### Contract File

The contract file contains the functions of the precompile contract that will be called by the EVM. The file is located at [`./precompile/helloworld/contract.go`](https://github.com/ava-labs/subnet-evm/blob/helloworld-official-tutorial-v2/precompile/contracts/helloworld/contract.go) for Subnet-EVM and [./helloworld/contract.go](https://github.com/ava-labs/precompile-evm/blob/hello-world-example/helloworld/contract.go) for Precompile-EVM. Since we use the `IAllowList` interface, there will be auto-generated code for the `AllowList` functions like below:

```go title="precompile/helloworld/contract.go"
// GetHelloWorldAllowListStatus returns the role of [address] for the HelloWorld list.
func GetHelloWorldAllowListStatus(stateDB contract.StateDB, address common.Address) allowlist.Role {
	return allowlist.GetAllowListStatus(stateDB, ContractAddress, address)
}

// SetHelloWorldAllowListStatus sets the permissions of [address] to [role] for the
// HelloWorld list. Assumes [role] has already been verified as valid.
// This stores the [role] in the contract storage with address [ContractAddress]
// and [address] hash. It means that any reusage of the [address] key for different value
// conflicts with the same slot [role] is stored.
// Precompile implementations must use a different key than [address] for their storage.
func SetHelloWorldAllowListStatus(stateDB contract.StateDB, address common.Address, role allowlist.Role) {
	allowlist.SetAllowListRole(stateDB, ContractAddress, address, role)
}
```

These will be helpful for using the AllowList precompile helpers in our functions.

#### Packers and Unpackers

There are also auto-generated Packers and Unpackers for the ABI. These will be used in the `sayHello` and `setGreeting` functions to conform to the ABI. These functions are auto-generated and will be used in the necessary places accordingly. You don't need to worry about how to deal with them, but it's good to know what they are.

Note: There were a few changes to precompile packers with Durango. In this example we assume that the HelloWorld precompile contract was deployed before Durango, so we need to activate this condition only after Durango. If this is a new precompile that was never deployed before Durango, you can activate it immediately by removing the if condition.
Each input to a precompile contract function has its own `Unpacker` function, as follows (if deployed before Durango):

```go title="precompile/helloworld/contract.go"
// UnpackSetGreetingInput attempts to unpack [input] into the string type argument
// assumes that [input] does not include selector (omits first 4 func signature bytes)
// if [useStrictMode] is true, it will return an error if the length of [input] is not [common.HashLength]
func UnpackSetGreetingInput(input []byte, useStrictMode bool) (string, error) {
	// Initially we had this check to ensure that the input was the correct length.
	// However solidity does not always pack the input to the correct length, and allows
	// for extra padding bytes to be added to the end of the input. Therefore, we have removed
	// this check with the Durango. We still need to keep this check for backwards compatibility.
	if useStrictMode && len(input) > common.HashLength {
		return "", ErrInputExceedsLimit
	}
	res, err := HelloWorldABI.UnpackInput("setGreeting", input, useStrictMode)
	if err != nil {
		return "", err
	}
	unpacked := *abi.ConvertType(res[0], new(string)).(*string)
	return unpacked, nil
}
```

If this is a new precompile that will be deployed after Durango, you can skip strict mode handling and use `false`:

```go title="precompile/helloworld/contract.go"
func UnpackSetGreetingInput(input []byte) (string, error) {
	res, err := HelloWorldABI.UnpackInput("setGreeting", input, false)
	if err != nil {
		return "", err
	}
	unpacked := *abi.ConvertType(res[0], new(string)).(*string)
	return unpacked, nil
}
```

The ABI is a binary format, and the input to the precompile contract function is a byte array. The `Unpacker` function converts this input to an easier-to-use format so that we can use it in our function.
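For intuition about what the generated unpacker does under the hood, recall that ABI encoding lays a dynamic string argument out as 32-byte words: an offset word, a length word, then the right-padded string bytes. The following stdlib-only Go sketch illustrates that layout; it is an illustration of the standard ABI format only, not the generated code, which goes through the `HelloWorldABI` bindings:

```go
package main

import (
	"encoding/binary"
	"errors"
	"fmt"
)

const wordSize = 32

// word returns the i-th 32-byte word of input.
func word(input []byte, i int) ([]byte, error) {
	start := i * wordSize
	if start+wordSize > len(input) {
		return nil, errors.New("input too short")
	}
	return input[start : start+wordSize], nil
}

// decodeString manually decodes a single ABI-encoded dynamic string
// argument (selector already stripped): a 32-byte offset word, then a
// 32-byte length word, then the right-padded string bytes.
func decodeString(input []byte) (string, error) {
	offWord, err := word(input, 0)
	if err != nil {
		return "", err
	}
	// For a single argument the offset is 32; the value fits in the low 8 bytes.
	off := binary.BigEndian.Uint64(offWord[24:])
	if int(off)+wordSize > len(input) {
		return "", errors.New("bad offset")
	}
	strLen := binary.BigEndian.Uint64(input[off+24 : off+wordSize])
	dataStart := int(off) + wordSize
	if dataStart+int(strLen) > len(input) {
		return "", errors.New("bad length")
	}
	return string(input[dataStart : dataStart+int(strLen)]), nil
}

// encodeString builds the matching encoding: offset word, length word,
// then the string bytes padded up to a multiple of 32.
func encodeString(s string) []byte {
	dataWords := (len(s) + wordSize - 1) / wordSize
	buf := make([]byte, wordSize*2+dataWords*wordSize)
	binary.BigEndian.PutUint64(buf[24:32], wordSize)                         // offset = 32
	binary.BigEndian.PutUint64(buf[wordSize+24:wordSize+32], uint64(len(s))) // length
	copy(buf[wordSize*2:], s)
	return buf
}

func main() {
	got, err := decodeString(encodeString("Hello World!"))
	if err != nil {
		panic(err)
	}
	fmt.Println(got) // Hello World!
}
```

In practice you should always rely on the generated bindings rather than hand-rolling decoding like this; the sketch exists only to make the word-oriented layout concrete.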
Similarly, there is a `Packer` function for each output of a precompile contract function, as follows:

```go title="precompile/helloworld/contract.go"
// PackSayHelloOutput attempts to pack given result of type string
// to conform the ABI outputs.
func PackSayHelloOutput(result string) ([]byte, error) {
	return HelloWorldABI.PackOutput("sayHello", result)
}
```

This function converts the output of the function to a byte array that conforms to the ABI and can be returned to the EVM as a result.

#### Modify sayHello()

The next place to modify is our `sayHello()` function. In a previous step, we created the `IHelloWorld.sol` interface with two functions, `sayHello()` and `setGreeting()`. We finally get to implement them here. If any contract calls these functions from the interface, the below function gets executed. This function is a simple getter. In `Configure()` we set up a mapping with the key as `storageKey` and the value as `Hello World!`. In this function, we will return whatever value is at `storageKey`. The below code snippet can be copied and pasted to overwrite the default `sayHello` code.

First, we add a helper function to get the greeting value from the stateDB; this will be helpful when we test our contract. We will use the `storageKeyHash` to store the value in the contract's reserved storage in the stateDB.

```go title="precompile/helloworld/contract.go"
var (
	// storageKeyHash is the hash of the storage key "storageKey" in the contract storage.
	// This is used to store the value of the greeting in the contract storage.
	// It is important to use a unique key here to avoid conflicts with other storage keys
	// like addresses, AllowList, etc.
	storageKeyHash = common.BytesToHash([]byte("storageKey"))
)

// GetGreeting returns the value of the storage key "storageKey" in the contract storage,
// with leading zeroes trimmed.
// This function is mostly used for tests.
func GetGreeting(stateDB contract.StateDB) string {
	// Get the value set at recipient
	value := stateDB.GetState(ContractAddress, storageKeyHash)
	return string(common.TrimLeftZeroes(value.Bytes()))
}
```

Now we can modify the `sayHello` function to return the stored value.

```go title="precompile/helloworld/contract.go"
func sayHello(accessibleState contract.AccessibleState, caller common.Address, addr common.Address, input []byte, suppliedGas uint64, readOnly bool) (ret []byte, remainingGas uint64, err error) {
	if remainingGas, err = contract.DeductGas(suppliedGas, SayHelloGasCost); err != nil {
		return nil, 0, err
	}
	// CUSTOM CODE STARTS HERE

	// Get the current state
	currentState := accessibleState.GetStateDB()

	// Get the value set at recipient
	value := GetGreeting(currentState)

	packedOutput, err := PackSayHelloOutput(value)
	if err != nil {
		return nil, remainingGas, err
	}

	// Return the packed output and the remaining gas
	return packedOutput, remainingGas, nil
}
```

#### Modify setGreeting()

The `setGreeting()` function is a simple setter. It takes an `input` string, and we set that as the value in the state mapping with the key as `storageKey`. It also checks if the VM running the precompile is in read-only mode, and if so, returns an error. At the end of a successful execution, it emits a `GreetingChanged` event.

There is also generated `AllowList` code in this function. This generated code checks if the caller address is eligible to perform this state-changing operation; if not, it returns an error.

Let's add the helper function to set the greeting value in the stateDB; this will be helpful when we test our contract.

```go title="precompile/helloworld/contract.go"
// StoreGreeting sets the value of the storage key "storageKey" in the contract storage.
func StoreGreeting(stateDB contract.StateDB, input string) { inputPadded := common.LeftPadBytes([]byte(input), common.HashLength) inputHash := common.BytesToHash(inputPadded) stateDB.SetState(ContractAddress, storageKeyHash, inputHash) } ``` The below code snippet can be copied and pasted to overwrite the default `setGreeting()` code. ```go title="precompile/helloworld/contract.go" func setGreeting(accessibleState contract.AccessibleState, caller common.Address, addr common.Address, input []byte, suppliedGas uint64, readOnly bool) (ret []byte, remainingGas uint64, err error) { if remainingGas, err = contract.DeductGas(suppliedGas, SetGreetingGasCost); err != nil { return nil, 0, err } if readOnly { return nil, remainingGas, vmerrs.ErrWriteProtection } // do not use strict mode after Durango useStrictMode := !contract.IsDurangoActivated(accessibleState) // attempts to unpack [input] into the arguments to the SetGreetingInput. // Assumes that [input] does not include selector // You can use unpacked [inputStruct] variable in your code inputStruct, err := UnpackSetGreetingInput(input, useStrictMode) if err != nil { return nil, remainingGas, err } // Allow list is enabled and SetGreeting is a state-changer function. // This part of the code restricts the function to be called only by enabled/admin addresses in the allow list. // You can modify/delete this code if you don't want this function to be restricted by the allow list. stateDB := accessibleState.GetStateDB() // Verify that the caller is in the allow list and therefore has the right to call this function. callerStatus := allowlist.GetAllowListStatus(stateDB, ContractAddress, caller) if !callerStatus.IsEnabled() { return nil, remainingGas, fmt.Errorf("%w: %s", ErrCannotSetGreeting, caller) } // allow list code ends here. // CUSTOM CODE STARTS HERE // With Durango, you can emit an event in your state-changing precompile functions. 
	// Note: If you have been using the precompile before Durango, you should activate it only after Durango.
	// Activating this code before Durango will result in a consensus failure.
	// If this is a new precompile and never deployed before Durango, you can activate it immediately by removing
	// the if condition.
	// We will first read the old greeting, so we should charge gas for reading the storage.
	if remainingGas, err = contract.DeductGas(remainingGas, contract.ReadGasCostPerSlot); err != nil {
		return nil, 0, err
	}
	oldGreeting := GetGreeting(stateDB)
	eventData := GreetingChangedEventData{
		OldGreeting: oldGreeting,
		NewGreeting: inputStruct,
	}
	topics, data, err := PackGreetingChangedEvent(caller, eventData)
	if err != nil {
		return nil, remainingGas, err
	}
	// Charge the gas for emitting the event.
	eventGasCost := GetGreetingChangedEventGasCost(eventData)
	if remainingGas, err = contract.DeductGas(remainingGas, eventGasCost); err != nil {
		return nil, 0, err
	}
	// Emit the event
	stateDB.AddLog(&types.Log{
		Address:     ContractAddress,
		Topics:      topics,
		Data:        data,
		BlockNumber: accessibleState.GetBlockContext().Number().Uint64(),
	})

	// Store the greeting passed to "setGreeting(greeting string)" under storageKey.
	StoreGreeting(stateDB, inputStruct)

	// This function does not return an output, leave this one as is
	packedOutput := []byte{}

	// Return the packed output and the remaining gas
	return packedOutput, remainingGas, nil
}
```

Precompile events were introduced with Durango. In this example, we assumed that the `HelloWorld` precompile contract was deployed before Durango. If this is a new precompile that will only be deployed after Durango, you can activate the event code immediately by removing the Durango condition (`contract.IsDurangoActivated(accessibleState)`).

### Setting Gas Costs

Setting gas costs for functions is very important and should be done carefully. If gas costs are set too low, functions can be abused to mount DoS attacks. If they are set too high, the contract becomes too expensive to use. Subnet-EVM has predefined gas costs for write and read operations in [`precompile/contract/utils.go`](https://github.com/ava-labs/subnet-evm/blob/helloworld-official-tutorial-v2/precompile/contract/utils.go#L19-L20). They provide a baseline for gas cost calculations:

```go title="precompile/contract/utils.go"
// Gas costs for stateful precompiles
const (
	WriteGasCostPerSlot = 20_000
	ReadGasCostPerSlot  = 5_000
)
```

- `WriteGasCostPerSlot` is the cost of one write, such as modifying a state storage slot.
- `ReadGasCostPerSlot` is the cost of reading a state storage slot.
Factor these into your gas cost estimates based on how many reads and writes the precompile function performs. For example, if a precompile function modifies two of its storage slots, the gas cost for that function would be `40_000`. If the precompile performs additional operations that require more computational power, you should increase the gas costs accordingly.

On top of these, we also have to account for AllowList gas costs: the costs of reading and writing address permissions in the AllowList. These are defined in Subnet-EVM's [`precompile/allowlist/allowlist.go`](https://github.com/ava-labs/subnet-evm/blob/helloworld-official-tutorial-v2/precompile/allowlist/allowlist.go#L28-L29). By default, they are added to the gas costs of the precompile's state-changing functions (`SetGreeting`), meaning those functions cost an additional `ReadAllowListGasCost` in order to read permissions from storage. If you don't plan to read permissions from storage, you can omit these.

Now, going back to our `/helloworld/contract.go`, we can modify our precompile function gas costs. Search (`CTRL+F`) for `SET A GAS COST HERE` to locate the default gas cost code.

```go title="helloworld/contract.go"
SayHelloGasCost    uint64 = 0                                  // SET A GAS COST HERE
SetGreetingGasCost uint64 = 0 + allowlist.ReadAllowListGasCost // SET A GAS COST HERE
```

We get and set our greeting with `sayHello()` and `setGreeting()` using one storage slot each, so we can define the gas costs as follows. We also read permissions from the AllowList in `setGreeting()`, so we keep `allowlist.ReadAllowListGasCost`.
```go title="helloworld/contract.go"
SayHelloGasCost    uint64 = contract.ReadGasCostPerSlot
SetGreetingGasCost uint64 = contract.WriteGasCostPerSlot + allowlist.ReadAllowListGasCost
```

## Registering Your Precompile

We should register our precompile package with Subnet-EVM so that it can be discovered by other packages. Our `Module` file contains an `init()` function that registers our precompile. `init()` is called when the package is imported, so we should register our precompile in a common package that other packages import. For Subnet-EVM, there is a precompile registry under [`/precompile/registry/registry.go`](https://github.com/ava-labs/subnet-evm/blob/helloworld-official-tutorial-v2/precompile/registry/registry.go). This registry force-imports precompiles from other packages, for example:

```go title="precompile/registry/registry.go"
// Force imports of each precompile to ensure each precompile's init function runs and registers itself
// with the registry.
import (
	_ "github.com/ava-labs/subnet-evm/precompile/contracts/deployerallowlist"
	_ "github.com/ava-labs/subnet-evm/precompile/contracts/nativeminter"
	_ "github.com/ava-labs/subnet-evm/precompile/contracts/txallowlist"
	_ "github.com/ava-labs/subnet-evm/precompile/contracts/feemanager"
	_ "github.com/ava-labs/subnet-evm/precompile/contracts/rewardmanager"
	_ "github.com/ava-labs/subnet-evm/precompile/contracts/helloworld"
	// ADD YOUR PRECOMPILE HERE
	// _ "github.com/ava-labs/subnet-evm/precompile/contracts/yourprecompile"
)
```

The registry itself is also force-imported by [`/plugin/evm/vm.go`](https://github.com/ava-labs/subnet-evm/blob/helloworld-official-tutorial-v2/plugin/evm/vm.go#L50). This ensures that the registry is imported and the precompiles are registered.

For Precompile-EVM, there is a `plugin/main.go` file that orchestrates this precompile registration:

```go title="plugin/main.go"
// (c) 2019-2023, Ava Labs, Inc. All rights reserved.
// See the file LICENSE for licensing terms.

package main

import (
	"fmt"

	"github.com/ava-labs/avalanchego/version"
	"github.com/ava-labs/subnet-evm/plugin/evm"
	"github.com/ava-labs/subnet-evm/plugin/runner"

	// Each precompile generated by the precompilegen tool has a self-registering init function
	// that registers the precompile with the subnet-evm. Importing the precompile package here
	// will cause the precompile to be registered with the subnet-evm.
	_ "github.com/ava-labs/precompile-evm/helloworld"
	// ADD YOUR PRECOMPILE HERE
	//_ "github.com/ava-labs/precompile-evm/{yourprecompilepkg}"
)
```

# Writing Test Cases (/docs/avalanche-l1s/custom-precompiles/defining-test-cases)

---
title: Writing Test Cases
description: In this section, we will go over the different ways we can write test cases for our stateful precompile.
---

## Adding Config Tests

The precompile generation tool generates skeletons for unit tests as well. Generated config tests are under [`./precompile/contracts/helloworld/config_test.go`](https://github.com/ava-labs/subnet-evm/blob/helloworld-official-tutorial-v2/precompile/contracts/helloworld/config_test.go) for Subnet-EVM and [`./helloworld/config_test.go`](https://github.com/ava-labs/precompile-evm/blob/hello-world-example/helloworld/config_test.go) for Precompile-EVM.

There are mainly two functions we need to test: `Verify` and `Equal`. `Verify` checks that the precompile is configured correctly; `Equal` checks whether the precompile config is equal to another. Generated `Verify` tests contain a valid case; you can add more invalid cases depending on your implementation. `Equal` tests generate some invalid cases to test different timestamps, types, and AllowList cases. You can check the `config_test.go` files of other precompiles under Subnet-EVM's [`./precompile/contracts`](https://github.com/ava-labs/subnet-evm/blob/helloworld-official-tutorial-v2/precompile/contracts/) directory for more examples.
## Adding Contract Tests

The tool also generates contract tests to make sure our precompile works correctly. Generated tests include cases to test allow list capabilities, gas costs, and calling functions in read-only mode. You can check other `contract_test.go` files in `/precompile/contracts`. Hello World contract tests are under [`./precompile/contracts/helloworld/contract_test.go`](https://github.com/ava-labs/subnet-evm/blob/helloworld-official-tutorial-v2/precompile/contracts/helloworld/contract_test.go) for Subnet-EVM and [`./helloworld/contract_test.go`](https://github.com/ava-labs/precompile-evm/blob/hello-world-example/helloworld/contract_test.go) for Precompile-EVM. We will also add more tests to cover the functionality of `sayHello()` and `setGreeting()`.

Contract tests are defined in a standard structure that each test can customize to its needs. The test structure is as follows:

```go
// PrecompileTest is a test case for a precompile
type PrecompileTest struct {
	// Caller is the address of the precompile caller
	Caller common.Address
	// Input the raw input bytes to the precompile
	Input []byte
	// InputFn is a function that returns the raw input bytes to the precompile
	// If specified, Input will be ignored.
	InputFn func(t *testing.T) []byte
	// SuppliedGas is the amount of gas supplied to the precompile
	SuppliedGas uint64
	// ReadOnly is whether the precompile should be called in read only
	// mode. If true, the precompile should not modify the state.
	ReadOnly bool
	// Config is the config to use for the precompile
	// It should be the same precompile config that is used in the
	// precompile's configurator.
	// If nil, Configure will not be called.
	Config precompileconfig.Config
	// BeforeHook is called before the precompile is called.
	BeforeHook func(t *testing.T, state contract.StateDB)
	// AfterHook is called after the precompile is called.
	AfterHook func(t *testing.T, state contract.StateDB)
	// ExpectedRes is the expected raw byte result returned by the precompile
	ExpectedRes []byte
	// ExpectedErr is the expected error returned by the precompile
	ExpectedErr string
	// BlockNumber is the block number to use for the precompile's block context
	BlockNumber int64
}
```

Each test can populate the fields of the `PrecompileTest` struct to customize the test. The generated tests use an AllowList helper function, `allowlist.RunPrecompileWithAllowListTests(t, Module, state.NewTestStateDB, tests)`, which runs all specified tests plus the AllowList test suites. If you don't plan to use AllowList, you can run the tests directly as follows:

```go
for name, test := range tests {
	t.Run(name, func(t *testing.T) {
		test.Run(t, module, newStateDB(t))
	})
}
```

## Adding VM Tests (Optional)

This is only applicable to direct Subnet-EVM forks, since Golang test files are not exported across modules. If you use Precompile-EVM, you can skip this step.

VM tests run the precompile by calling it through the Subnet-EVM itself; these are the most comprehensive tests we can run. If your precompile modifies how the Subnet-EVM works, for example by changing blockchain rules, you should add a VM test. For example, you can take a look at the `TestRewardManagerPrecompileSetRewardAddress` function [here](https://github.com/ava-labs/subnet-evm/blob/helloworld-official-tutorial-v2/plugin/evm/vm_test.go#L2772). For this Hello World example, we don't modify any Subnet-EVM rules, so we don't need to add any VM tests.

## Adding Solidity Test Contracts

Let's add our test contract to `./contracts/contracts`. This smart contract lets us interact with our precompile. We cast the `HelloWorld` precompile address to the `IHelloWorld` interface; in doing so, `helloWorld` becomes a contract of type `IHelloWorld`, and any function call on that contract is redirected to the HelloWorld precompile address.
The below code snippet can be copied and pasted into a new file called `ExampleHelloWorld.sol`:

```solidity
//SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import "./IHelloWorld.sol";

// ExampleHelloWorld shows how the HelloWorld precompile can be used in a smart contract.
contract ExampleHelloWorld {
    address constant HELLO_WORLD_ADDRESS = 0x0300000000000000000000000000000000000000;
    IHelloWorld helloWorld = IHelloWorld(HELLO_WORLD_ADDRESS);

    function sayHello() public view returns (string memory) {
        return helloWorld.sayHello();
    }

    function setGreeting(string calldata greeting) public {
        helloWorld.setGreeting(greeting);
    }
}
```

The HelloWorld precompile is a different contract than `ExampleHelloWorld` and has a different address. Since the precompile uses AllowList for permissioned access, any call to the precompile, including calls from `ExampleHelloWorld`, will be denied unless the caller is added to the AllowList.

Please note that this contract is simply a wrapper that calls the precompile functions. The reason we add another example smart contract is to have simpler, stateless tests. For the test contract, we write our tests in `./contracts/test/ExampleHelloWorldTest.sol`.
```solidity //SPDX-License-Identifier: MIT pragma solidity ^0.8.0; import "../ExampleHelloWorld.sol"; import "../interfaces/IHelloWorld.sol"; import "./AllowListTest.sol"; contract ExampleHelloWorldTest is AllowListTest { IHelloWorld helloWorld = IHelloWorld(HELLO_WORLD_ADDRESS); function step_getDefaultHelloWorld() public { ExampleHelloWorld example = new ExampleHelloWorld(); address exampleAddress = address(example); assertRole(helloWorld.readAllowList(exampleAddress), AllowList.Role.None); assertEq(example.sayHello(), "Hello World!"); } function step_doesNotSetGreetingBeforeEnabled() public { ExampleHelloWorld example = new ExampleHelloWorld(); address exampleAddress = address(example); assertRole(helloWorld.readAllowList(exampleAddress), AllowList.Role.None); try example.setGreeting("testing") { assertTrue(false, "setGreeting should fail"); } catch {} } function step_setAndGetGreeting() public { ExampleHelloWorld example = new ExampleHelloWorld(); address exampleAddress = address(example); assertRole(helloWorld.readAllowList(exampleAddress), AllowList.Role.None); helloWorld.setEnabled(exampleAddress); assertRole( helloWorld.readAllowList(exampleAddress), AllowList.Role.Enabled ); string memory greeting = "testgreeting"; example.setGreeting(greeting); assertEq(example.sayHello(), greeting); } } ``` For Precompile-EVM, you should import AllowListTest with `@avalabs/subnet-evm-contracts` NPM package: ```solidity //SPDX-License-Identifier: MIT pragma solidity ^0.8.0; import "../ExampleHelloWorld.sol"; import "../interfaces/IHelloWorld.sol"; import "@avalabs/subnet-evm-contracts/contracts/test/AllowListTest.sol"; contract ExampleHelloWorldTest is AllowListTest { IHelloWorld helloWorld = IHelloWorld(HELLO_WORLD_ADDRESS); function step_getDefaultHelloWorld() public { ExampleHelloWorld example = new ExampleHelloWorld(); address exampleAddress = address(example); assertRole(helloWorld.readAllowList(exampleAddress), AllowList.Role.None); assertEq(example.sayHello(), 
"Hello World!");
    }

    function step_doesNotSetGreetingBeforeEnabled() public {
        ExampleHelloWorld example = new ExampleHelloWorld();
        address exampleAddress = address(example);
        assertRole(helloWorld.readAllowList(exampleAddress), AllowList.Role.None);
        try example.setGreeting("testing") {
            assertTrue(false, "setGreeting should fail");
        } catch {}
    }

    function step_setAndGetGreeting() public {
        ExampleHelloWorld example = new ExampleHelloWorld();
        address exampleAddress = address(example);
        assertRole(helloWorld.readAllowList(exampleAddress), AllowList.Role.None);
        helloWorld.setEnabled(exampleAddress);
        assertRole(
            helloWorld.readAllowList(exampleAddress),
            AllowList.Role.Enabled
        );
        string memory greeting = "testgreeting";
        example.setGreeting(greeting);
        assertEq(example.sayHello(), greeting);
    }
}
```

## Adding DS-Test Case

We can now trigger this test contract via `hardhat` tests. The test script uses Subnet-EVM's `test` framework in `./contracts/test`. You can find more information about the test framework [here](https://github.com/ava-labs/subnet-evm/blob/helloworld-official-tutorial-v2/contracts/test/utils.ts). We can also test the events emitted by the precompile. The test script looks like this:

```ts
// (c) 2019-2022, Ava Labs, Inc. All rights reserved.
// See the file LICENSE for licensing terms.
import { expect } from "chai";
import { SignerWithAddress } from "@nomiclabs/hardhat-ethers/signers";
import { Contract } from "ethers";
import { ethers } from "hardhat";
import { test } from "./utils";

// make sure this is always an admin for hello world precompile
const ADMIN_ADDRESS = "0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC";
const HELLO_WORLD_ADDRESS = "0x0300000000000000000000000000000000000000";

describe("ExampleHelloWorldTest", function () {
  this.timeout("30s");

  beforeEach("Setup DS-Test contract", async function () {
    const signer = await ethers.getSigner(ADMIN_ADDRESS);
    const helloWorldPromise = ethers.getContractAt(
      "IHelloWorld",
      HELLO_WORLD_ADDRESS,
      signer
    );

    return ethers
      .getContractFactory("ExampleHelloWorldTest", { signer })
      .then((factory) => factory.deploy())
      .then((contract) => {
        this.testContract = contract;
        return contract.deployed().then(() => contract);
      })
      .then(() => Promise.all([helloWorldPromise]))
      .then(([helloWorld]) => helloWorld.setAdmin(this.testContract.address))
      .then((tx) => tx.wait());
  });

  test("should gets default hello world", ["step_getDefaultHelloWorld"]);

  test(
    "should not set greeting before enabled",
    "step_doesNotSetGreetingBeforeEnabled"
  );

  test(
    "should set and get greeting with enabled account",
    "step_setAndGetGreeting"
  );
});

describe("IHelloWorld events", function () {
  let owner: SignerWithAddress;
  let contract: Contract;
  let defaultGreeting = "Hello, World!";

  before(async function () {
    owner = await ethers.getSigner(ADMIN_ADDRESS);
    contract = await ethers.getContractAt(
      "IHelloWorld",
      HELLO_WORLD_ADDRESS,
      owner
    );
    // reset greeting
    let tx = await contract.setGreeting(defaultGreeting);
    await tx.wait();
  });

  it("should emit GreetingChanged event", async function () {
    let newGreeting = "helloprecompile";
    await expect(contract.setGreeting(newGreeting))
      .to.emit(contract, "GreetingChanged")
      .withArgs(
        owner.address,
        // old greeting
        defaultGreeting,
        // new greeting
        newGreeting
      );
  });
});
```

For Precompile-EVM, the `test` helper is imported from the `@avalabs/subnet-evm-contracts` NPM package instead, and the script otherwise looks like
this: ```go // (c) 2019-2022, Ava Labs, Inc. All rights reserved. // See the file LICENSE for licensing terms. import { expect } from "chai"; import { SignerWithAddress } from "@nomiclabs/hardhat-ethers/signers"; import { Contract } from "ethers"; import { ethers } from "hardhat"; import { test } from "@avalabs/subnet-evm-contracts"; // make sure this is always an admin for hello world precompile const ADMIN_ADDRESS = "0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC"; const HELLO_WORLD_ADDRESS = "0x0300000000000000000000000000000000000000"; describe("ExampleHelloWorldTest", function () { this.timeout("30s"); beforeEach("Setup DS-Test contract", async function () { const signer = await ethers.getSigner(ADMIN_ADDRESS); const helloWorldPromise = ethers.getContractAt( "IHelloWorld", HELLO_WORLD_ADDRESS, signer ); return ethers .getContractFactory("ExampleHelloWorldTest", { signer }) .then((factory) => factory.deploy()) .then((contract) => { this.testContract = contract; return contract.deployed().then(() => contract); }) .then(() => Promise.all([helloWorldPromise])) .then(([helloWorld]) => helloWorld.setAdmin(this.testContract.address)) .then((tx) => tx.wait()); }); test("should gets default hello world", ["step_getDefaultHelloWorld"]); test( "should not set greeting before enabled", "step_doesNotSetGreetingBeforeEnabled" ); test( "should set and get greeting with enabled account", "step_setAndGetGreeting" ); }); describe("IHelloWorld events", function () { let owner: SignerWithAddress; let contract: Contract; let defaultGreeting = "Hello, World!"; before(async function () { owner = await ethers.getSigner(ADMIN_ADDRESS); contract = await ethers.getContractAt( "IHelloWorld", HELLO_WORLD_ADDRESS, owner ); // reset greeting let tx = await contract.setGreeting(defaultGreeting); await tx.wait(); }); it("should emit GreetingChanged event", async function () { let newGreeting = "helloprecompile"; await expect(contract.setGreeting(newGreeting)) .to.emit(contract, "GreetingChanged") 
.withArgs( owner.address, // old greeting defaultGreeting, // new greeting newGreeting ); }); }); ``` # Executing Test Cases (/docs/avalanche-l1s/custom-precompiles/executing-test-cases) --- title: Executing Test Cases description: In this section, we will go over how to be able to execute the test cases you wrote in the last section. --- ## Adding the Test Genesis File To run our e2e contract tests, we will need to create an Avalanche L1 that has the `Hello World` precompile activated, so we will copy and paste the below genesis file into: `/tests/precompile/genesis/hello_world.json`. Note: it's important that this has the same name as the HardHat test file we created previously. ```json { "config": { "chainId": 99999, "homesteadBlock": 0, "eip150Block": 0, "eip150Hash": "0x2086799aeebeae135c246c65021c82b4e15a2c451340993aacfd2751886514f0", "eip155Block": 0, "eip158Block": 0, "byzantiumBlock": 0, "constantinopleBlock": 0, "petersburgBlock": 0, "istanbulBlock": 0, "muirGlacierBlock": 0, "feeConfig": { "gasLimit": 20000000, "minBaseFee": 1000000000, "targetGas": 100000000, "baseFeeChangeDenominator": 48, "minBlockGasCost": 0, "maxBlockGasCost": 10000000, "targetBlockRate": 2, "blockGasCostStep": 500000 }, "helloWorldConfig": { "blockTimestamp": 0, "adminAddresses": ["0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC"] } }, "alloc": { "8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC": { "balance": "0x52B7D2DCC80CD2E4000000" }, "0x0Fa8EA536Be85F32724D57A37758761B86416123": { "balance": "0x52B7D2DCC80CD2E4000000" } }, "nonce": "0x0", "timestamp": "0x66321C34", "extraData": "0x00", "gasLimit": "0x1312D00", "difficulty": "0x0", "mixHash": "0x0000000000000000000000000000000000000000000000000000000000000000", "coinbase": "0x0000000000000000000000000000000000000000", "number": "0x0", "gasUsed": "0x0", "parentHash": "0x0000000000000000000000000000000000000000000000000000000000000000" } ``` Adding this to our genesis enables our HelloWorld precompile at the genesis block (0th block), with 
`0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC` as the admin address.

```json
{
  "helloWorldConfig": {
    "blockTimestamp": 0,
    "adminAddresses": ["0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC"]
  }
}
```

## Declaring the HardHat E2E Test

Now that we have declared the HardHat test and the corresponding `genesis.json` file, the last step to running the e2e test is to declare the new test in `/tests/precompile/solidity/suites.go`. At the bottom of the file you will see the following code commented out:

```go title="suites.go"
// ADD YOUR PRECOMPILE HERE
/*
	ginkgo.It("your precompile", ginkgo.Label("Precompile"), ginkgo.Label("YourPrecompile"), func() {
		ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
		defer cancel()

		// Specify the name shared by the genesis file in ./tests/precompile/genesis/{your_precompile}.json
		// and the test file in ./contracts/tests/{your_precompile}.ts
		blockchainID := subnetsSuite.GetBlockchainID("{your_precompile}")
		runDefaultHardhatTests(ctx, blockchainID, "{your_precompile}")
*/
```

`runDefaultHardhatTests` will run the default Hardhat test command and use the default genesis path. If you want to use a different test command or genesis path, you can use `utils.CreateSubnet` and `utils.RunTestCMD`; see how they were used with default params [here](https://github.com/ava-labs/subnet-evm/blob/helloworld-official-tutorial-v2/tests/utils/subnet.go#L113).

You should copy and paste the ginkgo `It` node and update `{your_precompile}` to `hello_world`. The string passed to `utils.RunDefaultHardhatTests(ctx, "your_precompile")` will be used to find both the HardHat test file to execute and the genesis file, which is why you need to use the same name for both.
After modifying the `It` node, it should look like the following (you can copy and paste this directly if you prefer):

```go
ginkgo.It("hello world", ginkgo.Label("Precompile"), ginkgo.Label("HelloWorld"), func() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()

	blockchainID := subnetsSuite.GetBlockchainID("hello_world")
	runDefaultHardhatTests(ctx, blockchainID, "hello_world")
})
```

Now that we've set up the new ginkgo test, we can run just the test we want by using `GINKGO_LABEL_FILTER`. This environment variable is passed as a flag to Ginkgo in `./scripts/run_ginkgo.sh` and restricts the run to tests with a matching label.

## Running E2E Tests

Before we start testing, we will need to build the AvalancheGo binary and the custom Subnet-EVM binary. Precompile-EVM bundles Subnet-EVM and runs it under the hood in [`plugin/main.go`](https://github.com/ava-labs/precompile-evm/blob/hello-world-example/plugin/main.go#L24), meaning that a Precompile-EVM binary works the same way as a Subnet-EVM binary. The Precompile-EVM repo also has the same scripts and build process as Subnet-EVM, so the following steps apply to it as well.
You should have cloned [AvalancheGo](https://github.com/ava-labs/avalanchego) within your `$GOPATH` in the [Background and Requirements](/docs/avalanche-l1s/custom-precompiles/background-requirements) section, so you can build AvalancheGo with the following command:

```bash
cd $GOPATH/src/github.com/ava-labs/avalanchego
./scripts/build.sh
```

Once you've built AvalancheGo, you can confirm that it was successful by printing the version:

```bash
./build/avalanchego --version
```

This should print something like the following (if you are running AvalancheGo v1.11.0):

```bash
avalanchego/1.11.0 [database=v1.4.5, rpcchainvm=33, commit=c60f7d2dd10c87f57382885b59d6fb2c763eded7, go=1.21.7]
```

This path will be used later as the environment variable `AVALANCHEGO_EXEC_PATH` in the network runner. Please note that the RPCChainVM version of AvalancheGo and Subnet-EVM must match.

Once we've built AvalancheGo, we can navigate back to the repo and build the binary:

```bash
cd $GOPATH/src/github.com/ava-labs/subnet-evm
./scripts/build.sh
```

This will build the Subnet-EVM binary and place it in AvalancheGo's `build/plugins` directory, by default at the file path: `$GOPATH/src/github.com/ava-labs/avalanchego/build/plugins/srEXiWaHuhNyGwPUi444Tu47ZEDwxTWrbQiuD7FmgSAQ6X7Dy`

To confirm that the Subnet-EVM binary is compatible with AvalancheGo, you can run the same version command and confirm the RPCChainVM version matches:

```bash
$GOPATH/src/github.com/ava-labs/avalanchego/build/plugins/srEXiWaHuhNyGwPUi444Tu47ZEDwxTWrbQiuD7FmgSAQ6X7Dy --version
```

This should give similar output:

```bash
Subnet-EVM/v0.6.1 [AvalancheGo=v1.11.1, rpcchainvm=33]
```

For Precompile-EVM:

```bash
cd $GOPATH/src/github.com/ava-labs/precompile-evm
./scripts/build.sh
```

This will build the Precompile-EVM binary and place it in AvalancheGo's `build/plugins` directory, by default at the file path: `$GOPATH/src/github.com/ava-labs/avalanchego/build/plugins/srEXiWaHuhNyGwPUi444Tu47ZEDwxTWrbQiuD7FmgSAQ6X7Dy`

To confirm that the
Precompile-EVM binary is compatible with AvalancheGo, you can run the same version command and confirm the RPCChainVM version matches:

```bash
$GOPATH/src/github.com/ava-labs/avalanchego/build/plugins/srEXiWaHuhNyGwPUi444Tu47ZEDwxTWrbQiuD7FmgSAQ6X7Dy --version
```

This should give similar output:

```bash
Precompile-EVM/v0.2.0 Subnet-EVM/v0.6.1 [AvalancheGo=v1.11.1, rpcchainvm=33]
```

If the RPCChainVM protocol version printed out does not match the one used in AvalancheGo, then Subnet-EVM will not be able to talk to AvalancheGo and the blockchain will not start. You can find the compatibility table for AvalancheGo and Subnet-EVM [here](https://github.com/ava-labs/subnet-evm#avalanchego-compatibility).

The `build/plugins` directory will later be used as the `AVALANCHEGO_PLUGIN_PATH`.

### Running Ginkgo Tests

To run ONLY the HelloWorld precompile test, first navigate to the repo root. For Subnet-EVM:

```bash
cd $GOPATH/src/github.com/ava-labs/subnet-evm
```

For Precompile-EVM:

```bash
cd $GOPATH/src/github.com/ava-labs/precompile-evm
```

Then use the `GINKGO_LABEL_FILTER` env var to filter the test:

```bash
GINKGO_LABEL_FILTER=HelloWorld ./scripts/run_ginkgo.sh
```

You will first see the node starting up in the `BeforeSuite` section of the precompile test:

```bash
GINKGO_LABEL_FILTER=HelloWorld ./scripts/run_ginkgo.sh
# output
Using branch: hello-world-tutorial-walkthrough
building precompile.test
# github.com/ava-labs/subnet-evm/tests/precompile.test
ld: warning: could not create compact unwind for _blst_sha256_block_data_order: does not use RBP or RSP based frame
Compiled precompile.test
# github.com/ava-labs/subnet-evm/tests/load.test
ld: warning: could not create compact unwind for _blst_sha256_block_data_order: does not use RBP or RSP based frame
Compiled load.test
Running Suite: subnet-evm precompile ginkgo test suite - /Users/avalabs/go/src/github.com/ava-labs/subnet-evm
===================================================================================================================
Random Seed: 1674833631
Will run 1
of 7 specs
------------------------------
[BeforeSuite]
/Users/avalabs/go/src/github.com/ava-labs/subnet-evm/tests/precompile/precompile_test.go:31
> Enter [BeforeSuite] TOP-LEVEL - /Users/avalabs/go/src/github.com/ava-labs/subnet-evm/tests/precompile/precompile_test.go:31 @ 01/27/23 10:33:51.001
INFO [01-27|10:33:51.002] Starting AvalancheGo node wd=/Users/avalabs/go/src/github.com/ava-labs/subnet-evm
INFO [01-27|10:33:51.002] Executing cmd="./scripts/run.sh "
[streaming output]
Using branch: hello-world-tutorial-walkthrough
...
[BeforeSuite] PASSED [15.002 seconds]
```

After the `BeforeSuite` completes successfully, it will skip all but the `HelloWorld`-labeled precompile test:

```bash
S [SKIPPED] [Precompiles]
/Users/avalabs/go/src/github.com/ava-labs/subnet-evm/tests/precompile/solidity/suites.go:26
  contract native minter [Precompile, ContractNativeMinter]
  /Users/avalabs/go/src/github.com/ava-labs/subnet-evm/tests/precompile/solidity/suites.go:29
------------------------------
S [SKIPPED] [Precompiles]
/Users/avalabs/go/src/github.com/ava-labs/subnet-evm/tests/precompile/solidity/suites.go:26
  tx allow list [Precompile, TxAllowList]
  /Users/avalabs/go/src/github.com/ava-labs/subnet-evm/tests/precompile/solidity/suites.go:36
------------------------------
...
Combined output:

Compiling 2 files with 0.8.0
Compilation finished successfully

  ExampleHelloWorldTest
    ✓ should gets default hello world (4057ms)
    ✓ should not set greeting before enabled (4067ms)
    ✓ should set and get greeting with enabled account (4074ms)

  3 passing (33s)

< Exit [It] hello world - /Users/avalabs/go/src/github.com/ava-labs/subnet-evm/tests/precompile/solidity/suites.go:64 @ 01/27/23 10:34:17.484 (11.48s)
• [11.480 seconds]
------------------------------
```

Finally, you will see the load test being skipped as well:

```bash
Running Suite: subnet-evm small load simulator test suite - /Users/avalabs/go/src/github.com/ava-labs/subnet-evm
======================================================================================================================
Random Seed: 1674833658
Will run 0 of 1 specs
S [SKIPPED] [Load Simulator]
/Users/avalabs/go/src/github.com/ava-labs/subnet-evm/tests/load/load_test.go:49
  basic subnet load test [load]
  /Users/avalabs/go/src/github.com/ava-labs/subnet-evm/tests/load/load_test.go:50
------------------------------

Ran 0 of 1 Specs in 0.000 seconds
SUCCESS! -- 0 Passed | 0 Failed | 0 Pending | 1 Skipped
PASS
```

The tests are passing! If your tests failed, retrace your steps. Most likely, the precompile was not enabled or some code is missing. Try running `npm install` in the contracts directory to ensure that hardhat and other packages are installed. You can also double-check your work against the [official tutorial implementation](https://github.com/ava-labs/subnet-evm/tree/helloworld-official-tutorial-v2).

# Custom Precompiles (/docs/avalanche-l1s/custom-precompiles)

---
title: Custom Precompiles
description: In this tutorial, we are going to walk through how we can generate a stateful precompile from scratch. Before we start, let's brush up on what a precompile is, what a stateful precompile is, and why this is extremely useful.
---

## Background

### Precompiled Contracts

Ethereum uses precompiles to efficiently implement cryptographic primitives within the EVM instead of re-implementing the same primitives in Solidity. The following precompiles are currently included: ecrecover, sha256, blake2f, ripemd-160, Bn256Add, Bn256Mul, Bn256Pairing, the identity function, and modular exponentiation.

We can see the [precompile](https://github.com/ethereum/go-ethereum/blob/v1.11.1/core/vm/contracts.go#L82) mapping from address to implementation in the Ethereum VM:

```go
// PrecompiledContractsBerlin contains the default set of pre-compiled Ethereum
// contracts used in the Berlin release.
var PrecompiledContractsBerlin = map[common.Address]PrecompiledContract{
	common.BytesToAddress([]byte{1}): &ecrecover{},
	common.BytesToAddress([]byte{2}): &sha256hash{},
	common.BytesToAddress([]byte{3}): &ripemd160hash{},
	common.BytesToAddress([]byte{4}): &dataCopy{},
	common.BytesToAddress([]byte{5}): &bigModExp{eip2565: true},
	common.BytesToAddress([]byte{6}): &bn256AddIstanbul{},
	common.BytesToAddress([]byte{7}): &bn256ScalarMulIstanbul{},
	common.BytesToAddress([]byte{8}): &bn256PairingIstanbul{},
	common.BytesToAddress([]byte{9}): &blake2F{},
}
```

These precompile addresses start from `0x0000000000000000000000000000000000000001` and increment by 1. A [precompile](https://github.com/ava-labs/subnet-evm/blob/helloworld-official-tutorial-v2/core/vm/contracts.go#L54-L57) follows this interface:

```go
// PrecompiledContract is the basic interface for native Go contracts. The implementation
// requires a deterministic gas count based on the input size of the Run method of the
// contract.
type PrecompiledContract interface {
	RequiredGas(input []byte) uint64 // RequiredPrice calculates the contract gas use
	Run(input []byte) ([]byte, error) // Run runs the precompiled contract
}
```

Here is an example of the [sha256 precompile](https://github.com/ava-labs/subnet-evm/blob/helloworld-official-tutorial-v2/core/vm/contracts.go#L237-L250) function:

```go
type sha256hash struct{}

// RequiredGas returns the gas required to execute the pre-compiled contract.
//
// This method does not require any overflow checking as the input size gas costs
// required for anything significant is so high it's impossible to pay for.
func (c *sha256hash) RequiredGas(input []byte) uint64 {
	return uint64(len(input)+31)/32*params.Sha256PerWordGas + params.Sha256BaseGas
}

func (c *sha256hash) Run(input []byte) ([]byte, error) {
	h := sha256.Sum256(input)
	return h[:], nil
}
```

The call opcodes (CALL, STATICCALL, DELEGATECALL, and CALLCODE) allow us to invoke this precompile. The function signature of `Call` in the EVM is as follows:

```go
Call(
	caller ContractRef,
	addr common.Address,
	input []byte,
	gas uint64,
	value *big.Int,
) (ret []byte, leftOverGas uint64, err error)
```

Precompiles are a shortcut to execute a function implemented by the EVM itself, rather than an actual contract. A precompile is associated with a fixed address defined in the EVM, and there is no byte code associated with that address. When a precompile is called, the EVM checks whether the input address is a precompile address, and if so, it executes the precompile. Otherwise, it loads the smart contract at the input address and runs it on the EVM interpreter with the specified input data.

### Stateful Precompiled Contracts

A stateful precompile builds on a precompile in that it adds state access.
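To make "state access" concrete before looking at the real interface, here is a self-contained toy sketch of a precompile that stores and returns a greeting, mirroring the HelloWorld example this tutorial builds. All names here (`toyStateDB`, `greeterPrecompile`, the gas constant) are invented for illustration and are not the Subnet-EVM API:

```go
package main

import (
	"errors"
	"fmt"
)

// toyStateDB stands in for the EVM's key/value storage. In a real stateful
// precompile, reads and writes go through state injected by the EVM.
type toyStateDB map[string][]byte

// greeterPrecompile is a toy stateful precompile: it stores and returns a
// greeting, charging an illustrative flat gas cost per call.
type greeterPrecompile struct{}

const toyGasCost = 10 // illustrative, not a real gas schedule

// run mirrors the shape of a stateful precompile's Run method: it receives
// state, the raw input, the supplied gas, and a readOnly flag, and returns
// output, remaining gas, and an error.
func (g greeterPrecompile) run(state toyStateDB, input []byte, suppliedGas uint64, readOnly bool) ([]byte, uint64, error) {
	if suppliedGas < toyGasCost {
		return nil, 0, errors.New("out of gas")
	}
	remaining := suppliedGas - toyGasCost
	if len(input) == 0 {
		// No input: read the stored greeting.
		return state["greeting"], remaining, nil
	}
	// Non-empty input: write a new greeting, unless the call is read-only.
	if readOnly {
		return nil, remaining, errors.New("write attempted in read-only call")
	}
	state["greeting"] = input
	return nil, remaining, nil
}

func main() {
	state := toyStateDB{}
	g := greeterPrecompile{}
	g.run(state, []byte("Hello World!"), 100, false) // set the greeting
	out, gasLeft, _ := g.run(state, nil, 100, false) // read it back
	fmt.Printf("%s (gas left: %d)\n", out, gasLeft)  // prints: Hello World! (gas left: 90)
}
```

The real interface differs in its types, but the shape is the same: gas is metered out of `suppliedGas`, and writes are rejected when the call is read-only.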
Stateful precompiles are not available in the default EVM, and are specific to Avalanche EVMs such as [Coreth](https://github.com/ava-labs/avalanchego/tree/master/graft/coreth) and [Subnet-EVM](https://github.com/ava-labs/subnet-evm). A stateful precompile follows this [interface](https://github.com/ava-labs/subnet-evm/blob/helloworld-official-tutorial-v2/precompile/contract/interfaces.go#L17-L20):

```go
// StatefulPrecompiledContract is the interface for executing a precompiled contract
type StatefulPrecompiledContract interface {
	// Run executes the precompiled contract.
	Run(accessibleState PrecompileAccessibleState,
		caller common.Address,
		addr common.Address,
		input []byte,
		suppliedGas uint64,
		readOnly bool) (ret []byte, remainingGas uint64, err error)
}
```

A stateful precompile injects state access through the `PrecompileAccessibleState` interface, which provides access to the EVM state, including the ability to modify balances and read/write storage. This way, stateful precompiles allow even more customization of the EVM than the original precompile interface!

### AllowList

The AllowList enables a precompile to enforce permissions on addresses. The AllowList is not a contract itself, but a helper structure that provides a control mechanism for the contracts that wrap it. It provides an `AllowListConfig` to the precompile so that it can take an initial configuration from genesis/upgrade. It also provides functions to set and read the permissions. In this tutorial, we use the `IAllowList` interface to provide permission control for the `HelloWorld` precompile. `IAllowList` is defined in Subnet-EVM under [`./contracts/contracts/interfaces/IAllowList.sol`](https://github.com/ava-labs/subnet-evm/blob/helloworld-official-tutorial-v2/contracts/contracts/interfaces/IAllowList.sol).
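Before looking at the Solidity definition, the permission model can be sketched in Go. The role names below follow the interface (None, Enabled, Manager, Admin), but treat the exact numbering and rules as an illustrative assumption rather than the canonical Subnet-EVM encoding:

```go
package main

import "fmt"

// role is an illustrative allow-list role value; the numbering here is an
// assumption for the sketch, not the canonical on-chain encoding.
type role uint

const (
	roleNone role = iota
	roleEnabled
	roleAdmin
	roleManager
)

// canSet sketches allow-list permission checks: admins may grant or revoke
// any role, managers may only toggle accounts between Enabled and None, and
// merely-enabled (or unlisted) accounts may not change roles at all.
func canSet(sender role, target role) bool {
	switch sender {
	case roleAdmin:
		return true
	case roleManager:
		return target == roleEnabled || target == roleNone
	default:
		return false
	}
}

func main() {
	fmt.Println(canSet(roleAdmin, roleManager))   // admins can assign any role
	fmt.Println(canSet(roleManager, roleEnabled)) // managers can enable accounts
	fmt.Println(canSet(roleEnabled, roleEnabled)) // enabled accounts cannot modify roles
}
```

A check like this is what the wrapping precompile performs before honoring a `setAdmin`/`setEnabled`/`setManager`/`setNone` call.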
The interface is as follows:

```solidity
//SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

interface IAllowList {
  event RoleSet(
    uint256 indexed role,
    address indexed account,
    address indexed sender,
    uint256 oldRole
  );

  // Set [addr] to have the admin role over the precompile contract.
  function setAdmin(address addr) external;

  // Set [addr] to be enabled on the precompile contract.
  function setEnabled(address addr) external;

  // Set [addr] to have the manager role over the precompile contract.
  function setManager(address addr) external;

  // Set [addr] to have no role for the precompile contract.
  function setNone(address addr) external;

  // Read the status of [addr].
  function readAllowList(address addr) external view returns (uint256 role);
}
```

You can find more information about the AllowList interface [here](/docs/avalanche-l1s/evm-configuration/customize-avalanche-l1#allowlist-interface).

# Deploying Your Precompile (/docs/avalanche-l1s/custom-precompiles/precompile-deployment)

---
title: Deploying Your Precompile
description: Now that we have defined our precompile, let's deploy it to a local network.
---

We made it! Everything works in our Ginkgo tests, and now we want to spin up a local network with the Hello World precompile activated.

Start the server in a new terminal tab using avalanche-network-runner. Please check out [this link](/docs/tooling/avalanche-cli) for more information on Avalanche Network Runner, how to download it, and how to use it. The server will be in "listening" mode, waiting for API calls.
We will start the server from the Subnet-EVM directory so that we can use a relative file path to the genesis JSON file:

```bash
cd $GOPATH/src/github.com/ava-labs/subnet-evm
```

Or, if you are using Precompile-EVM:

```bash
cd $GOPATH/src/github.com/ava-labs/precompile-evm
```

Then run the Avalanche Network Runner server:

```bash
avalanche-network-runner server \
--log-level debug \
--port=":8080" \
--grpc-gateway-port=":8081"
```

Since we already compiled AvalancheGo and Subnet-EVM/Precompile-EVM in a previous step, we should have both binaries ready to go. We can now set the following paths: `AVALANCHEGO_EXEC_PATH` points to the AvalancheGo binary we just built, and `AVALANCHEGO_PLUGIN_PATH` points to the plugins directory containing the Subnet-EVM binary we just built:

```bash
export AVALANCHEGO_EXEC_PATH="${GOPATH}/src/github.com/ava-labs/avalanchego/build/avalanchego"
export AVALANCHEGO_PLUGIN_PATH="${GOPATH}/src/github.com/ava-labs/avalanchego/build/plugins"
```

The following command will issue requests to the server we just spun up. We can use avalanche-network-runner to spin up some nodes that run the latest version of Subnet-EVM:

```bash
avalanche-network-runner control start \
--log-level debug \
--endpoint="0.0.0.0:8080" \
--number-of-nodes=5 \
--avalanchego-path ${AVALANCHEGO_EXEC_PATH} \
--plugin-dir ${AVALANCHEGO_PLUGIN_PATH} \
--blockchain-specs '[{"vm_name": "subnetevm", "genesis": "./tests/precompile/genesis/hello_world.json"}]'
```

We can look at the server terminal tab and see it booting up the local network.
If the network startup is successful, you should see something like this:

```bash
[blockchain RPC for "srEXiWaHuhNyGwPUi444Tu47ZEDwxTWrbQiuD7FmgSAQ6X7Dy"] "http://127.0.0.1:9650/ext/bc/2jDWMrF9yKK8gZfJaaaSfACKeMasiNgHmuZip5mWxUfhKaYoEU"
[blockchain RPC for "srEXiWaHuhNyGwPUi444Tu47ZEDwxTWrbQiuD7FmgSAQ6X7Dy"] "http://127.0.0.1:9652/ext/bc/2jDWMrF9yKK8gZfJaaaSfACKeMasiNgHmuZip5mWxUfhKaYoEU"
[blockchain RPC for "srEXiWaHuhNyGwPUi444Tu47ZEDwxTWrbQiuD7FmgSAQ6X7Dy"] "http://127.0.0.1:9654/ext/bc/2jDWMrF9yKK8gZfJaaaSfACKeMasiNgHmuZip5mWxUfhKaYoEU"
[blockchain RPC for "srEXiWaHuhNyGwPUi444Tu47ZEDwxTWrbQiuD7FmgSAQ6X7Dy"] "http://127.0.0.1:9656/ext/bc/2jDWMrF9yKK8gZfJaaaSfACKeMasiNgHmuZip5mWxUfhKaYoEU"
[blockchain RPC for "srEXiWaHuhNyGwPUi444Tu47ZEDwxTWrbQiuD7FmgSAQ6X7Dy"] "http://127.0.0.1:9658/ext/bc/2jDWMrF9yKK8gZfJaaaSfACKeMasiNgHmuZip5mWxUfhKaYoEU"
```

Each line is the endpoint on an AvalancheGo API server that is specific to this Subnet-EVM blockchain instance. To interact with it, append the `/rpc` extension, which exposes the standard Ethereum API calls. For example, you can use the RPC URL: `http://127.0.0.1:9650/ext/bc/2jDWMrF9yKK8gZfJaaaSfACKeMasiNgHmuZip5mWxUfhKaYoEU/rpc`

## Maintenance

You should always keep your fork up to date with the latest changes in the official Subnet-EVM repo. If you have forked the Subnet-EVM repo, there could be conflicts, and you may need to resolve them manually. If you used Precompile-EVM, you can update your repo by bumping the Subnet-EVM version in [`go.mod`](https://github.com/ava-labs/precompile-evm/blob/hello-world-example/go.mod#L7) and [`versions.sh`](https://github.com/ava-labs/precompile-evm/blob/hello-world-example/scripts/versions.sh#L4).

## Conclusion

We have now created a stateful precompile from scratch with the precompile generation tool. We hope you had fun and learned a little more about the Subnet-EVM.
Now that you have created a simple stateful precompile, we urge you to create one of your own. If you have an idea for a stateful precompile that may be useful to the community, feel free to fork [Subnet-EVM](https://github.com/ava-labs/subnet-evm) and create a pull request.

# CLI Reference (/docs/avalanche-l1s/deploy-a-avalanche-l1/cli_structure)

---
title: CLI Reference
description: Reference for the avalanche blockchain command suite, its subcommands, and flags.
---

## avalanche blockchain

The blockchain command suite provides a collection of tools for developing and deploying Blockchains.

To get started, use the `blockchain create` command wizard to walk through the configuration of your very first Blockchain. Then, go ahead and deploy it with the `blockchain deploy` command. You can use the rest of the commands to manage your Blockchain configurations and live deployments.

**Usage:**

```bash
avalanche blockchain [subcommand] [flags]
```

**Subcommands:**

- [`addValidator`](#avalanche-blockchain-addvalidator): The blockchain addValidator command adds a node as a validator to an L1 of the user provided deployed network. If the network is proof of authority, the owner of the validator manager contract must sign the transaction. If the network is proof of stake, the node must stake the L1's staking token. Both processes will issue a RegisterL1ValidatorTx on the P-Chain. This command currently only works on Blockchains deployed to either the Fuji Testnet or Mainnet.
- [`changeOwner`](#avalanche-blockchain-changeowner): The blockchain changeOwner command changes the owner of the subnet of the deployed Blockchain.
- [`changeWeight`](#avalanche-blockchain-changeweight): The blockchain changeWeight command changes the weight of a Subnet Validator. The Subnet has to be a Proof of Authority Subnet-Only Validator Subnet.
- [`configure`](#avalanche-blockchain-configure): AvalancheGo nodes support several different configuration files. Subnets have their own Subnet config which applies to all chains/VMs in the Subnet.
Each chain within the Subnet can have its own chain config. A chain can also have special requirements for the AvalancheGo node configuration itself. This command allows you to set all those files. - [`create`](#avalanche-blockchain-create): The blockchain create command builds a new genesis file to configure your Blockchain. By default, the command runs an interactive wizard. It walks you through all the steps you need to create your first Blockchain. The tool supports deploying Subnet-EVM, and custom VMs. You can create a custom, user-generated genesis with a custom VM by providing the path to your genesis and VM binaries with the --genesis and --vm flags. By default, running the command with a blockchainName that already exists causes the command to fail. If you'd like to overwrite an existing configuration, pass the -f flag. - [`delete`](#avalanche-blockchain-delete): The blockchain delete command deletes an existing blockchain configuration. - [`deploy`](#avalanche-blockchain-deploy): The blockchain deploy command deploys your Blockchain configuration locally, to Fuji Testnet, or to Mainnet. At the end of the call, the command prints the RPC URL you can use to interact with the Subnet. Avalanche-CLI only supports deploying an individual Blockchain once per network. Subsequent attempts to deploy the same Blockchain to the same network (local, Fuji, Mainnet) aren't allowed. If you'd like to redeploy a Blockchain locally for testing, you must first call avalanche network clean to reset all deployed chain state. Subsequent local deploys redeploy the chain with fresh state. You can deploy the same Blockchain to multiple networks, so you can take your locally tested Subnet and deploy it on Fuji or Mainnet. - [`describe`](#avalanche-blockchain-describe): The blockchain describe command prints the details of a Blockchain configuration to the console. By default, the command prints a summary of the configuration. 
By providing the --genesis flag, the command instead prints out the raw genesis file.
- [`export`](#avalanche-blockchain-export): The blockchain export command writes the details of an existing Blockchain deploy to a file. The command prompts for an output path. You can also provide one with the --output flag.
- [`import`](#avalanche-blockchain-import): Import blockchain configurations into avalanche-cli. This command suite supports importing from a file created on another computer, or importing from blockchains running on public networks (e.g. created manually or with the deprecated subnet-cli).
- [`join`](#avalanche-blockchain-join): The blockchain join command configures your validator node to begin validating a new Blockchain. To complete this process, you must have access to the machine running your validator. If the CLI is running on the same machine as your validator, it can generate or update your node's config file automatically. Alternatively, the command can print the necessary instructions to update your node manually. To complete the validation process, the Subnet's admins must add the NodeID of your validator to the Subnet's allow list by calling addValidator with your NodeID. After you update your validator's config, you need to restart your validator manually. If you provide the --avalanchego-config flag, this command attempts to edit the config file at that path. This command currently only supports Blockchains deployed on the Fuji Testnet and Mainnet.
- [`list`](#avalanche-blockchain-list): The blockchain list command prints the names of all created Blockchain configurations. Without any flags, it prints some general, static information about the Blockchain. With the --deployed flag, the command shows additional information including the VMID, BlockchainID and SubnetID.
- [`publish`](#avalanche-blockchain-publish): The blockchain publish command publishes the Blockchain's VM to a repository.
- [`removeValidator`](#avalanche-blockchain-removevalidator): The blockchain removeValidator command stops a whitelisted Subnet validator from validating your deployed Blockchain. To remove the validator from the Subnet's allow list, provide the validator's unique NodeID. You can bypass these prompts by providing the values with flags.
- [`stats`](#avalanche-blockchain-stats): The blockchain stats command prints validator statistics for the given Blockchain.
- [`upgrade`](#avalanche-blockchain-upgrade): The blockchain upgrade command suite provides a collection of tools for updating your developmental and deployed Blockchains.
- [`validators`](#avalanche-blockchain-validators): The blockchain validators command lists the validators of a blockchain's subnet and provides several statistics about them.
- [`vmid`](#avalanche-blockchain-vmid): The blockchain vmid command prints the virtual machine ID (VMID) for the given Blockchain.

**Flags:**

```bash
-h, --help                 help for blockchain
    --config string        config file (default is $HOME/.avalanche-cli/config.json)
    --log-level string     log level for the application (default "ERROR")
    --skip-update-check    skip check for new versions
```

### addValidator

The blockchain addValidator command adds a node as a validator to an L1 of the user provided deployed network. If the network is proof of authority, the owner of the validator manager contract must sign the transaction. If the network is proof of stake, the node must stake the L1's staking token. Both processes will issue a RegisterL1ValidatorTx on the P-Chain.

This command currently only works on Blockchains deployed to either the Fuji Testnet or Mainnet.
**Usage:**

```bash
avalanche blockchain addValidator [subcommand] [flags]
```

**Flags:**

```bash
    --aggregator-allow-private-peers    allow the signature aggregator to connect to peers with private IP (default true)
    --aggregator-extra-endpoints strings    endpoints for extra nodes that are needed in signature aggregation
    --aggregator-log-level string       log level to use with signature aggregator (default "Off")
    --balance uint                      set the AVAX balance of the validator that will be used for continuous fee on P-Chain
    --blockchain-genesis-key            use genesis allocated key to pay fees for completing the validator's registration (blockchain gas token)
    --blockchain-key string             CLI stored key to use to pay fees for completing the validator's registration (blockchain gas token)
    --blockchain-private-key string     private key to use to pay fees for completing the validator's registration (blockchain gas token)
    --bls-proof-of-possession string    set the BLS proof of possession of the validator to add
    --bls-public-key string             set the BLS public key of the validator to add
    --cluster string                    operate on the given cluster
    --create-local-validator            create additional local validator and add it to existing running local node
    --default-duration                  (for Subnets, not L1s) set duration so as to validate until primary validator ends its period
    --default-start-time                (for Subnets, not L1s) use default start time for subnet validator (5 minutes later for fuji & mainnet, 30 seconds later for devnet)
    --default-validator-params          (for Subnets, not L1s) use default weight/start/duration params for subnet validator
    --delegation-fee uint16             (PoS only) delegation fee (in bips) (default 100)
    --devnet                            operate on a devnet network
    --disable-owner string              P-Chain address that will be able to disable the validator with a P-Chain transaction
    --endpoint string                   use the given endpoint for network operations
-e, --ewoq                              use ewoq key [fuji/devnet only]
-f, --fuji testnet                      operate on fuji (alias to testnet)
-h, --help                              help for addValidator
-k, --key string                        select the key to use [fuji/devnet only]
-g, --ledger                            use ledger instead of key (always true on mainnet, defaults to false on fuji/devnet)
    --ledger-addrs strings              use the given ledger addresses
-l, --local                             operate on a local network
-m, --mainnet                           operate on mainnet
    --node-endpoint string              gather node id/bls from publicly available avalanchego apis on the given endpoint
    --node-id string                    node-id of the validator to add
    --output-tx-path string             (for Subnets, not L1s) file path of the add validator tx
    --partial-sync                      set primary network partial sync for new validators (default true)
    --remaining-balance-owner string    P-Chain address that will receive any leftover AVAX from the validator when it is removed from Subnet
    --rpc string                        connect to validator manager at the given rpc endpoint
    --stake-amount uint                 (PoS only) amount of tokens to stake
    --staking-period duration           how long this validator will be staking
    --start-time string                 (for Subnets, not L1s) UTC start time when this validator starts validating, in 'YYYY-MM-DD HH:MM:SS' format
    --subnet-auth-keys strings          (for Subnets, not L1s) control keys that will be used to authenticate add validator tx
-t, --testnet fuji                      operate on testnet (alias to fuji)
    --wait-for-tx-acceptance            (for Subnets, not L1s) just issue the add validator tx, without waiting for its acceptance (default true)
    --weight uint                       set the staking weight of the validator to add (default 20)
    --config string                     config file (default is $HOME/.avalanche-cli/config.json)
    --log-level string                  log level for the application (default "ERROR")
    --skip-update-check                 skip check for new versions
```

### changeOwner

The blockchain changeOwner command changes the owner of the subnet of the deployed Blockchain.
**Usage:**

```bash
avalanche blockchain changeOwner [subcommand] [flags]
```

**Flags:**

```bash
    --cluster string                operate on the given cluster
    --control-keys strings          addresses that may make subnet changes
    --devnet                        operate on a devnet network
    --endpoint string               use the given endpoint for network operations
-e, --ewoq                          use ewoq key [fuji/devnet]
-f, --fuji testnet                  operate on fuji (alias to testnet)
-h, --help                          help for changeOwner
-k, --key string                    select the key to use [fuji/devnet]
-g, --ledger                        use ledger instead of key (always true on mainnet, defaults to false on fuji/devnet)
    --ledger-addrs strings          use the given ledger addresses
-l, --local                         operate on a local network
-m, --mainnet                       operate on mainnet
    --output-tx-path string         file path of the transfer subnet ownership tx
-s, --same-control-key              use the fee-paying key as control key
    --subnet-auth-keys strings      control keys that will be used to authenticate transfer subnet ownership tx
-t, --testnet fuji                  operate on testnet (alias to fuji)
    --threshold uint32              required number of control key signatures to make subnet changes
    --config string                 config file (default is $HOME/.avalanche-cli/config.json)
    --log-level string              log level for the application (default "ERROR")
    --skip-update-check             skip check for new versions
```

### changeWeight

The blockchain changeWeight command changes the weight of a Subnet Validator.

The Subnet has to be a Proof of Authority Subnet-Only Validator Subnet.
**Usage:**

```bash
avalanche blockchain changeWeight [subcommand] [flags]
```

**Flags:**

```bash
    --cluster string            operate on the given cluster
    --devnet                    operate on a devnet network
    --endpoint string           use the given endpoint for network operations
-e, --ewoq                      use ewoq key [fuji/devnet only]
-f, --fuji testnet              operate on fuji (alias to testnet)
-h, --help                      help for changeWeight
-k, --key string                select the key to use [fuji/devnet only]
-g, --ledger                    use ledger instead of key (always true on mainnet, defaults to false on fuji/devnet)
    --ledger-addrs strings      use the given ledger addresses
-l, --local                     operate on a local network
-m, --mainnet                   operate on mainnet
    --node-id string            node-id of the validator
-t, --testnet fuji              operate on testnet (alias to fuji)
    --weight uint               set the new staking weight of the validator (default 20)
    --config string             config file (default is $HOME/.avalanche-cli/config.json)
    --log-level string          log level for the application (default "ERROR")
    --skip-update-check         skip check for new versions
```

### configure

AvalancheGo nodes support several different configuration files. Subnets have their own Subnet config which applies to all chains/VMs in the Subnet. Each chain within the Subnet can have its own chain config. A chain can also have special requirements for the AvalancheGo node configuration itself. This command allows you to set all those files.
**Usage:**

```bash
avalanche blockchain configure [subcommand] [flags]
```

**Flags:**

```bash
    --chain-config string               path to the chain configuration
-h, --help                              help for configure
    --node-config string                path to avalanchego node configuration
    --per-node-chain-config string      path to per node chain configuration for local network
    --subnet-config string              path to the subnet configuration
    --config string                     config file (default is $HOME/.avalanche-cli/config.json)
    --log-level string                  log level for the application (default "ERROR")
    --skip-update-check                 skip check for new versions
```

### create

The blockchain create command builds a new genesis file to configure your Blockchain. By default, the command runs an interactive wizard. It walks you through all the steps you need to create your first Blockchain.

The tool supports deploying Subnet-EVM, and custom VMs. You can create a custom, user-generated genesis with a custom VM by providing the path to your genesis and VM binaries with the --genesis and --vm flags.

By default, running the command with a blockchainName that already exists causes the command to fail. If you'd like to overwrite an existing configuration, pass the -f flag.
**Usage:**

```bash
avalanche blockchain create [subcommand] [flags]
```

**Flags:**

```bash
    --custom                            use a custom VM template
    --custom-vm-branch string           custom vm branch or commit
    --custom-vm-build-script string     custom vm build-script
    --custom-vm-path string             file path of custom vm to use
    --custom-vm-repo-url string         custom vm repository url
    --debug                             enable blockchain debugging (default true)
    --evm                               use the Subnet-EVM as the base template
    --evm-chain-id uint                 chain ID to use with Subnet-EVM
    --evm-defaults                      deprecation notice: use '--production-defaults'
    --evm-token string                  token symbol to use with Subnet-EVM
    --external-gas-token                use a gas token from another blockchain
-f, --force                             overwrite the existing configuration if one exists
    --from-github-repo                  generate custom VM binary from github repository
    --genesis string                    file path of genesis to use
-h, --help                              help for create
    --icm                               interoperate with other blockchains using ICM
    --icm-registry-at-genesis           setup ICM registry smart contract on genesis [experimental]
    --latest                            use latest Subnet-EVM released version, takes precedence over --vm-version
    --pre-release                       use latest Subnet-EVM pre-released version, takes precedence over --vm-version
    --production-defaults               use default production settings for your blockchain
    --proof-of-authority                use proof of authority (PoA) for validator management
    --proof-of-stake                    use proof of stake (PoS) for validator management
    --proxy-contract-owner string       EVM address that controls ProxyAdmin for TransparentProxy of ValidatorManager contract
    --reward-basis-points uint          (PoS only) reward basis points for PoS Reward Calculator (default 100)
    --sovereign                         set to false if creating non-sovereign blockchain (default true)
    --teleporter                        interoperate with other blockchains using ICM
    --test-defaults                     use default test settings for your blockchain
    --validator-manager-owner string    EVM address that controls Validator Manager Owner
    --vm string                         file path of custom vm to use; alias to custom-vm-path
    --vm-version string                 version of Subnet-EVM template to use
    --warp                              generate a vm with warp support (needed for ICM) (default true)
    --config string                     config file (default is $HOME/.avalanche-cli/config.json)
    --log-level string                  log level for the application (default "ERROR")
    --skip-update-check                 skip check for new versions
```

### delete

The blockchain delete command deletes an existing blockchain configuration.

**Usage:**

```bash
avalanche blockchain delete [subcommand] [flags]
```

**Flags:**

```bash
-h, --help                 help for delete
    --config string        config file (default is $HOME/.avalanche-cli/config.json)
    --log-level string     log level for the application (default "ERROR")
    --skip-update-check    skip check for new versions
```

### deploy

The blockchain deploy command deploys your Blockchain configuration locally, to Fuji Testnet, or to Mainnet. At the end of the call, the command prints the RPC URL you can use to interact with the Subnet.

Avalanche-CLI only supports deploying an individual Blockchain once per network. Subsequent attempts to deploy the same Blockchain to the same network (local, Fuji, Mainnet) aren't allowed. If you'd like to redeploy a Blockchain locally for testing, you must first call avalanche network clean to reset all deployed chain state. Subsequent local deploys redeploy the chain with fresh state.

You can deploy the same Blockchain to multiple networks, so you can take your locally tested Subnet and deploy it on Fuji or Mainnet.
**Usage:** ```bash avalanche blockchain deploy [subcommand] [flags] ``` **Flags:** ```bash --aggregator-allow-private-peers allow the signature aggregator to connect to peers with private IP (default true) --aggregator-extra-endpoints strings endpoints for extra nodes that are needed in signature aggregation --aggregator-log-level string log level to use with signature aggregator (default "Off") --avalanchego-path string use this avalanchego binary path --avalanchego-version string use this version of avalanchego (ex: v1.17.12) (default "latest-prerelease") --balance float set the AVAX balance of each bootstrap validator that will be used for continuous fee on P-Chain (default 0.1) --blockchain-genesis-key use genesis allocated key to fund validator manager initialization --blockchain-key string CLI stored key to use to fund validator manager initialization --blockchain-private-key string private key to use to fund validator manager initialization --bootstrap-endpoints strings take validator node info from the given endpoints --bootstrap-filepath string JSON file path that provides details about bootstrap validators, leave Node-ID and BLS values empty if using --generate-node-id=true --cchain-funding-key string key to be used to fund relayer account on cchain --cchain-icm-key string key to be used to pay for ICM deploys on C-Chain --change-owner-address string address that will receive change if node is no longer L1 validator --cluster string operate on the given cluster --control-keys strings addresses that may make subnet changes --convert-only avoid node track, restart and poa manager setup --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -e, --ewoq use ewoq key [fuji/devnet deploy only] -f, --fuji testnet operate on fuji (alias to testnet --generate-node-id whether to create new node id for bootstrap validators (Node-ID and BLS values in bootstrap JSON file will be overridden if --bootstrap-filepath flag is 
used) -h, --help help for deploy --icm-key string key to be used to pay for ICM deploys (default "cli-teleporter-deployer") --icm-version string ICM version to deploy (default "latest") -k, --key string select the key to use [fuji/devnet deploy only] -g, --ledger use ledger instead of key (always true on mainnet, defaults to false on fuji/devnet) --ledger-addrs strings use the given ledger addresses -l, --local operate on a local network -m, --mainnet operate on mainnet --mainnet-chain-id uint32 use different ChainID for mainnet deployment --noicm skip automatic ICM deploy --num-bootstrap-validators int (only if --generate-node-id is true) number of bootstrap validators to set up in sovereign L1 validator --num-local-nodes int number of nodes to be created on local machine --num-nodes uint32 number of nodes to be created on local network deploy (default 2) --output-tx-path string file path of the blockchain creation tx --partial-sync set primary network partial sync for new validators (default true) --pos-maximum-stake-amount uint maximum stake amount (default 1000) --pos-maximum-stake-multiplier uint8 maximum stake multiplier (default 1) --pos-minimum-delegation-fee uint16 minimum delegation fee (default 1) --pos-minimum-stake-amount uint minimum stake amount (default 1) --pos-minimum-stake-duration uint minimum stake duration (default 100) --pos-weight-to-value-factor uint weight to value factor (default 1) --relay-cchain relay C-Chain as source and destination (default true) --relayer-allow-private-ips allow relayer to connect to private ips (default true) --relayer-amount float automatically fund relayer fee payments with the given amount --relayer-key string key to be used by default both for rewards and to pay fees --relayer-log-level string log level to be used for relayer logs (default "info") --relayer-path string relayer binary to use --relayer-version string relayer version to deploy (default "latest-prerelease") -s, --same-control-key use the fee-paying
key as control key --skip-icm-deploy skip automatic ICM deploy --skip-local-teleporter skip automatic ICM deploy on local networks [to be deprecated] --skip-relayer skip relayer deploy --skip-teleporter-deploy skip automatic ICM deploy --subnet-auth-keys strings control keys that will be used to authenticate chain creation -u, --subnet-id string do not create a subnet, deploy the blockchain into the given subnet id --subnet-only only create a subnet --teleporter-messenger-contract-address-path string path to an ICM Messenger contract address file --teleporter-messenger-deployer-address-path string path to an ICM Messenger deployer address file --teleporter-messenger-deployer-tx-path string path to an ICM Messenger deployer tx file --teleporter-registry-bytecode-path string path to an ICM Registry bytecode file --teleporter-version string ICM version to deploy (default "latest") -t, --testnet fuji operate on testnet (alias to fuji) --threshold uint32 required number of control key signatures to make subnet changes --use-local-machine use local machine as a blockchain validator --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### describe The blockchain describe command prints the details of a Blockchain configuration to the console. By default, the command prints a summary of the configuration. By providing the --genesis flag, the command instead prints out the raw genesis file. 
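For example, assuming a configuration named `myblockchain`:

```shell
# Print a summary of the configuration
avalanche blockchain describe myblockchain

# Print the raw genesis file instead of the summary
avalanche blockchain describe myblockchain --genesis
```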
**Usage:** ```bash avalanche blockchain describe [subcommand] [flags] ``` **Flags:** ```bash -g, --genesis Print the genesis to the console directly instead of the summary -h, --help help for describe --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### export The blockchain export command writes the details of an existing Blockchain deploy to a file. The command prompts for an output path. You can also provide one with the --output flag. **Usage:** ```bash avalanche blockchain export [subcommand] [flags] ``` **Flags:** ```bash --custom-vm-branch string custom vm branch --custom-vm-build-script string custom vm build-script --custom-vm-repo-url string custom vm repository url -h, --help help for export -o, --output string write the export data to the provided file path --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### import Import blockchain configurations into avalanche-cli. This command suite supports importing from a file created on another computer, or importing from blockchains running public networks (e.g. created manually or with the deprecated subnet-cli). **Usage:** ```bash avalanche blockchain import [subcommand] [flags] ``` **Subcommands:** - [`file`](#avalanche-blockchain-import-file): The blockchain import command will import a blockchain configuration from a file or a git repository. To import from a file, you can optionally provide the path as a command-line argument. Alternatively, running the command without any arguments triggers an interactive wizard. To import from a repository, go through the wizard. By default, an imported Blockchain doesn't overwrite an existing Blockchain with the same name. To allow overwrites, provide the --force flag.
- [`public`](#avalanche-blockchain-import-public): The blockchain import public command imports a Blockchain configuration from a running network. By default, an imported Blockchain doesn't overwrite an existing Blockchain with the same name. To allow overwrites, provide the --force flag. **Flags:** ```bash -h, --help help for import --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### import file The blockchain import command will import a blockchain configuration from a file or a git repository. To import from a file, you can optionally provide the path as a command-line argument. Alternatively, running the command without any arguments triggers an interactive wizard. To import from a repository, go through the wizard. By default, an imported Blockchain doesn't overwrite an existing Blockchain with the same name. To allow overwrites, provide the --force flag. **Usage:** ```bash avalanche blockchain import file [subcommand] [flags] ``` **Flags:** ```bash --branch string the repo branch to use if downloading a new repo -f, --force overwrite the existing configuration if one exists -h, --help help for file --repo string the repo to import (ex: ava-labs/avalanche-plugins-core) or url to download the repo from --subnet string the subnet configuration to import from the provided repo --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### import public The blockchain import public command imports a Blockchain configuration from a running network. By default, an imported Blockchain doesn't overwrite an existing Blockchain with the same name. To allow overwrites, provide the --force flag. 
**Usage:** ```bash avalanche blockchain import public [subcommand] [flags] ``` **Flags:** ```bash --blockchain-id string the blockchain ID --cluster string operate on the given cluster --custom use a custom VM template --devnet operate on a devnet network --endpoint string use the given endpoint for network operations --evm import a subnet-evm --force overwrite the existing configuration if one exists -f, --fuji testnet operate on fuji (alias to testnet -h, --help help for public -l, --local operate on a local network -m, --mainnet operate on mainnet --node-url string [optional] URL of an already running subnet validator -t, --testnet fuji operate on testnet (alias to fuji) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### join The subnet join command configures your validator node to begin validating a new Blockchain. To complete this process, you must have access to the machine running your validator. If the CLI is running on the same machine as your validator, it can generate or update your node's config file automatically. Alternatively, the command can print the necessary instructions to update your node manually. To complete the validation process, the Subnet's admins must add the NodeID of your validator to the Subnet's allow list by calling addValidator with your NodeID. After you update your validator's config, you need to restart your validator manually. If you provide the --avalanchego-config flag, this command attempts to edit the config file at that path. This command currently only supports Blockchains deployed on the Fuji Testnet and Mainnet. 
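For example, with a hypothetical configuration named `myblockchain` (the config file path is a placeholder):

```shell
# On the validator machine: let the CLI edit the node's config file directly
avalanche blockchain join myblockchain --fuji --avalanchego-config /path/to/avalanchego/config.json

# Or just print the manual configuration instructions without prompting
avalanche blockchain join myblockchain --fuji --print
```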
**Usage:** ```bash avalanche blockchain join [subcommand] [flags] ``` **Flags:** ```bash --avalanchego-config string file path of the avalanchego config file --cluster string operate on the given cluster --data-dir string path of avalanchego's data dir directory --devnet operate on a devnet network --endpoint string use the given endpoint for network operations --force-write if true, skip to prompt to overwrite the config file -f, --fuji testnet operate on fuji (alias to testnet -h, --help help for join -k, --key string select the key to use [fuji only] -g, --ledger use ledger instead of key (always true on mainnet, defaults to false on fuji) --ledger-addrs strings use the given ledger addresses -l, --local operate on a local network -m, --mainnet operate on mainnet --node-id string set the NodeID of the validator to check --plugin-dir string file path of avalanchego's plugin directory --print if true, print the manual config without prompting --stake-amount uint amount of tokens to stake on validator --staking-period duration how long validator validates for after start time --start-time string start time that validator starts validating -t, --testnet fuji operate on testnet (alias to fuji) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### list The blockchain list command prints the names of all created Blockchain configurations. Without any flags, it prints some general, static information about the Blockchain. With the --deployed flag, the command shows additional information including the VMID, BlockchainID and SubnetID. 
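For example:

```shell
# Static information about each created configuration
avalanche blockchain list

# Additionally show VMID, BlockchainID, and SubnetID
avalanche blockchain list --deployed
```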
**Usage:** ```bash avalanche blockchain list [subcommand] [flags] ``` **Flags:** ```bash --deployed show additional deploy information -h, --help help for list --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### publish The blockchain publish command publishes the Blockchain's VM to a repository. **Usage:** ```bash avalanche blockchain publish [subcommand] [flags] ``` **Flags:** ```bash --alias string We publish to a remote repo, but identify the repo locally under a user-provided alias (e.g. myrepo). --force If true, ignores if the subnet has been published in the past, and attempts a forced publish. -h, --help help for publish --no-repo-path string Do not let the tool manage file publishing, but have it only generate the files and put them in the location given by this flag. --repo-url string The URL of the repo where we are publishing --subnet-file-path string Path to the Subnet description file. If not given, a prompting sequence will be initiated. --vm-file-path string Path to the VM description file. If not given, a prompting sequence will be initiated. --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### removeValidator The blockchain removeValidator command stops a whitelisted Subnet validator from validating your deployed Blockchain. To remove the validator from the Subnet's allow list, provide the validator's unique NodeID. You can bypass the interactive prompts by providing the values with flags.
**Usage:** ```bash avalanche blockchain removeValidator [subcommand] [flags] ``` **Flags:** ```bash --aggregator-allow-private-peers allow the signature aggregator to connect to peers with private IP (default true) --aggregator-extra-endpoints strings endpoints for extra nodes that are needed in signature aggregation --aggregator-log-level string log level to use with signature aggregator (default "Off") --blockchain-genesis-key use genesis allocated key to pay fees for completing the validator's removal (blockchain gas token) --blockchain-key string CLI stored key to use to pay fees for completing the validator's removal (blockchain gas token) --blockchain-private-key string private key to use to pay fees for completing the validator's removal (blockchain gas token) --cluster string operate on the given cluster --devnet operate on a devnet network --endpoint string use the given endpoint for network operations --force force validator removal even if it's not getting rewarded -f, --fuji testnet operate on fuji (alias to testnet -h, --help help for removeValidator -k, --key string select the key to use [fuji deploy only] -g, --ledger use ledger instead of key (always true on mainnet, defaults to false on fuji) --ledger-addrs strings use the given ledger addresses -l, --local operate on a local network -m, --mainnet operate on mainnet --node-endpoint string remove validator that responds to the given endpoint --node-id string node-id of the validator --output-tx-path string (for non-SOV blockchain only) file path of the removeValidator tx --rpc string connect to validator manager at the given rpc endpoint --subnet-auth-keys strings (for non-SOV blockchain only) control keys that will be used to authenticate the removeValidator tx -t, --testnet fuji operate on testnet (alias to fuji) --uptime uint validator's uptime in seconds. 
If not provided, it will be automatically calculated --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### stats The blockchain stats command prints validator statistics for the given Blockchain. **Usage:** ```bash avalanche blockchain stats [subcommand] [flags] ``` **Flags:** ```bash --cluster string operate on the given cluster --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet -h, --help help for stats -l, --local operate on a local network -m, --mainnet operate on mainnet -t, --testnet fuji operate on testnet (alias to fuji) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### upgrade The blockchain upgrade command suite provides a collection of tools for updating your developmental and deployed Blockchains. **Usage:** ```bash avalanche blockchain upgrade [subcommand] [flags] ``` **Subcommands:** - [`apply`](#avalanche-blockchain-upgrade-apply): Apply generated upgrade bytes to running Blockchain nodes to trigger a network upgrade. For public networks (Fuji Testnet or Mainnet), to complete this process, you must have access to the machine running your validator. If the CLI is running on the same machine as your validator, it can manipulate your node's configuration automatically. Alternatively, the command can print the necessary instructions to upgrade your node manually. After you update your validator's configuration, you need to restart your validator manually. If you provide the --avalanchego-chain-config-dir flag, this command attempts to write the upgrade file at that path. 
Refer to https://docs.avax.network/nodes/maintain/chain-config-flags#subnet-chain-configs for related documentation. - [`export`](#avalanche-blockchain-upgrade-export): Export the upgrade bytes file to a location of choice on disk - [`generate`](#avalanche-blockchain-upgrade-generate): The blockchain upgrade generate command builds a new upgrade.json file to customize your Blockchain. It guides the user through the process using an interactive wizard. - [`import`](#avalanche-blockchain-upgrade-import): Import the upgrade bytes file into the local environment - [`print`](#avalanche-blockchain-upgrade-print): Print the upgrade.json file content - [`vm`](#avalanche-blockchain-upgrade-vm): The blockchain upgrade vm command enables the user to upgrade their Blockchain's VM binary. The command can upgrade both local Blockchains and publicly deployed Blockchains on Fuji and Mainnet. The command walks the user through an interactive wizard. The user can skip the wizard by providing command line flags. **Flags:** ```bash -h, --help help for upgrade --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### upgrade apply Apply generated upgrade bytes to running Blockchain nodes to trigger a network upgrade. For public networks (Fuji Testnet or Mainnet), to complete this process, you must have access to the machine running your validator. If the CLI is running on the same machine as your validator, it can manipulate your node's configuration automatically. Alternatively, the command can print the necessary instructions to upgrade your node manually. After you update your validator's configuration, you need to restart your validator manually. If you provide the --avalanchego-chain-config-dir flag, this command attempts to write the upgrade file at that path. 
Refer to https://docs.avax.network/nodes/maintain/chain-config-flags#subnet-chain-configs for related documentation. **Usage:** ```bash avalanche blockchain upgrade apply [subcommand] [flags] ``` **Flags:** ```bash --avalanchego-chain-config-dir string avalanchego's chain config file directory (default "$HOME/.avalanchego/chains") --config create upgrade config for future subnet deployments (same as generate) --force If true, don't prompt for confirmation of timestamps in the past --fuji fuji apply upgrade existing fuji deployment (alias for `testnet`) -h, --help help for apply --local local apply upgrade existing local deployment --mainnet mainnet apply upgrade existing mainnet deployment --print if true, print the manual config without prompting (for public networks only) --testnet testnet apply upgrade existing testnet deployment (alias for `fuji`) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### upgrade export Export the upgrade bytes file to a location of choice on disk **Usage:** ```bash avalanche blockchain upgrade export [subcommand] [flags] ``` **Flags:** ```bash --force If true, overwrite a possibly existing file without prompting -h, --help help for export --upgrade-filepath string Export upgrade bytes file to location of choice on disk --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### upgrade generate The blockchain upgrade generate command builds a new upgrade.json file to customize your Blockchain. It guides the user through the process using an interactive wizard.
**Usage:** ```bash avalanche blockchain upgrade generate [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for generate --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### upgrade import Import the upgrade bytes file into the local environment **Usage:** ```bash avalanche blockchain upgrade import [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for import --upgrade-filepath string Import upgrade bytes file into local environment --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### upgrade print Print the upgrade.json file content **Usage:** ```bash avalanche blockchain upgrade print [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for print --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### upgrade vm The blockchain upgrade vm command enables the user to upgrade their Blockchain's VM binary. The command can upgrade both local Blockchains and publicly deployed Blockchains on Fuji and Mainnet. The command walks the user through an interactive wizard. The user can skip the wizard by providing command line flags. 
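For example, to skip the wizard for a hypothetical local deployment named `myblockchain`:

```shell
# Upgrade the local deployment to the latest released VM version
avalanche blockchain upgrade vm myblockchain --local --latest

# Or upgrade to a custom VM binary instead (path is a placeholder)
avalanche blockchain upgrade vm myblockchain --local --binary /path/to/vm-binary
```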
**Usage:** ```bash avalanche blockchain upgrade vm [subcommand] [flags] ``` **Flags:** ```bash --binary string Upgrade to custom binary --config upgrade config for future subnet deployments --fuji fuji upgrade existing fuji deployment (alias for `testnet`) -h, --help help for vm --latest upgrade to latest version --local local upgrade existing local deployment --mainnet mainnet upgrade existing mainnet deployment --plugin-dir string plugin directory to automatically upgrade VM --print print instructions for upgrading --testnet testnet upgrade existing testnet deployment (alias for `fuji`) --version string Upgrade to custom version --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### validators The blockchain validators command lists the validators of a blockchain's subnet and provides several statistics about them. **Usage:** ```bash avalanche blockchain validators [subcommand] [flags] ``` **Flags:** ```bash --cluster string operate on the given cluster --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet -h, --help help for validators -l, --local operate on a local network -m, --mainnet operate on mainnet -t, --testnet fuji operate on testnet (alias to fuji) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### vmid The blockchain vmid command prints the virtual machine ID (VMID) for the given Blockchain. 
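For example, assuming a configuration named `myblockchain`:

```shell
# Print the VMID for the blockchain configuration
avalanche blockchain vmid myblockchain
```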
**Usage:** ```bash avalanche blockchain vmid [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for vmid --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ## avalanche config Customize configuration for Avalanche-CLI **Usage:** ```bash avalanche config [subcommand] [flags] ``` **Subcommands:** - [`authorize-cloud-access`](#avalanche-config-authorize-cloud-access): set preferences to authorize access to cloud resources - [`metrics`](#avalanche-config-metrics): set user metrics collection preferences - [`migrate`](#avalanche-config-migrate): The migrate command migrates the old ~/.avalanche-cli.json and ~/.avalanche-cli/config to ~/.avalanche-cli/config.json. - [`snapshotsAutoSave`](#avalanche-config-snapshotsautosave): set user preference for auto-saving local network snapshots - [`update`](#avalanche-config-update): set user preference for automatic update checks **Flags:** ```bash -h, --help help for config --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### authorize-cloud-access set preferences to authorize access to cloud resources **Usage:** ```bash avalanche config authorize-cloud-access [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for authorize-cloud-access --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### metrics set user metrics collection preferences **Usage:** ```bash avalanche config metrics [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for metrics --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions ``` ### migrate The migrate command migrates the old ~/.avalanche-cli.json and ~/.avalanche-cli/config to ~/.avalanche-cli/config.json. **Usage:** ```bash avalanche config migrate [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for migrate --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### snapshotsAutoSave set user preference for auto-saving local network snapshots **Usage:** ```bash avalanche config snapshotsAutoSave [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for snapshotsAutoSave --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### update set user preference for automatic update checks **Usage:** ```bash avalanche config update [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for update --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ## avalanche contract The contract command suite provides a collection of tools for deploying and interacting with smart contracts. **Usage:** ```bash avalanche contract [subcommand] [flags] ``` **Subcommands:** - [`deploy`](#avalanche-contract-deploy): The contract command suite provides a collection of tools for deploying smart contracts. - [`initValidatorManager`](#avalanche-contract-initvalidatormanager): Initializes a Proof of Authority (PoA) or Proof of Stake (PoS) Validator Manager contract on a Blockchain and sets up the initial validator set on the Blockchain.
For more info on Validator Manager, please head to https://github.com/ava-labs/icm-contracts/tree/main/contracts/validator-manager **Flags:** ```bash -h, --help help for contract --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### deploy The contract command suite provides a collection of tools for deploying smart contracts. **Usage:** ```bash avalanche contract deploy [subcommand] [flags] ``` **Subcommands:** - [`erc20`](#avalanche-contract-deploy-erc20): Deploy an ERC20 token into a given Network and Blockchain **Flags:** ```bash -h, --help help for deploy --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### deploy erc20 Deploy an ERC20 token into a given Network and Blockchain **Usage:** ```bash avalanche contract deploy erc20 [subcommand] [flags] ``` **Flags:** ```bash --blockchain string deploy the ERC20 contract into the given CLI blockchain --blockchain-id string deploy the ERC20 contract into the given blockchain ID/Alias --c-chain deploy the ERC20 contract into C-Chain --cluster string operate on the given cluster --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet --funded string set the funded address --genesis-key use genesis allocated key as contract deployer -h, --help help for erc20 --key string CLI stored key to use as contract deployer -l, --local operate on a local network --private-key string private key to use as contract deployer --rpc string deploy the contract into the given rpc endpoint --supply uint set the token supply --symbol string set the token symbol -t, --testnet fuji operate on testnet (alias to fuji) --config string config file (default is 
$HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### initValidatorManager Initializes a Proof of Authority (PoA) or Proof of Stake (PoS) Validator Manager contract on a Blockchain and sets up the initial validator set on the Blockchain. For more info on Validator Manager, please head to https://github.com/ava-labs/icm-contracts/tree/main/contracts/validator-manager **Usage:** ```bash avalanche contract initValidatorManager [subcommand] [flags] ``` **Flags:** ```bash --aggregator-allow-private-peers allow the signature aggregator to connect to peers with private IP (default true) --aggregator-extra-endpoints strings endpoints for extra nodes that are needed in signature aggregation --aggregator-log-level string log level to use with signature aggregator (default "Off") --cluster string operate on the given cluster --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet) --genesis-key use genesis allocated key as contract deployer -h, --help help for initValidatorManager --key string CLI stored key to use as contract deployer -l, --local operate on a local network -m, --mainnet operate on mainnet --pos-maximum-stake-amount uint (PoS only) maximum stake amount (default 1000) --pos-maximum-stake-multiplier uint8 (PoS only) maximum stake multiplier (default 1) --pos-minimum-delegation-fee uint16 (PoS only) minimum delegation fee (default 1) --pos-minimum-stake-amount uint (PoS only) minimum stake amount (default 1) --pos-minimum-stake-duration uint (PoS only) minimum stake duration (default 100) --pos-reward-calculator-address string (PoS only) initialize the ValidatorManager with reward calculator address --pos-weight-to-value-factor uint (PoS only) weight to value factor (default 1) --private-key string private key to use as contract deployer --rpc string deploy the contract into
the given rpc endpoint -t, --testnet fuji operate on testnet (alias to fuji) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ## avalanche help Help provides help for any command in the application. Simply type avalanche help [path to command] for full details. **Usage:** ```bash avalanche help [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for help --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ## avalanche icm The icm command suite provides a collection of tools for interacting with ICM messenger contracts. **Usage:** ```bash avalanche icm [subcommand] [flags] ``` **Subcommands:** - [`deploy`](#avalanche-icm-deploy): Deploys ICM Messenger and Registry into a given L1. - [`sendMsg`](#avalanche-icm-sendmsg): Sends an ICM message between two subnets and waits for its reception. **Flags:** ```bash -h, --help help for icm --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### deploy Deploys ICM Messenger and Registry into a given L1.
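A minimal illustrative invocation, assuming a running local network with a CLI-managed blockchain named `myblockchain` and a stored key named `mykey` (both names are placeholders):

```bash
# Deploy the ICM Messenger and Registry to a CLI blockchain on the
# local network, funding the deploy from a stored key (names illustrative)
avalanche icm deploy --local --blockchain myblockchain --key mykey
```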
**Usage:** ```bash avalanche icm deploy [subcommand] [flags] ``` **Flags:** ```bash --blockchain string deploy ICM into the given CLI blockchain --blockchain-id string deploy ICM into the given blockchain ID/Alias --c-chain deploy ICM into C-Chain --cchain-key string key to be used to pay fees to deploy ICM to C-Chain --cluster string operate on the given cluster --deploy-messenger deploy ICM Messenger (default true) --deploy-registry deploy ICM Registry (default true) --devnet operate on a devnet network --endpoint string use the given endpoint for network operations --force-registry-deploy deploy ICM Registry even if Messenger has already been deployed -f, --fuji testnet operate on fuji (alias to testnet) --genesis-key use genesis allocated key to fund ICM deploy -h, --help help for deploy --include-cchain deploy ICM also to C-Chain --key string CLI stored key to use to fund ICM deploy -l, --local operate on a local network --messenger-contract-address-path string path to a messenger contract address file --messenger-deployer-address-path string path to a messenger deployer address file --messenger-deployer-tx-path string path to a messenger deployer tx file --private-key string private key to use to fund ICM deploy --registry-bytecode-path string path to a registry bytecode file --rpc-url string use the given RPC URL to connect to the subnet -t, --testnet fuji operate on testnet (alias to fuji) --version string version to deploy (default "latest") --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### sendMsg Sends an ICM message between two subnets and waits for its reception.
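An illustrative invocation using only the flags documented below; the RPC endpoints, destination address, and key name are placeholders (see `avalanche icm sendMsg --help` for how the message content itself is passed):

```bash
# Sketch only: relay a message between two blockchains via their RPC
# endpoints, paying source-chain fees from a stored key (all values are
# placeholders)
avalanche icm sendMsg \
  --source-rpc http://127.0.0.1:9650/ext/bc/C/rpc \
  --dest-rpc http://127.0.0.1:9650/ext/bc/myblockchain/rpc \
  --destination-address 0x0000000000000000000000000000000000000000 \
  --key mykey
```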
**Usage:** ```bash avalanche icm sendMsg [subcommand] [flags] ``` **Flags:** ```bash --cluster string operate on the given cluster --dest-rpc string use the given destination blockchain rpc endpoint --destination-address string deliver the message to the given contract destination address --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet) --genesis-key use genesis allocated key as message originator and to pay source blockchain fees -h, --help help for sendMsg --hex-encoded given message is hex encoded --key string CLI stored key to use as message originator and to pay source blockchain fees -l, --local operate on a local network --private-key string private key to use as message originator and to pay source blockchain fees --source-rpc string use the given source blockchain rpc endpoint -t, --testnet fuji operate on testnet (alias to fuji) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ## avalanche ictt The ictt command suite provides tools to deploy and manage Interchain Token Transferrers.
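As a sketch, a Transferrer whose Home is the C-Chain's native token and whose Remote lives on a CLI blockchain named `mychain` (the blockchain name is a placeholder) might be deployed on a local network with:

```bash
# Illustrative only: Home on C-Chain native token, Remote on a CLI
# blockchain; remaining choices are resolved by the interactive wizard
avalanche ictt deploy --local --c-chain-home --deploy-native-home \
  --remote-blockchain mychain
```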
**Usage:** ```bash avalanche ictt [subcommand] [flags] ``` **Subcommands:** - [`deploy`](#avalanche-ictt-deploy): Deploys a Token Transferrer into a given Network and Subnets. **Flags:** ```bash -h, --help help for ictt --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### deploy Deploys a Token Transferrer into a given Network and Subnets. **Usage:** ```bash avalanche ictt deploy [subcommand] [flags] ``` **Flags:** ```bash --c-chain-home set the Transferrer's Home Chain into C-Chain --c-chain-remote set the Transferrer's Remote Chain into C-Chain --cluster string operate on the given cluster --deploy-erc20-home string deploy a Transferrer Home for the given Chain's ERC20 Token --deploy-native-home deploy a Transferrer Home for the Chain's Native Token --deploy-native-remote deploy a Transferrer Remote for the Chain's Native Token --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet) -h, --help help for deploy --home-blockchain string set the Transferrer's Home Chain into the given CLI blockchain --home-genesis-key use genesis allocated key to deploy Transferrer Home --home-key string CLI stored key to use to deploy Transferrer Home --home-private-key string private key to use to deploy Transferrer Home --home-rpc string use the given RPC URL to connect to the home blockchain -l, --local operate on a local network --remote-blockchain string set the Transferrer's Remote Chain into the given CLI blockchain --remote-genesis-key use genesis allocated key to deploy Transferrer Remote --remote-key string CLI stored key to use to deploy Transferrer Remote --remote-private-key string private key to use to deploy Transferrer Remote --remote-rpc string use the given RPC URL to connect to the remote blockchain --remote-token-decimals uint8
use the given number of token decimals for the Transferrer Remote [defaults to token home's decimals (18 for a new wrapped native home token)] --remove-minter-admin remove the native minter precompile admin found on remote blockchain genesis -t, --testnet fuji operate on testnet (alias to fuji) --use-home string use the given Transferrer's Home Address --version string tag/branch/commit of Avalanche Interchain Token Transfer (ICTT) to be used (defaults to main branch) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ## avalanche interchain The interchain command suite provides a collection of tools to set up and manage interoperability between blockchains. **Usage:** ```bash avalanche interchain [subcommand] [flags] ``` **Subcommands:** - [`messenger`](#avalanche-interchain-messenger): The messenger command suite provides a collection of tools for interacting with ICM messenger contracts. - [`relayer`](#avalanche-interchain-relayer): The relayer command suite provides a collection of tools for deploying and configuring an ICM relayer. - [`tokenTransferrer`](#avalanche-interchain-tokentransferrer): The tokenTransferrer command suite provides tools to deploy and manage Token Transferrers. **Flags:** ```bash -h, --help help for interchain --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### messenger The messenger command suite provides a collection of tools for interacting with ICM messenger contracts. **Usage:** ```bash avalanche interchain messenger [subcommand] [flags] ``` **Subcommands:** - [`deploy`](#avalanche-interchain-messenger-deploy): Deploys ICM Messenger and Registry into a given L1.
- [`sendMsg`](#avalanche-interchain-messenger-sendmsg): Sends an ICM message between two subnets and waits for its reception. **Flags:** ```bash -h, --help help for messenger --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### messenger deploy Deploys ICM Messenger and Registry into a given L1. **Usage:** ```bash avalanche interchain messenger deploy [subcommand] [flags] ``` **Flags:** ```bash --blockchain string deploy ICM into the given CLI blockchain --blockchain-id string deploy ICM into the given blockchain ID/Alias --c-chain deploy ICM into C-Chain --cchain-key string key to be used to pay fees to deploy ICM to C-Chain --cluster string operate on the given cluster --deploy-messenger deploy ICM Messenger (default true) --deploy-registry deploy ICM Registry (default true) --devnet operate on a devnet network --endpoint string use the given endpoint for network operations --force-registry-deploy deploy ICM Registry even if Messenger has already been deployed -f, --fuji testnet operate on fuji (alias to testnet) --genesis-key use genesis allocated key to fund ICM deploy -h, --help help for deploy --include-cchain deploy ICM also to C-Chain --key string CLI stored key to use to fund ICM deploy -l, --local operate on a local network --messenger-contract-address-path string path to a messenger contract address file --messenger-deployer-address-path string path to a messenger deployer address file --messenger-deployer-tx-path string path to a messenger deployer tx file --private-key string private key to use to fund ICM deploy --registry-bytecode-path string path to a registry bytecode file --rpc-url string use the given RPC URL to connect to the subnet -t, --testnet fuji operate on testnet (alias to fuji) --version string version to deploy (default "latest") --config string config file (default is $HOME/.avalanche-cli/config.json)
--log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### messenger sendMsg Sends an ICM message between two subnets and waits for its reception. **Usage:** ```bash avalanche interchain messenger sendMsg [subcommand] [flags] ``` **Flags:** ```bash --cluster string operate on the given cluster --dest-rpc string use the given destination blockchain rpc endpoint --destination-address string deliver the message to the given contract destination address --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet) --genesis-key use genesis allocated key as message originator and to pay source blockchain fees -h, --help help for sendMsg --hex-encoded given message is hex encoded --key string CLI stored key to use as message originator and to pay source blockchain fees -l, --local operate on a local network --private-key string private key to use as message originator and to pay source blockchain fees --source-rpc string use the given source blockchain rpc endpoint -t, --testnet fuji operate on testnet (alias to fuji) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### relayer The relayer command suite provides a collection of tools for deploying and configuring an ICM relayer. **Usage:** ```bash avalanche interchain relayer [subcommand] [flags] ``` **Subcommands:** - [`deploy`](#avalanche-interchain-relayer-deploy): Deploys an ICM Relayer for the given Network. - [`logs`](#avalanche-interchain-relayer-logs): Shows pretty-formatted AWM relayer logs. - [`start`](#avalanche-interchain-relayer-start): Starts AWM relayer on the specified network (Currently only for local network).
- [`stop`](#avalanche-interchain-relayer-stop): Stops AWM relayer on the specified network (Currently only for local network, cluster). **Flags:** ```bash -h, --help help for relayer --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### relayer deploy Deploys an ICM Relayer for the given Network. **Usage:** ```bash avalanche interchain relayer deploy [subcommand] [flags] ``` **Flags:** ```bash --allow-private-ips allow relayer to connect to private IPs (default true) --amount float automatically fund l1s fee payments with the given amount --bin-path string use the given relayer binary --blockchain-funding-key string key to be used to fund relayer account on all l1s --blockchains strings blockchains to relay as source and destination --cchain relay C-Chain as source and destination --cchain-amount float automatically fund cchain fee payments with the given amount --cchain-funding-key string key to be used to fund relayer account on cchain --cluster string operate on the given cluster --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet) -h, --help help for deploy --key string key to be used by default both for rewards and to pay fees -l, --local operate on a local network --log-level string log level to use for relayer logs -t, --testnet fuji operate on testnet (alias to fuji) --version string version to deploy (default "latest-prerelease") --config string config file (default is $HOME/.avalanche-cli/config.json) --skip-update-check skip check for new versions ``` #### relayer logs Shows pretty-formatted AWM relayer logs. **Usage:** ```bash avalanche interchain relayer logs [subcommand] [flags] ``` **Flags:** ```bash --endpoint string use the given endpoint for network operations --first uint output first N log lines -f, --fuji
testnet operate on fuji (alias to testnet) -h, --help help for logs --last uint output last N log lines -l, --local operate on a local network --raw raw logs output -t, --testnet fuji operate on testnet (alias to fuji) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### relayer start Starts AWM relayer on the specified network (Currently only for local network). **Usage:** ```bash avalanche interchain relayer start [subcommand] [flags] ``` **Flags:** ```bash --bin-path string use the given relayer binary --cluster string operate on the given cluster --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet) -h, --help help for start -l, --local operate on a local network -t, --testnet fuji operate on testnet (alias to fuji) --version string version to use (default "latest-prerelease") --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### relayer stop Stops AWM relayer on the specified network (Currently only for local network, cluster). **Usage:** ```bash avalanche interchain relayer stop [subcommand] [flags] ``` **Flags:** ```bash --cluster string operate on the given cluster --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet) -h, --help help for stop -l, --local operate on a local network -t, --testnet fuji operate on testnet (alias to fuji) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### tokenTransferrer The tokenTransferrer command suite provides tools to deploy and manage Token Transferrers.
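This suite mirrors `avalanche ictt`. For example, an illustrative local deployment with a Home on the C-Chain's native token and a Remote on a CLI blockchain named `mychain` (placeholder name) could be written as:

```bash
# Sketch only: same flags as `avalanche ictt deploy`, reached through
# the interchain command suite
avalanche interchain tokenTransferrer deploy --local --c-chain-home \
  --deploy-native-home --remote-blockchain mychain
```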
**Usage:** ```bash avalanche interchain tokenTransferrer [subcommand] [flags] ``` **Subcommands:** - [`deploy`](#avalanche-interchain-tokentransferrer-deploy): Deploys a Token Transferrer into a given Network and Subnets. **Flags:** ```bash -h, --help help for tokenTransferrer --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### tokenTransferrer deploy Deploys a Token Transferrer into a given Network and Subnets. **Usage:** ```bash avalanche interchain tokenTransferrer deploy [subcommand] [flags] ``` **Flags:** ```bash --c-chain-home set the Transferrer's Home Chain into C-Chain --c-chain-remote set the Transferrer's Remote Chain into C-Chain --cluster string operate on the given cluster --deploy-erc20-home string deploy a Transferrer Home for the given Chain's ERC20 Token --deploy-native-home deploy a Transferrer Home for the Chain's Native Token --deploy-native-remote deploy a Transferrer Remote for the Chain's Native Token --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet) -h, --help help for deploy --home-blockchain string set the Transferrer's Home Chain into the given CLI blockchain --home-genesis-key use genesis allocated key to deploy Transferrer Home --home-key string CLI stored key to use to deploy Transferrer Home --home-private-key string private key to use to deploy Transferrer Home --home-rpc string use the given RPC URL to connect to the home blockchain -l, --local operate on a local network --remote-blockchain string set the Transferrer's Remote Chain into the given CLI blockchain --remote-genesis-key use genesis allocated key to deploy Transferrer Remote --remote-key string CLI stored key to use to deploy Transferrer Remote --remote-private-key string private key to use to deploy Transferrer Remote
--remote-rpc string use the given RPC URL to connect to the remote blockchain --remote-token-decimals uint8 use the given number of token decimals for the Transferrer Remote [defaults to token home's decimals (18 for a new wrapped native home token)] --remove-minter-admin remove the native minter precompile admin found on remote blockchain genesis -t, --testnet fuji operate on testnet (alias to fuji) --use-home string use the given Transferrer's Home Address --version string tag/branch/commit of Avalanche Interchain Token Transfer (ICTT) to be used (defaults to main branch) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ## avalanche key The key command suite provides a collection of tools for creating and managing signing keys. You can use these keys to deploy Subnets to the Fuji Testnet, but these keys are NOT suitable for use in production environments. DO NOT use these keys on Mainnet. To get started, use the key create command. **Usage:** ```bash avalanche key [subcommand] [flags] ``` **Subcommands:** - [`create`](#avalanche-key-create): The key create command generates a new private key to use for creating and controlling test Subnets. Keys generated by this command are NOT cryptographically secure enough to use in production environments. DO NOT use these keys on Mainnet. The command works by generating a secp256k1 key and storing it with the provided keyName. You can use this key in other commands by providing this keyName. If you'd like to import an existing key instead of generating one from scratch, provide the --file flag. - [`delete`](#avalanche-key-delete): The key delete command deletes an existing signing key. To delete a key, provide the keyName. The command prompts for confirmation before deleting the key. To skip the confirmation, provide the --force flag.
- [`export`](#avalanche-key-export): The key export command exports a created signing key. You can use an exported key in other applications or import it into another instance of Avalanche-CLI. By default, the tool writes the hex encoded key to stdout. If you provide the --output flag, the command writes the key to a file of your choosing. - [`list`](#avalanche-key-list): The key list command prints information for all stored signing keys or for the ledger addresses associated with certain indices. - [`transfer`](#avalanche-key-transfer): The key transfer command allows you to transfer funds between stored keys or ledger addresses. **Flags:** ```bash -h, --help help for key --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### create The key create command generates a new private key to use for creating and controlling test Subnets. Keys generated by this command are NOT cryptographically secure enough to use in production environments. DO NOT use these keys on Mainnet. The command works by generating a secp256k1 key and storing it with the provided keyName. You can use this key in other commands by providing this keyName. If you'd like to import an existing key instead of generating one from scratch, provide the --file flag. **Usage:** ```bash avalanche key create [subcommand] [flags] ``` **Flags:** ```bash --file string import the key from an existing key file -f, --force overwrite an existing key with the same name -h, --help help for create --skip-balances do not query public network balances for an imported key --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### delete The key delete command deletes an existing signing key. To delete a key, provide the keyName.
The command prompts for confirmation before deleting the key. To skip the confirmation, provide the --force flag. **Usage:** ```bash avalanche key delete [subcommand] [flags] ``` **Flags:** ```bash -f, --force delete the key without confirmation -h, --help help for delete --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### export The key export command exports a created signing key. You can use an exported key in other applications or import it into another instance of Avalanche-CLI. By default, the tool writes the hex encoded key to stdout. If you provide the --output flag, the command writes the key to a file of your choosing. **Usage:** ```bash avalanche key export [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for export -o, --output string write the key to the provided file path --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### list The key list command prints information for all stored signing keys or for the ledger addresses associated with certain indices.
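For example, using only the flags documented below, listing the P-Chain and C-Chain addresses of all stored keys on the local network might look like:

```bash
avalanche key list --local --blockchains p,c
```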
**Usage:** ```bash avalanche key list [subcommand] [flags] ``` **Flags:** ```bash -a, --all-networks list all network addresses --blockchains strings blockchains to show information about (p=p-chain, x=x-chain, c=c-chain, and blockchain names) (default p,x,c) -c, --cchain list C-Chain addresses (default true) --cluster string operate on the given cluster --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet) -h, --help help for list --keys strings list addresses for the given keys -g, --ledger uints list ledger addresses for the given indices (default []) -l, --local operate on a local network -m, --mainnet operate on mainnet --pchain list P-Chain addresses (default true) --subnets strings subnets to show information about (p=p-chain, x=x-chain, c=c-chain, and subnet names) (default p,x,c) -t, --testnet fuji operate on testnet (alias to fuji) --tokens strings provide balance information for the given token contract addresses (Evm only) (default [Native]) --use-gwei use gwei for EVM balances -n, --use-nano-avax use nano Avax for balances --xchain list X-Chain addresses (default true) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### transfer The key transfer command allows you to transfer funds between stored keys or ledger addresses.
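An illustrative C-Chain-to-C-Chain transfer on Fuji between two stored keys (the key names `senderKey` and `receiverKey` are placeholders):

```bash
# Sketch only: move 0.5 AVAX between two CLI stored keys on Fuji
avalanche key transfer --fuji \
  --key senderKey --destination-key receiverKey \
  --c-chain-sender --c-chain-receiver \
  --amount 0.5
```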
**Usage:** ```bash avalanche key transfer [subcommand] [flags] ``` **Flags:** ```bash -o, --amount float amount to send or receive (AVAX or TOKEN units) --c-chain-receiver receive at C-Chain --c-chain-sender send from C-Chain --cluster string operate on the given cluster -a, --destination-addr string destination address --destination-key string key associated with a destination address --destination-subnet string subnet where the funds will be sent (token transferrer experimental) --destination-transferrer-address string token transferrer address at the destination subnet (token transferrer experimental) --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet) -h, --help help for transfer -k, --key string key associated with the sender or receiver address -i, --ledger uint32 ledger index associated with the sender or receiver address (default 32768) -l, --local operate on a local network -m, --mainnet operate on mainnet --origin-subnet string subnet where the funds belong (token transferrer experimental) --origin-transferrer-address string token transferrer address at the origin subnet (token transferrer experimental) --p-chain-receiver receive at P-Chain --p-chain-sender send from P-Chain --receiver-blockchain string receive at the given CLI blockchain --receiver-blockchain-id string receive at the given blockchain ID/Alias --sender-blockchain string send from the given CLI blockchain --sender-blockchain-id string send from the given blockchain ID/Alias -t, --testnet fuji operate on testnet (alias to fuji) --x-chain-receiver receive at X-Chain --x-chain-sender send from X-Chain --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ## avalanche network The network command suite provides a collection of tools for managing local Subnet
deployments. When you deploy a Subnet locally, it runs on a local, multi-node Avalanche network. The subnet deploy command starts this network in the background. This command suite allows you to shut down, restart, and clear that network. This network currently supports multiple, concurrently deployed Subnets. **Usage:** ```bash avalanche network [subcommand] [flags] ``` **Subcommands:** - [`clean`](#avalanche-network-clean): The network clean command shuts down your local, multi-node network. All deployed Subnets shut down and delete their state. You can restart the network by deploying a new Subnet configuration. - [`start`](#avalanche-network-start): The network start command starts a local, multi-node Avalanche network on your machine. By default, the command loads the default snapshot. If you provide the --snapshot-name flag, the network loads that snapshot instead. The command fails if the local network is already running. - [`status`](#avalanche-network-status): The network status command prints whether or not a local Avalanche network is running and some basic stats about the network. - [`stop`](#avalanche-network-stop): The network stop command shuts down your local, multi-node network. All deployed Subnets shut down gracefully and save their state. If you provide the --snapshot-name flag, the network saves its state under this named snapshot. You can reload this snapshot with network start --snapshot-name `snapshotName`. Otherwise, the network saves to the default snapshot, overwriting any existing state. You can reload the default snapshot with network start. **Flags:** ```bash -h, --help help for network --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### clean The network clean command shuts down your local, multi-node network. All deployed Subnets shut down and delete their state.
You can restart the network by deploying a new Subnet configuration. **Usage:** ```bash avalanche network clean [subcommand] [flags] ``` **Flags:** ```bash --hard Also clean downloaded avalanchego and plugin binaries -h, --help help for clean --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### start The network start command starts a local, multi-node Avalanche network on your machine. By default, the command loads the default snapshot. If you provide the --snapshot-name flag, the network loads that snapshot instead. The command fails if the local network is already running. **Usage:** ```bash avalanche network start [subcommand] [flags] ``` **Flags:** ```bash --avalanchego-path string use this avalanchego binary path --avalanchego-version string use this version of avalanchego (ex: v1.17.12) (default "latest-prerelease") -h, --help help for start --num-nodes uint32 number of nodes to be created on local network (default 2) --relayer-path string use this relayer binary path --relayer-version string use this relayer version (default "latest-prerelease") --snapshot-name string name of snapshot to use to start the network from (default "default-1654102509") --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### status The network status command prints whether or not a local Avalanche network is running and some basic stats about the network. 
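For example:

```bash
# Report whether the local network is up, along with basic stats
avalanche network status
```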
**Usage:** ```bash avalanche network status [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for status --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### stop The network stop command shuts down your local, multi-node network. All deployed Subnets shut down gracefully and save their state. If you provide the --snapshot-name flag, the network saves its state under this named snapshot. You can reload this snapshot with network start --snapshot-name `snapshotName`. Otherwise, the network saves to the default snapshot, overwriting any existing state. You can reload the default snapshot with network start. **Usage:** ```bash avalanche network stop [subcommand] [flags] ``` **Flags:** ```bash --dont-save do not save snapshot, just stop the network -h, --help help for stop --snapshot-name string name of snapshot to use to save network state into (default "default-1654102509") --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ## avalanche node The node command suite provides a collection of tools for creating and maintaining validators on the Avalanche Network. To get started, use the node create command wizard to walk through the configuration to make your node a primary validator on the Avalanche public network. You can use the rest of the commands to maintain your node and make your node a Subnet Validator. **Usage:** ```bash avalanche node [subcommand] [flags] ``` **Subcommands:** - [`addDashboard`](#avalanche-node-adddashboard): (ALPHA Warning) This command is currently in experimental mode. The node addDashboard command adds a custom dashboard to the Grafana monitoring dashboard for the cluster.
- [`create`](#avalanche-node-create): (ALPHA Warning) This command is currently in experimental mode. The node create command sets up a validator on a cloud server of your choice. The validator will be validating the Avalanche Primary Network and Subnet of your choice. By default, the command runs an interactive wizard. It walks you through all the steps you need to set up a validator. Once this command is completed, you will have to wait for the validator to finish bootstrapping on the primary network before running further commands on it, e.g. validating a Subnet. You can check the bootstrapping status by running avalanche node status. The created node will be part of a group of validators called `clusterName`, and users can call node commands with `clusterName` so that the command will apply to all nodes in the cluster. - [`destroy`](#avalanche-node-destroy): (ALPHA Warning) This command is currently in experimental mode. The node destroy command terminates all running nodes in the cloud server and deletes all storage disks. If there is a static IP address attached, it will be released. - [`devnet`](#avalanche-node-devnet): (ALPHA Warning) This command is currently in experimental mode. The node devnet command suite provides a collection of commands related to devnets. You can check the updated status by calling avalanche node status `clusterName` - [`export`](#avalanche-node-export): (ALPHA Warning) This command is currently in experimental mode. The node export command exports cluster configuration and its nodes config to a text file. If no file is specified, the configuration is printed to stdout. Use --include-secrets to include keys in the export. In this case, please keep the file secure as it contains sensitive information. Exported cluster configuration without secrets can be imported by another user using the node import command. - [`import`](#avalanche-node-import): (ALPHA Warning) This command is currently in experimental mode.
The node import command imports cluster configuration and its nodes configuration from a text file created from the node export command. Prior to calling this command, call the node whitelist command to have your SSH public key and IP whitelisted by the cluster owner. This will enable you to use avalanche-cli commands to manage the imported cluster. Please note that this imported cluster will be considered EXTERNAL by avalanche-cli, so some commands affecting cloud nodes like node create or node destroy will not be applicable to it. - [`list`](#avalanche-node-list): (ALPHA Warning) This command is currently in experimental mode. The node list command lists all clusters together with their nodes. - [`loadtest`](#avalanche-node-loadtest): (ALPHA Warning) This command is currently in experimental mode. The node loadtest command suite starts and stops a load test for an existing devnet cluster. - [`local`](#avalanche-node-local): (ALPHA Warning) This command is currently in experimental mode. The node local command suite provides a collection of commands related to local nodes. - [`refresh-ips`](#avalanche-node-refresh-ips): (ALPHA Warning) This command is currently in experimental mode. The node refresh-ips command obtains the current IP for all nodes with dynamic IPs in the cluster, and updates the local node information used by CLI commands. - [`resize`](#avalanche-node-resize): (ALPHA Warning) This command is currently in experimental mode. The node resize command can change the amount of CPU, memory, and disk space available for the cluster nodes. - [`scp`](#avalanche-node-scp): (ALPHA Warning) This command is currently in experimental mode. The node scp command securely copies files to and from nodes. A remote source or destination can be specified using the following format: [clusterName|nodeID|instanceID|IP]:/path/to/file. Regular expressions are supported for the source files like /tmp/*.txt. File transfers to the nodes are parallelized.
If the source or destination is a cluster, the other should be a local file path. If both destinations are remote, they must be nodes in the same cluster and not clusters themselves. For example: $ avalanche node scp [cluster1|node1]:/tmp/file.txt /tmp/file.txt $ avalanche node scp /tmp/file.txt [cluster1|NodeID-XXXX]:/tmp/file.txt $ avalanche node scp node1:/tmp/file.txt NodeID-XXXX:/tmp/file.txt - [`ssh`](#avalanche-node-ssh): (ALPHA Warning) This command is currently in experimental mode. The node ssh command executes a given command [cmd] using ssh on all nodes in the cluster if ClusterName is given. If no command is given, it just prints the ssh command to be used to connect to each node in the cluster. For a provided NodeID or InstanceID or IP, the command [cmd] will be executed on that node. If no [cmd] is provided for the node, it will open an ssh shell there. - [`status`](#avalanche-node-status): (ALPHA Warning) This command is currently in experimental mode. The node status command gets the bootstrap status of all nodes in a cluster with the Primary Network. If no cluster is given, defaults to node list behaviour. To get the bootstrap status of a node with a Blockchain, use the --blockchain flag - [`sync`](#avalanche-node-sync): (ALPHA Warning) This command is currently in experimental mode. The node sync command enables all nodes in a cluster to be bootstrapped to a Blockchain. You can check the blockchain bootstrap status by calling avalanche node status `clusterName` --blockchain `blockchainName` - [`update`](#avalanche-node-update): (ALPHA Warning) This command is currently in experimental mode. The node update command suite provides a collection of commands for nodes to update their avalanchego or VM config. You can check the status after update by calling avalanche node status - [`upgrade`](#avalanche-node-upgrade): (ALPHA Warning) This command is currently in experimental mode.
The node upgrade command suite provides a collection of commands for nodes to upgrade their avalanchego or VM version. You can check the status after upgrade by calling avalanche node status - [`validate`](#avalanche-node-validate): (ALPHA Warning) This command is currently in experimental mode. The node validate command suite provides a collection of commands for nodes to join the Primary Network and Subnets as validators. If any of the commands is run before the nodes are bootstrapped on the Primary Network, the command will fail. You can check the bootstrap status by calling avalanche node status `clusterName` - [`whitelist`](#avalanche-node-whitelist): (ALPHA Warning) The whitelist command suite provides a collection of tools for granting access to the cluster. The command adds an IP to the cloud security access rules if the --ip param is provided, allowing it to access all nodes in the cluster via ssh or http. It also adds an SSH public key to all nodes in the cluster if the --ssh param is provided. If no params are provided, it detects the current user's IP automatically and whitelists it. **Flags:** ```bash -h, --help help for node --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### addDashboard (ALPHA Warning) This command is currently in experimental mode. The node addDashboard command adds a custom dashboard to the Grafana monitoring dashboard for the cluster.
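For instance, a sketch of attaching a dashboard JSON exported from Grafana (the cluster name, subnet name, and file path are hypothetical, and passing the cluster name as a positional argument is an assumption based on how other node commands accept `clusterName`):

```shell
# Add a custom Grafana dashboard for cluster "myCluster" (illustrative names;
# --add-grafana-dashboard and --subnet are from the flag listing below)
avalanche node addDashboard myCluster \
  --add-grafana-dashboard ./my-dashboard.json \
  --subnet mySubnet
```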
**Usage:** ```bash avalanche node addDashboard [subcommand] [flags] ``` **Flags:** ```bash --add-grafana-dashboard string path to additional grafana dashboard json file -h, --help help for addDashboard --subnet string subnet that the dashboard is intended for (if any) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### create (ALPHA Warning) This command is currently in experimental mode. The node create command sets up a validator on a cloud server of your choice. The validator will be validating the Avalanche Primary Network and Subnet of your choice. By default, the command runs an interactive wizard. It walks you through all the steps you need to set up a validator. Once this command is completed, you will have to wait for the validator to finish bootstrapping on the primary network before running further commands on it, e.g. validating a Subnet.
You can check the bootstrapping status by running avalanche node status. The created node will be part of a group of validators called `clusterName`, and users can call node commands with `clusterName` so that the command will apply to all nodes in the cluster. **Usage:** ```bash avalanche node create [subcommand] [flags] ``` **Flags:** ```bash --add-grafana-dashboard string path to additional grafana dashboard json file --alternative-key-pair-name string key pair name to use if default one generates conflicts --authorize-access authorize CLI to create cloud resources --auto-replace-keypair automatically replaces key pair to access node if previous key pair is not found --avalanchego-version-from-subnet string install latest avalanchego version, that is compatible with the given subnet, on node/s --aws create node/s in AWS cloud --aws-profile string aws profile to use (default "default") --aws-volume-iops int AWS iops (for gp3, io1, and io2 volume types only) (default 3000) --aws-volume-size int AWS volume size in GB (default 1000) --aws-volume-throughput int AWS throughput in MiB/s (for gp3 volume type only) (default 125) --aws-volume-type string AWS volume type (default "gp3") --bootstrap-ids stringArray nodeIDs of bootstrap nodes --bootstrap-ips stringArray IP:port pairs of bootstrap nodes --cluster string operate on the given cluster --custom-avalanchego-version string install given avalanchego version on node/s --devnet operate on a devnet network --enable-monitoring set up Prometheus monitoring for created nodes.
This option creates a separate monitoring cloud instance and incurs additional cost --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet) --gcp create node/s in GCP cloud --gcp-credentials string use given GCP credentials --gcp-project string use given GCP project --genesis string path to genesis file --grafana-pkg string use grafana pkg instead of apt repo(by default), for example https://dl.grafana.com/oss/release/grafana_10.4.1_amd64.deb -h, --help help for create --latest-avalanchego-pre-release-version install latest avalanchego pre-release version on node/s --latest-avalanchego-version install latest avalanchego release version on node/s -m, --mainnet operate on mainnet --node-type string cloud instance type. Use 'default' to use recommended default instance type --num-apis ints number of API nodes(nodes without stake) to create in the new Devnet --num-validators ints number of nodes to create per region(s). Use comma to separate multiple numbers for each region in the same order as --region flag --partial-sync primary network partial sync (default true) --public-http-port allow public access to avalanchego HTTP port --region strings create node(s) in given region(s). Use comma to separate multiple regions --ssh-agent-identity string use given ssh identity(only for ssh agent). If not set, default will be used -t, --testnet fuji operate on testnet (alias to fuji) --upgrade string path to upgrade file --use-ssh-agent use ssh agent(ex: Yubikey) for ssh auth --use-static-ip attach static Public IP on cloud servers (default true) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### destroy (ALPHA Warning) This command is currently in experimental mode. The node destroy command terminates all running nodes in the cloud server and deletes all storage disks.
If there is a static IP address attached, it will be released. **Usage:** ```bash avalanche node destroy [subcommand] [flags] ``` **Flags:** ```bash --all destroy all existing clusters created by Avalanche CLI --authorize-access authorize CLI to release cloud resources -y, --authorize-all authorize all CLI requests --authorize-remove authorize CLI to remove all local files related to cloud nodes --aws-profile string aws profile to use (default "default") -h, --help help for destroy --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### devnet (ALPHA Warning) This command is currently in experimental mode. The node devnet command suite provides a collection of commands related to devnets. You can check the updated status by calling avalanche node status `clusterName` **Usage:** ```bash avalanche node devnet [subcommand] [flags] ``` **Subcommands:** - [`deploy`](#avalanche-node-devnet-deploy): (ALPHA Warning) This command is currently in experimental mode. The node devnet deploy command deploys a subnet into a devnet cluster, creating subnet and blockchain txs for it. It saves the deploy info both locally and remotely. - [`wiz`](#avalanche-node-devnet-wiz): (ALPHA Warning) This command is currently in experimental mode. The node wiz command creates a devnet and deploys, syncs, and validates a subnet into it. It creates the subnet if needed. **Flags:** ```bash -h, --help help for devnet --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### devnet deploy (ALPHA Warning) This command is currently in experimental mode. The node devnet deploy command deploys a subnet into a devnet cluster, creating subnet and blockchain txs for it. It saves the deploy info both locally and remotely.
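As a sketch (the cluster and subnet names are hypothetical, and the positional-argument order is an assumption based on how other node commands take `clusterName`):

```shell
# Deploy subnet "mySubnet" into devnet cluster "myCluster", skipping
# health/RPC compatibility checks (names are illustrative; --no-checks
# is from the flag listing below)
avalanche node devnet deploy myCluster mySubnet --no-checks
```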
**Usage:** ```bash avalanche node devnet deploy [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for deploy --no-checks do not check for healthy status or rpc compatibility of nodes against subnet --subnet-aliases strings additional subnet aliases to be used for RPC calls in addition to subnet blockchain name --subnet-only only create a subnet --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### devnet wiz (ALPHA Warning) This command is currently in experimental mode. The node wiz command creates a devnet and deploys, syncs, and validates a subnet into it. It creates the subnet if needed. **Usage:** ```bash avalanche node devnet wiz [subcommand] [flags] ``` **Flags:** ```bash --add-grafana-dashboard string path to additional grafana dashboard json file --alternative-key-pair-name string key pair name to use if default one generates conflicts --authorize-access authorize CLI to create cloud resources --auto-replace-keypair automatically replaces key pair to access node if previous key pair is not found --aws create node/s in AWS cloud --aws-profile string aws profile to use (default "default") --aws-volume-iops int AWS iops (for gp3, io1, and io2 volume types only) (default 3000) --aws-volume-size int AWS volume size in GB (default 1000) --aws-volume-throughput int AWS throughput in MiB/s (for gp3 volume type only) (default 125) --aws-volume-type string AWS volume type (default "gp3") --chain-config string path to the chain configuration for subnet --custom-avalanchego-version string install given avalanchego version on node/s --custom-subnet use a custom VM as the subnet virtual machine --custom-vm-branch string custom vm branch or commit --custom-vm-build-script string custom vm build-script --custom-vm-repo-url string custom vm repository url --default-validator-params use default weight/start/duration params
for subnet validator --deploy-icm-messenger deploy Interchain Messenger (default true) --deploy-icm-registry deploy Interchain Registry (default true) --deploy-teleporter-messenger deploy Interchain Messenger (default true) --deploy-teleporter-registry deploy Interchain Registry (default true) --enable-monitoring set up Prometheus monitoring for created nodes. Please note that this option creates a separate monitoring instance and incurs additional cost --evm-chain-id uint chain ID to use with Subnet-EVM --evm-defaults use default production settings with Subnet-EVM --evm-production-defaults use default production settings for your blockchain --evm-subnet use Subnet-EVM as the subnet virtual machine --evm-test-defaults use default test settings for your blockchain --evm-token string token name to use with Subnet-EVM --evm-version string version of Subnet-EVM to use --force-subnet-create overwrite the existing subnet configuration if one exists --gcp create node/s in GCP cloud --gcp-credentials string use given GCP credentials --gcp-project string use given GCP project --grafana-pkg string use grafana pkg instead of apt repo(by default), for example https://dl.grafana.com/oss/release/grafana_10.4.1_amd64.deb -h, --help help for wiz --icm generate an icm-ready vm --icm-messenger-contract-address-path string path to an icm messenger contract address file --icm-messenger-deployer-address-path string path to an icm messenger deployer address file --icm-messenger-deployer-tx-path string path to an icm messenger deployer tx file --icm-registry-bytecode-path string path to an icm registry bytecode file --icm-version string icm version to deploy (default "latest") --latest-avalanchego-pre-release-version install latest avalanchego pre-release version on node/s --latest-avalanchego-version install latest avalanchego release version on node/s --latest-evm-version use latest Subnet-EVM released version --latest-pre-released-evm-version use latest Subnet-EVM pre-released
version --node-config string path to avalanchego node configuration for subnet --node-type string cloud instance type. Use 'default' to use recommended default instance type --num-apis ints number of API nodes(nodes without stake) to create in the new Devnet --num-validators ints number of nodes to create per region(s). Use comma to separate multiple numbers for each region in the same order as --region flag --public-http-port allow public access to avalanchego HTTP port --region strings create node/s in given region(s). Use comma to separate multiple regions --relayer run AWM relayer when deploying the vm --ssh-agent-identity string use given ssh identity(only for ssh agent). If not set, default will be used. --subnet-aliases strings additional subnet aliases to be used for RPC calls in addition to subnet blockchain name --subnet-config string path to the subnet configuration for subnet --subnet-genesis string file path of the subnet genesis --teleporter generate an icm-ready vm --teleporter-messenger-contract-address-path string path to an icm messenger contract address file --teleporter-messenger-deployer-address-path string path to an icm messenger deployer address file --teleporter-messenger-deployer-tx-path string path to an icm messenger deployer tx file --teleporter-registry-bytecode-path string path to an icm registry bytecode file --teleporter-version string icm version to deploy (default "latest") --use-ssh-agent use ssh agent for ssh --use-static-ip attach static Public IP on cloud servers (default true) --validators strings deploy subnet into given comma separated list of validators. defaults to all cluster nodes --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### export (ALPHA Warning) This command is currently in experimental mode. 
The node export command exports cluster configuration and its nodes config to a text file. If no file is specified, the configuration is printed to stdout. Use --include-secrets to include keys in the export. In this case, please keep the file secure as it contains sensitive information. Exported cluster configuration without secrets can be imported by another user using the node import command. **Usage:** ```bash avalanche node export [subcommand] [flags] ``` **Flags:** ```bash --file string specify the file to export the cluster configuration to --force overwrite the file if it exists -h, --help help for export --include-secrets include keys in the export --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### import (ALPHA Warning) This command is currently in experimental mode. The node import command imports cluster configuration and its nodes configuration from a text file created from the node export command. Prior to calling this command, call the node whitelist command to have your SSH public key and IP whitelisted by the cluster owner. This will enable you to use avalanche-cli commands to manage the imported cluster. Please note that this imported cluster will be considered EXTERNAL by avalanche-cli, so some commands affecting cloud nodes like node create or node destroy will not be applicable to it. **Usage:** ```bash avalanche node import [subcommand] [flags] ``` **Flags:** ```bash --file string specify the file to import the cluster configuration from -h, --help help for import --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### list (ALPHA Warning) This command is currently in experimental mode. The node list command lists all clusters together with their nodes.
**Usage:** ```bash avalanche node list [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for list --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### loadtest (ALPHA Warning) This command is currently in experimental mode. The node loadtest command suite starts and stops a load test for an existing devnet cluster. **Usage:** ```bash avalanche node loadtest [subcommand] [flags] ``` **Subcommands:** - [`start`](#avalanche-node-loadtest-start): (ALPHA Warning) This command is currently in experimental mode. The node loadtest command starts load testing for an existing devnet cluster. If the cluster does not have an existing load test host, the command creates a separate cloud server and builds the load test binary based on the provided load test Git Repo URL and load test binary build command. The command will then run the load test binary based on the provided load test run command. - [`stop`](#avalanche-node-loadtest-stop): (ALPHA Warning) This command is currently in experimental mode. The node loadtest stop command stops load testing for an existing devnet cluster and terminates the separate cloud server created to host the load test. **Flags:** ```bash -h, --help help for loadtest --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### loadtest start (ALPHA Warning) This command is currently in experimental mode. The node loadtest command starts load testing for an existing devnet cluster. If the cluster does not have an existing load test host, the command creates a separate cloud server and builds the load test binary based on the provided load test Git Repo URL and load test binary build command. 
The command will then run the load test binary based on the provided load test run command. **Usage:** ```bash avalanche node loadtest start [subcommand] [flags] ``` **Flags:** ```bash --authorize-access authorize CLI to create cloud resources --aws create loadtest node in AWS cloud --aws-profile string aws profile to use (default "default") --gcp create loadtest in GCP cloud -h, --help help for start --load-test-branch string load test branch or commit --load-test-build-cmd string command to build load test binary --load-test-cmd string command to run load test --load-test-repo string load test repo url to use --node-type string cloud instance type for loadtest script --region string create load test node in a given region --ssh-agent-identity string use given ssh identity(only for ssh agent). If not set, default will be used --use-ssh-agent use ssh agent(ex: Yubikey) for ssh auth --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### loadtest stop (ALPHA Warning) This command is currently in experimental mode. The node loadtest stop command stops load testing for an existing devnet cluster and terminates the separate cloud server created to host the load test. **Usage:** ```bash avalanche node loadtest stop [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for stop --load-test strings stop specified load test node(s). Use comma to separate multiple load test instance names --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### local (ALPHA Warning) This command is currently in experimental mode. 
The node local command suite provides a collection of commands related to local nodes. **Usage:** ```bash avalanche node local [subcommand] [flags] ``` **Subcommands:** - [`destroy`](#avalanche-node-local-destroy): Clean up the local node. - [`start`](#avalanche-node-local-start): (ALPHA Warning) This command is currently in experimental mode. The node local start command sets up a validator on a local server. The validator will be validating the Avalanche Primary Network and Subnet of your choice. By default, the command runs an interactive wizard. It walks you through all the steps you need to set up a validator. Once this command is completed, you will have to wait for the validator to finish bootstrapping on the primary network before running further commands on it, e.g. validating a Subnet. You can check the bootstrapping status by running avalanche node status local - [`status`](#avalanche-node-local-status): Get status of local node. - [`stop`](#avalanche-node-local-stop): Stop local node. - [`track`](#avalanche-node-local-track): (ALPHA Warning) make the local node at the cluster track the given blockchain **Flags:** ```bash -h, --help help for local --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### local destroy Clean up the local node. **Usage:** ```bash avalanche node local destroy [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for destroy --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### local start (ALPHA Warning) This command is currently in experimental mode. The node local start command sets up a validator on a local server. The validator will be validating the Avalanche Primary Network and Subnet of your choice.
By default, the command runs an interactive wizard. It walks you through all the steps you need to set up a validator. Once this command is completed, you will have to wait for the validator to finish bootstrapping on the primary network before running further commands on it, e.g. validating a Subnet. You can check the bootstrapping status by running avalanche node status local **Usage:** ```bash avalanche node local start [subcommand] [flags] ``` **Flags:** ```bash --avalanchego-path string use this avalanchego binary path --bootstrap-id stringArray nodeIDs of bootstrap nodes --bootstrap-ip stringArray IP:port pairs of bootstrap nodes --cluster string operate on the given cluster --custom-avalanchego-version string install given avalanchego version on node/s --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet) --genesis string path to genesis file -h, --help help for start --latest-avalanchego-pre-release-version install latest avalanchego pre-release version on node/s (default true) --latest-avalanchego-version install latest avalanchego release version on node/s -l, --local operate on a local network -m, --mainnet operate on mainnet --node-config string path to common avalanchego config settings for all nodes --num-nodes uint32 number of nodes to start (default 1) --partial-sync primary network partial sync (default true) --staking-cert-key-path string path to provided staking cert key for node --staking-signer-key-path string path to provided staking signer key for node --staking-tls-key-path string path to provided staking tls key for node -t, --testnet fuji operate on testnet (alias to fuji) --upgrade string path to upgrade file --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### local status Get status of local
node. **Usage:** ```bash avalanche node local status [subcommand] [flags] ``` **Flags:** ```bash --blockchain string specify the blockchain the node is syncing with -h, --help help for status --subnet string specify the blockchain the node is syncing with --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### local stop Stop local node. **Usage:** ```bash avalanche node local stop [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for stop --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### local track (ALPHA Warning) make the local node at the cluster track the given blockchain **Usage:** ```bash avalanche node local track [subcommand] [flags] ``` **Flags:** ```bash --avalanchego-path string use this avalanchego binary path --custom-avalanchego-version string install given avalanchego version on node/s -h, --help help for track --latest-avalanchego-pre-release-version install latest avalanchego pre-release version on node/s (default true) --latest-avalanchego-version install latest avalanchego release version on node/s --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### refresh-ips (ALPHA Warning) This command is currently in experimental mode. The node refresh-ips command obtains the current IP for all nodes with dynamic IPs in the cluster, and updates the local node information used by CLI commands.
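For example, assuming a cluster named `myCluster` (hypothetical) whose nodes use dynamic IPs:

```shell
# Re-resolve the current public IPs for all dynamic-IP nodes in "myCluster"
# and update the CLI's local records (cluster name is illustrative;
# --aws-profile is from the flag listing below)
avalanche node refresh-ips myCluster --aws-profile default
```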
**Usage:** ```bash avalanche node refresh-ips [subcommand] [flags] ``` **Flags:** ```bash --aws-profile string aws profile to use (default "default") -h, --help help for refresh-ips --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### resize (ALPHA Warning) This command is currently in experimental mode. The node resize command can change the amount of CPU, memory, and disk space available for the cluster nodes. **Usage:** ```bash avalanche node resize [subcommand] [flags] ``` **Flags:** ```bash --aws-profile string aws profile to use (default "default") --disk-size string Disk size to resize in Gb (e.g. 1000Gb) -h, --help help for resize --node-type string Node type to resize (e.g. t3.2xlarge) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### scp (ALPHA Warning) This command is currently in experimental mode. The node scp command securely copies files to and from nodes. A remote source or destination can be specified using the following format: [clusterName|nodeID|instanceID|IP]:/path/to/file. Regular expressions are supported for the source files like /tmp/*.txt. File transfers to the nodes are parallelized. If the source or destination is a cluster, the other should be a local file path. If both destinations are remote, they must be nodes in the same cluster and not clusters themselves.
For example: ```bash $ avalanche node scp [cluster1|node1]:/tmp/file.txt /tmp/file.txt $ avalanche node scp /tmp/file.txt [cluster1|NodeID-XXXX]:/tmp/file.txt $ avalanche node scp node1:/tmp/file.txt NodeID-XXXX:/tmp/file.txt ``` **Usage:** ```bash avalanche node scp [subcommand] [flags] ``` **Flags:** ```bash --compress use compression for ssh -h, --help help for scp --recursive copy directories recursively --with-loadtest include loadtest node for scp cluster operations --with-monitor include monitoring node for scp cluster operations --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### ssh (ALPHA Warning) This command is currently in experimental mode. The node ssh command executes a given command [cmd] using ssh on all nodes in the cluster if a ClusterName is given. If no command is given, it just prints the ssh command to be used to connect to each node in the cluster. If a NodeID, InstanceID, or IP is provided, the command [cmd] will be executed on that node. If no [cmd] is provided for the node, it will open an ssh shell there. **Usage:** ```bash avalanche node ssh [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for ssh --parallel run ssh command on all nodes in parallel --with-loadtest include loadtest node for ssh cluster operations --with-monitor include monitoring node for ssh cluster operations --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### status (ALPHA Warning) This command is currently in experimental mode. The node status command gets the bootstrap status of all nodes in a cluster with the Primary Network. If no cluster is given, defaults to node list behaviour.
To get the bootstrap status of a node with a Blockchain, use the --blockchain flag. **Usage:** ```bash avalanche node status [subcommand] [flags] ``` **Flags:** ```bash --blockchain string specify the blockchain the node is syncing with -h, --help help for status --subnet string specify the blockchain the node is syncing with --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### sync (ALPHA Warning) This command is currently in experimental mode. The node sync command enables all nodes in a cluster to be bootstrapped to a Blockchain. You can check the blockchain bootstrap status by calling avalanche node status `clusterName` --blockchain `blockchainName` **Usage:** ```bash avalanche node sync [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for sync --no-checks do not check for bootstrapped/healthy status or rpc compatibility of nodes against subnet --subnet-aliases strings subnet alias to be used for RPC calls. defaults to subnet blockchain ID --validators strings sync subnet into given comma separated list of validators. defaults to all cluster nodes --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### update (ALPHA Warning) This command is currently in experimental mode. The node update command suite provides a collection of commands for nodes to update their avalanchego or VM config. You can check the status after update by calling avalanche node status **Usage:** ```bash avalanche node update [subcommand] [flags] ``` **Subcommands:** - [`subnet`](#avalanche-node-update-subnet): (ALPHA Warning) This command is currently in experimental mode. The node update subnet command updates all nodes in a cluster with the latest Subnet configuration and VM for custom VMs.
You can check the updated subnet bootstrap status by calling avalanche node status `clusterName` --subnet `subnetName` **Flags:** ```bash -h, --help help for update --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### update subnet (ALPHA Warning) This command is currently in experimental mode. The node update subnet command updates all nodes in a cluster with the latest Subnet configuration and VM for custom VMs. You can check the updated subnet bootstrap status by calling avalanche node status `clusterName` --subnet `subnetName` **Usage:** ```bash avalanche node update subnet [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for subnet --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### upgrade (ALPHA Warning) This command is currently in experimental mode. The node upgrade command suite provides a collection of commands for nodes to upgrade their avalanchego or VM version. You can check the status after upgrade by calling avalanche node status **Usage:** ```bash avalanche node upgrade [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for upgrade --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### validate (ALPHA Warning) This command is currently in experimental mode. The node validate command suite provides a collection of commands for nodes to join the Primary Network and Subnets as validators. If any of the commands is run before the nodes are bootstrapped on the Primary Network, the command will fail.
You can check the bootstrap status by calling avalanche node status `clusterName` **Usage:** ```bash avalanche node validate [subcommand] [flags] ``` **Subcommands:** - [`primary`](#avalanche-node-validate-primary): (ALPHA Warning) This command is currently in experimental mode. The node validate primary command enables all nodes in a cluster to be validators of the Primary Network. - [`subnet`](#avalanche-node-validate-subnet): (ALPHA Warning) This command is currently in experimental mode. The node validate subnet command enables all nodes in a cluster to be validators of a Subnet. If the command is run before the nodes are Primary Network validators, the command will first make the nodes Primary Network validators before making them Subnet validators. If the command is run before the nodes are bootstrapped on the Primary Network, the command will fail. You can check the bootstrap status by calling avalanche node status `clusterName` If the command is run before the nodes are synced to the subnet, the command will fail. You can check the subnet sync status by calling avalanche node status `clusterName` --subnet `subnetName` **Flags:** ```bash -h, --help help for validate --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### validate primary (ALPHA Warning) This command is currently in experimental mode. The node validate primary command enables all nodes in a cluster to be validators of the Primary Network.
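A sketch of a typical invocation, assuming a hypothetical cluster named `myCluster` and a CLI-stored key named `myKey` on Fuji (the positional cluster argument follows the pattern used by the other node commands above):

```bash
# Register every node in "myCluster" as a Primary Network validator,
# staking 1 AVAX each for a two-week period:
avalanche node validate primary myCluster \
  --key myKey \
  --stake-amount 1 \
  --staking-period 336h
```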
**Usage:** ```bash avalanche node validate primary [subcommand] [flags] ``` **Flags:** ```bash -e, --ewoq use ewoq key [fuji/devnet only] -h, --help help for primary -k, --key string select the key to use [fuji only] -g, --ledger use ledger instead of key (always true on mainnet, defaults to false on fuji/devnet) --ledger-addrs strings use the given ledger addresses --stake-amount uint how many AVAX to stake in the validator --staking-period duration how long validator validates for after start time --start-time string UTC start time when this validator starts validating, in 'YYYY-MM-DD HH:MM:SS' format --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### validate subnet (ALPHA Warning) This command is currently in experimental mode. The node validate subnet command enables all nodes in a cluster to be validators of a Subnet. If the command is run before the nodes are Primary Network validators, the command will first make the nodes Primary Network validators before making them Subnet validators. If the command is run before the nodes are bootstrapped on the Primary Network, the command will fail. You can check the bootstrap status by calling avalanche node status `clusterName` If the command is run before the nodes are synced to the subnet, the command will fail.
You can check the subnet sync status by calling avalanche node status `clusterName` --subnet `subnetName` **Usage:** ```bash avalanche node validate subnet [subcommand] [flags] ``` **Flags:** ```bash --default-validator-params use default weight/start/duration params for subnet validator -e, --ewoq use ewoq key [fuji/devnet only] -h, --help help for subnet -k, --key string select the key to use [fuji/devnet only] -g, --ledger use ledger instead of key (always true on mainnet, defaults to false on fuji/devnet) --ledger-addrs strings use the given ledger addresses --no-checks do not check for bootstrapped status or healthy status --no-validation-checks do not check if subnet is already synced or validated (default true) --stake-amount uint how many AVAX to stake in the validator --staking-period duration how long validator validates for after start time --start-time string UTC start time when this validator starts validating, in 'YYYY-MM-DD HH:MM:SS' format --validators strings validate subnet for the given comma separated list of validators. defaults to all cluster nodes --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### whitelist (ALPHA Warning) The whitelist command suite provides a collection of tools for granting access to the cluster. If the --ip flag is provided, the command adds the IP to the cloud security access rules, allowing it to access all nodes in the cluster via ssh or http. If the --ssh flag is provided, the command also adds the SSH public key to all nodes in the cluster.
If no flags are provided, the command detects the current user's IP automatically and whitelists it. **Usage:** ```bash avalanche node whitelist [subcommand] [flags] ``` **Flags:** ```bash -y, --current-ip whitelist current host ip -h, --help help for whitelist --ip string ip address to whitelist --ssh string ssh public key to whitelist --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ## avalanche primary The primary command suite provides a collection of tools for interacting with the Primary Network. **Usage:** ```bash avalanche primary [subcommand] [flags] ``` **Subcommands:** - [`addValidator`](#avalanche-primary-addvalidator): The primary addValidator command adds a node as a validator in the Primary Network. - [`describe`](#avalanche-primary-describe): The primary describe command prints details of the Primary Network configuration to the console. **Flags:** ```bash -h, --help help for primary --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### addValidator The primary addValidator command adds a node as a validator in the Primary Network. **Usage:** ```bash avalanche primary addValidator [subcommand] [flags] ``` **Flags:** ```bash --cluster string operate on the given cluster --delegation-fee uint32 set the delegation fee (20 000 is equivalent to 2%) --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet -h, --help help for addValidator -k, --key string select the key to use [fuji only] -g, --ledger use ledger instead of key (always true on mainnet, defaults to false on fuji) --ledger-addrs strings use the given ledger addresses -m, --mainnet operate on mainnet --nodeID string set the NodeID of the
validator to add --proof-of-possession string set the BLS proof of possession of the validator to add --public-key string set the BLS public key of the validator to add --staking-period duration how long this validator will be staking --start-time string UTC start time when this validator starts validating, in 'YYYY-MM-DD HH:MM:SS' format -t, --testnet fuji operate on testnet (alias to fuji) --weight uint set the staking weight of the validator to add --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### describe The primary describe command prints details of the Primary Network configuration to the console. **Usage:** ```bash avalanche primary describe [subcommand] [flags] ``` **Flags:** ```bash --cluster string operate on the given cluster -h, --help help for describe -l, --local operate on a local network --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ## avalanche subnet The subnet command suite provides a collection of tools for developing and deploying Blockchains. To get started, use the subnet create command wizard to walk through the configuration of your very first Blockchain. Then, go ahead and deploy it with the subnet deploy command. You can use the rest of the commands to manage your Blockchain configurations and live deployments. Deprecation notice: use 'avalanche blockchain' **Usage:** ```bash avalanche subnet [subcommand] [flags] ``` **Subcommands:** - [`addValidator`](#avalanche-subnet-addvalidator): The blockchain addValidator command adds a node as a validator to an L1 of the user provided deployed network. If the network is proof of authority, the owner of the validator manager contract must sign the transaction.
If the network is proof of stake, the node must stake the L1's staking token. Both processes will issue a RegisterL1ValidatorTx on the P-Chain. This command currently only works on Blockchains deployed to either the Fuji Testnet or Mainnet. - [`changeOwner`](#avalanche-subnet-changeowner): The blockchain changeOwner command changes the owner of the subnet of the deployed Blockchain. - [`changeWeight`](#avalanche-subnet-changeweight): The blockchain changeWeight command changes the weight of a Subnet Validator. The Subnet has to be a Proof of Authority Subnet-Only Validator Subnet. - [`configure`](#avalanche-subnet-configure): AvalancheGo nodes support several different configuration files. Subnets have their own Subnet config which applies to all chains/VMs in the Subnet. Each chain within the Subnet can have its own chain config. A chain can also have special requirements for the AvalancheGo node configuration itself. This command allows you to set all those files. - [`create`](#avalanche-subnet-create): The blockchain create command builds a new genesis file to configure your Blockchain. By default, the command runs an interactive wizard. It walks you through all the steps you need to create your first Blockchain. The tool supports deploying Subnet-EVM, and custom VMs. You can create a custom, user-generated genesis with a custom VM by providing the path to your genesis and VM binaries with the --genesis and --vm flags. By default, running the command with a blockchainName that already exists causes the command to fail. If you'd like to overwrite an existing configuration, pass the -f flag. - [`delete`](#avalanche-subnet-delete): The blockchain delete command deletes an existing blockchain configuration. - [`deploy`](#avalanche-subnet-deploy): The blockchain deploy command deploys your Blockchain configuration locally, to Fuji Testnet, or to Mainnet. At the end of the call, the command prints the RPC URL you can use to interact with the Subnet.
Avalanche-CLI only supports deploying an individual Blockchain once per network. Subsequent attempts to deploy the same Blockchain to the same network (local, Fuji, Mainnet) aren't allowed. If you'd like to redeploy a Blockchain locally for testing, you must first call avalanche network clean to reset all deployed chain state. Subsequent local deploys redeploy the chain with fresh state. You can deploy the same Blockchain to multiple networks, so you can take your locally tested Subnet and deploy it on Fuji or Mainnet. - [`describe`](#avalanche-subnet-describe): The blockchain describe command prints the details of a Blockchain configuration to the console. By default, the command prints a summary of the configuration. By providing the --genesis flag, the command instead prints out the raw genesis file. - [`export`](#avalanche-subnet-export): The blockchain export command writes the details of an existing Blockchain deploy to a file. The command prompts for an output path. You can also provide one with the --output flag. - [`import`](#avalanche-subnet-import): Import blockchain configurations into avalanche-cli. This command suite supports importing from a file created on another computer, or importing from blockchains running on public networks (e.g. created manually or with the deprecated subnet-cli) - [`join`](#avalanche-subnet-join): The subnet join command configures your validator node to begin validating a new Blockchain. To complete this process, you must have access to the machine running your validator. If the CLI is running on the same machine as your validator, it can generate or update your node's config file automatically. Alternatively, the command can print the necessary instructions to update your node manually. To complete the validation process, the Subnet's admins must add the NodeID of your validator to the Subnet's allow list by calling addValidator with your NodeID.
After you update your validator's config, you need to restart your validator manually. If you provide the --avalanchego-config flag, this command attempts to edit the config file at that path. This command currently only supports Blockchains deployed on the Fuji Testnet and Mainnet. - [`list`](#avalanche-subnet-list): The blockchain list command prints the names of all created Blockchain configurations. Without any flags, it prints some general, static information about the Blockchain. With the --deployed flag, the command shows additional information including the VMID, BlockchainID and SubnetID. - [`publish`](#avalanche-subnet-publish): The blockchain publish command publishes the Blockchain's VM to a repository. - [`removeValidator`](#avalanche-subnet-removevalidator): The blockchain removeValidator command stops a whitelisted Subnet validator from validating your deployed Blockchain. To remove the validator from the Subnet's allow list, provide the validator's unique NodeID. You can bypass these prompts by providing the values with flags. - [`stats`](#avalanche-subnet-stats): The blockchain stats command prints validator statistics for the given Blockchain. - [`upgrade`](#avalanche-subnet-upgrade): The blockchain upgrade command suite provides a collection of tools for updating your developmental and deployed Blockchains. - [`validators`](#avalanche-subnet-validators): The blockchain validators command lists the validators of a blockchain's subnet and provides several statistics about them. - [`vmid`](#avalanche-subnet-vmid): The blockchain vmid command prints the virtual machine ID (VMID) for the given Blockchain.
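Taken together, the subcommands above support a simple lifecycle. A hedged sketch, using a hypothetical configuration name `myBlockchain` (per the deprecation notice, `avalanche blockchain` offers the same subcommands):

```bash
# Create a configuration with the interactive wizard, deploy it to a
# local network, then inspect it:
avalanche subnet create myBlockchain
avalanche subnet deploy myBlockchain --local
avalanche subnet describe myBlockchain
avalanche subnet list --deployed
```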
**Flags:** ```bash -h, --help help for subnet --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### addValidator The blockchain addValidator command adds a node as a validator to an L1 of the user provided deployed network. If the network is proof of authority, the owner of the validator manager contract must sign the transaction. If the network is proof of stake, the node must stake the L1's staking token. Both processes will issue a RegisterL1ValidatorTx on the P-Chain. This command currently only works on Blockchains deployed to either the Fuji Testnet or Mainnet. **Usage:** ```bash avalanche subnet addValidator [subcommand] [flags] ``` **Flags:** ```bash --aggregator-allow-private-peers allow the signature aggregator to connect to peers with private IP (default true) --aggregator-extra-endpoints strings endpoints for extra nodes that are needed in signature aggregation --aggregator-log-level string log level to use with signature aggregator (default "Off") --balance uint set the AVAX balance of the validator that will be used for continuous fee on P-Chain --blockchain-genesis-key use genesis allocated key to pay fees for completing the validator's registration (blockchain gas token) --blockchain-key string CLI stored key to use to pay fees for completing the validator's registration (blockchain gas token) --blockchain-private-key string private key to use to pay fees for completing the validator's registration (blockchain gas token) --bls-proof-of-possession string set the BLS proof of possession of the validator to add --bls-public-key string set the BLS public key of the validator to add --cluster string operate on the given cluster --create-local-validator create additional local validator and add it to existing running local node --default-duration (for Subnets, not L1s) set duration so as to validate until primary 
validator ends its period --default-start-time (for Subnets, not L1s) use default start time for subnet validator (5 minutes later for fuji & mainnet, 30 seconds later for devnet) --default-validator-params (for Subnets, not L1s) use default weight/start/duration params for subnet validator --delegation-fee uint16 (PoS only) delegation fee (in bips) (default 100) --devnet operate on a devnet network --disable-owner string P-Chain address that will able to disable the validator with a P-Chain transaction --endpoint string use the given endpoint for network operations -e, --ewoq use ewoq key [fuji/devnet only] -f, --fuji testnet operate on fuji (alias to testnet -h, --help help for addValidator -k, --key string select the key to use [fuji/devnet only] -g, --ledger use ledger instead of key (always true on mainnet, defaults to false on fuji/devnet) --ledger-addrs strings use the given ledger addresses -l, --local operate on a local network -m, --mainnet operate on mainnet --node-endpoint string gather node id/bls from publicly available avalanchego apis on the given endpoint --node-id string node-id of the validator to add --output-tx-path string (for Subnets, not L1s) file path of the add validator tx --partial-sync set primary network partial sync for new validators (default true) --remaining-balance-owner string P-Chain address that will receive any leftover AVAX from the validator when it is removed from Subnet --rpc string connect to validator manager at the given rpc endpoint --stake-amount uint (PoS only) amount of tokens to stake --staking-period duration how long this validator will be staking --start-time string (for Subnets, not L1s) UTC start time when this validator starts validating, in 'YYYY-MM-DD HH:MM:SS' format --subnet-auth-keys strings (for Subnets, not L1s) control keys that will be used to authenticate add validator tx -t, --testnet fuji operate on testnet (alias to fuji) --wait-for-tx-acceptance (for Subnets, not L1s) just issue the add 
validator tx, without waiting for its acceptance (default true) --weight uint set the staking weight of the validator to add (default 20) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### changeOwner The blockchain changeOwner command changes the owner of the subnet of the deployed Blockchain. **Usage:** ```bash avalanche subnet changeOwner [subcommand] [flags] ``` **Flags:** ```bash --cluster string operate on the given cluster --control-keys strings addresses that may make subnet changes --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -e, --ewoq use ewoq key [fuji/devnet] -f, --fuji testnet operate on fuji (alias to testnet -h, --help help for changeOwner -k, --key string select the key to use [fuji/devnet] -g, --ledger use ledger instead of key (always true on mainnet, defaults to false on fuji/devnet) --ledger-addrs strings use the given ledger addresses -l, --local operate on a local network -m, --mainnet operate on mainnet --output-tx-path string file path of the transfer subnet ownership tx -s, --same-control-key use the fee-paying key as control key --subnet-auth-keys strings control keys that will be used to authenticate transfer subnet ownership tx -t, --testnet fuji operate on testnet (alias to fuji) --threshold uint32 required number of control key signatures to make subnet changes --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### changeWeight The blockchain changeWeight command changes the weight of a Subnet Validator. The Subnet has to be a Proof of Authority Subnet-Only Validator Subnet.
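For example, to raise a validator's weight on Fuji (the blockchain name and node ID below are illustrative placeholders):

```bash
# Change the weight of one validator of the hypothetical "myBlockchain"
# Subnet to 30 on Fuji; $NODE_ID stands in for a real NodeID-... value.
avalanche subnet changeWeight myBlockchain \
  --fuji \
  --node-id "$NODE_ID" \
  --weight 30
```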
**Usage:** ```bash avalanche subnet changeWeight [subcommand] [flags] ``` **Flags:** ```bash --cluster string operate on the given cluster --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -e, --ewoq use ewoq key [fuji/devnet only] -f, --fuji testnet operate on fuji (alias to testnet -h, --help help for changeWeight -k, --key string select the key to use [fuji/devnet only] -g, --ledger use ledger instead of key (always true on mainnet, defaults to false on fuji/devnet) --ledger-addrs strings use the given ledger addresses -l, --local operate on a local network -m, --mainnet operate on mainnet --node-id string node-id of the validator -t, --testnet fuji operate on testnet (alias to fuji) --weight uint set the new staking weight of the validator (default 20) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### configure AvalancheGo nodes support several different configuration files. Subnets have their own Subnet config which applies to all chains/VMs in the Subnet. Each chain within the Subnet can have its own chain config. A chain can also have special requirements for the AvalancheGo node configuration itself. This command allows you to set all those files. 
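A minimal sketch, assuming a configuration named `myBlockchain` and locally prepared JSON files (all names and paths are placeholders):

```bash
# Attach a chain config, a Subnet-wide config, and an AvalancheGo node
# config to the hypothetical blockchain "myBlockchain":
avalanche subnet configure myBlockchain \
  --chain-config ./chain-config.json \
  --subnet-config ./subnet-config.json \
  --node-config ./node-config.json
```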
**Usage:** ```bash avalanche subnet configure [subcommand] [flags] ``` **Flags:** ```bash --chain-config string path to the chain configuration -h, --help help for configure --node-config string path to avalanchego node configuration --per-node-chain-config string path to per node chain configuration for local network --subnet-config string path to the subnet configuration --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### create The blockchain create command builds a new genesis file to configure your Blockchain. By default, the command runs an interactive wizard. It walks you through all the steps you need to create your first Blockchain. The tool supports deploying Subnet-EVM, and custom VMs. You can create a custom, user-generated genesis with a custom VM by providing the path to your genesis and VM binaries with the --genesis and --vm flags. By default, running the command with a blockchainName that already exists causes the command to fail. If you'd like to overwrite an existing configuration, pass the -f flag. 
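Two hedged, non-interactive sketches (names, chain ID, and paths are illustrative placeholders):

```bash
# Subnet-EVM based blockchain with an explicit chain ID and token symbol;
# -f overwrites an existing configuration of the same name:
avalanche subnet create myBlockchain --evm --evm-chain-id 98765 --evm-token MYTOK -f

# Custom VM: supply your own genesis file and VM binary:
avalanche subnet create myCustomChain --custom \
  --genesis ./genesis.json \
  --vm ./build/my-vm
```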
**Usage:** ```bash avalanche subnet create [subcommand] [flags] ``` **Flags:** ```bash --custom use a custom VM template --custom-vm-branch string custom vm branch or commit --custom-vm-build-script string custom vm build-script --custom-vm-path string file path of custom vm to use --custom-vm-repo-url string custom vm repository url --debug enable blockchain debugging (default true) --evm use the Subnet-EVM as the base template --evm-chain-id uint chain ID to use with Subnet-EVM --evm-defaults deprecation notice: use '--production-defaults' --evm-token string token symbol to use with Subnet-EVM --external-gas-token use a gas token from another blockchain -f, --force overwrite the existing configuration if one exists --from-github-repo generate custom VM binary from github repository --genesis string file path of genesis to use -h, --help help for create --icm interoperate with other blockchains using ICM --icm-registry-at-genesis setup ICM registry smart contract on genesis [experimental] --latest use latest Subnet-EVM released version, takes precedence over --vm-version --pre-release use latest Subnet-EVM pre-released version, takes precedence over --vm-version --production-defaults use default production settings for your blockchain --proof-of-authority use proof of authority(PoA) for validator management --proof-of-stake use proof of stake(PoS) for validator management --proxy-contract-owner string EVM address that controls ProxyAdmin for TransparentProxy of ValidatorManager contract --reward-basis-points uint (PoS only) reward basis points for PoS Reward Calculator (default 100) --sovereign set to false if creating non-sovereign blockchain (default true) --teleporter interoperate with other blockchains using ICM --test-defaults use default test settings for your blockchain --validator-manager-owner string EVM address that controls Validator Manager Owner --vm string file path of custom vm to use. 
alias to custom-vm-path --vm-version string version of Subnet-EVM template to use --warp generate a vm with warp support (needed for ICM) (default true) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### delete The blockchain delete command deletes an existing blockchain configuration. **Usage:** ```bash avalanche subnet delete [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for delete --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### deploy The blockchain deploy command deploys your Blockchain configuration locally, to Fuji Testnet, or to Mainnet. At the end of the call, the command prints the RPC URL you can use to interact with the Subnet. Avalanche-CLI only supports deploying an individual Blockchain once per network. Subsequent attempts to deploy the same Blockchain to the same network (local, Fuji, Mainnet) aren't allowed. If you'd like to redeploy a Blockchain locally for testing, you must first call avalanche network clean to reset all deployed chain state. Subsequent local deploys redeploy the chain with fresh state. You can deploy the same Blockchain to multiple networks, so you can take your locally tested Subnet and deploy it on Fuji or Mainnet. 
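A sketch of the local-then-Fuji flow described above, with a hypothetical configuration name `myBlockchain`:

```bash
# First local deploy; the command prints the RPC URL on success:
avalanche subnet deploy myBlockchain --local

# Redeploying locally requires a state reset first:
avalanche network clean
avalanche subnet deploy myBlockchain --local

# The same configuration can then go to Fuji, e.g. signing with a Ledger:
avalanche subnet deploy myBlockchain --fuji --ledger
```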
**Usage:** ```bash avalanche subnet deploy [subcommand] [flags] ``` **Flags:** ```bash --aggregator-allow-private-peers allow the signature aggregator to connect to peers with private IP (default true) --aggregator-extra-endpoints strings endpoints for extra nodes that are needed in signature aggregation --aggregator-log-level string log level to use with signature aggregator (default "Off") --avalanchego-path string use this avalanchego binary path --avalanchego-version string use this version of avalanchego (ex: v1.17.12) (default "latest-prerelease") --balance float set the AVAX balance of each bootstrap validator that will be used for continuous fee on P-Chain (default 0.1) --blockchain-genesis-key use genesis allocated key to fund validator manager initialization --blockchain-key string CLI stored key to use to fund validator manager initialization --blockchain-private-key string private key to use to fund validator manager initialization --bootstrap-endpoints strings take validator node info from the given endpoints --bootstrap-filepath string JSON file path that provides details about bootstrap validators, leave Node-ID and BLS values empty if using --generate-node-id=true --cchain-funding-key string key to be used to fund relayer account on cchain --cchain-icm-key string key to be used to pay for ICM deploys on C-Chain --change-owner-address string address that will receive change if node is no longer L1 validator --cluster string operate on the given cluster --control-keys strings addresses that may make subnet changes --convert-only avoid node track, restart and poa manager setup --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -e, --ewoq use ewoq key [fuji/devnet deploy only] -f, --fuji testnet operate on fuji (alias to testnet --generate-node-id whether to create new node id for bootstrap validators (Node-ID and BLS values in bootstrap JSON file will be overridden if --bootstrap-filepath flag is used) 
-h, --help help for deploy --icm-key string key to be used to pay for ICM deploys (default "cli-teleporter-deployer") --icm-version string ICM version to deploy (default "latest") -k, --key string select the key to use [fuji/devnet deploy only] -g, --ledger use ledger instead of key (always true on mainnet, defaults to false on fuji/devnet) --ledger-addrs strings use the given ledger addresses -l, --local operate on a local network -m, --mainnet operate on mainnet --mainnet-chain-id uint32 use different ChainID for mainnet deployment --noicm skip automatic ICM deploy --num-bootstrap-validators int (only if --generate-node-id is true) number of bootstrap validators to set up in the sovereign L1 --num-local-nodes int number of nodes to be created on local machine --num-nodes uint32 number of nodes to be created on local network deploy (default 2) --output-tx-path string file path of the blockchain creation tx --partial-sync set primary network partial sync for new validators (default true) --pos-maximum-stake-amount uint maximum stake amount (default 1000) --pos-maximum-stake-multiplier uint8 maximum stake multiplier (default 1) --pos-minimum-delegation-fee uint16 minimum delegation fee (default 1) --pos-minimum-stake-amount uint minimum stake amount (default 1) --pos-minimum-stake-duration uint minimum stake duration (default 100) --pos-weight-to-value-factor uint weight to value factor (default 1) --relay-cchain relay C-Chain as source and destination (default true) --relayer-allow-private-ips allow relayer to connect to private IPs (default true) --relayer-amount float automatically fund relayer fee payments with the given amount --relayer-key string key to be used by default both for rewards and to pay fees --relayer-log-level string log level to be used for relayer logs (default "info") --relayer-path string relayer binary to use --relayer-version string relayer version to deploy (default "latest-prerelease") -s, --same-control-key use the fee-paying key
as control key --skip-icm-deploy skip automatic ICM deploy --skip-local-teleporter skip automatic ICM deploy on local networks [to be deprecated] --skip-relayer skip relayer deploy --skip-teleporter-deploy skip automatic ICM deploy --subnet-auth-keys strings control keys that will be used to authenticate chain creation -u, --subnet-id string do not create a subnet, deploy the blockchain into the given subnet id --subnet-only only create a subnet --teleporter-messenger-contract-address-path string path to an ICM Messenger contract address file --teleporter-messenger-deployer-address-path string path to an ICM Messenger deployer address file --teleporter-messenger-deployer-tx-path string path to an ICM Messenger deployer tx file --teleporter-registry-bytecode-path string path to an ICM Registry bytecode file --teleporter-version string ICM version to deploy (default "latest") -t, --testnet fuji operate on testnet (alias to fuji) --threshold uint32 required number of control key signatures to make subnet changes --use-local-machine use local machine as a blockchain validator --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### describe The blockchain describe command prints the details of a Blockchain configuration to the console. By default, the command prints a summary of the configuration. By providing the --genesis flag, the command instead prints out the raw genesis file. 
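For instance, with a configuration named `myblockchain` (a placeholder name), the two output modes described above might be invoked as:

```bash
# Print the configuration summary (name is a placeholder)
avalanche subnet describe myblockchain

# Print the raw genesis file instead of the summary
avalanche subnet describe myblockchain --genesis
```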
**Usage:** ```bash avalanche subnet describe [subcommand] [flags] ``` **Flags:** ```bash -g, --genesis Print the genesis to the console directly instead of the summary -h, --help help for describe --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### export The blockchain export command writes the details of an existing Blockchain deploy to a file. The command prompts for an output path. You can also provide one with the --output flag. **Usage:** ```bash avalanche subnet export [subcommand] [flags] ``` **Flags:** ```bash --custom-vm-branch string custom vm branch --custom-vm-build-script string custom vm build-script --custom-vm-repo-url string custom vm repository url -h, --help help for export -o, --output string write the export data to the provided file path --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### import Import blockchain configurations into avalanche-cli. This command suite supports importing from a file created on another computer, or importing from blockchains running public networks (e.g. created manually or with the deprecated subnet-cli). **Usage:** ```bash avalanche subnet import [subcommand] [flags] ``` **Subcommands:** - [`file`](#avalanche-subnet-import-file): The blockchain import command will import a blockchain configuration from a file or a git repository. To import from a file, you can optionally provide the path as a command-line argument. Alternatively, running the command without any arguments triggers an interactive wizard. To import from a repository, go through the wizard. By default, an imported Blockchain doesn't overwrite an existing Blockchain with the same name. To allow overwrites, provide the --force flag.
- [`public`](#avalanche-subnet-import-public): The blockchain import public command imports a Blockchain configuration from a running network. By default, an imported Blockchain doesn't overwrite an existing Blockchain with the same name. To allow overwrites, provide the --force flag. **Flags:** ```bash -h, --help help for import --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### import file The blockchain import command will import a blockchain configuration from a file or a git repository. To import from a file, you can optionally provide the path as a command-line argument. Alternatively, running the command without any arguments triggers an interactive wizard. To import from a repository, go through the wizard. By default, an imported Blockchain doesn't overwrite an existing Blockchain with the same name. To allow overwrites, provide the --force flag. **Usage:** ```bash avalanche subnet import file [subcommand] [flags] ``` **Flags:** ```bash --branch string the repo branch to use if downloading a new repo -f, --force overwrite the existing configuration if one exists -h, --help help for file --repo string the repo to import (ex: ava-labs/avalanche-plugins-core) or url to download the repo from --subnet string the subnet configuration to import from the provided repo --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### import public The blockchain import public command imports a Blockchain configuration from a running network. By default, an imported Blockchain doesn't overwrite an existing Blockchain with the same name. To allow overwrites, provide the --force flag. 
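As a sketch (the blockchain ID is a placeholder), importing a subnet-evm chain from Fuji might look like:

```bash
# Import a running subnet-evm blockchain from Fuji, overwriting any local
# configuration with the same name (blockchain ID is a placeholder)
avalanche subnet import public --fuji --evm --blockchain-id <blockchainID> --force
```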
**Usage:** ```bash avalanche subnet import public [subcommand] [flags] ``` **Flags:** ```bash --blockchain-id string the blockchain ID --cluster string operate on the given cluster --custom use a custom VM template --devnet operate on a devnet network --endpoint string use the given endpoint for network operations --evm import a subnet-evm --force overwrite the existing configuration if one exists -f, --fuji testnet operate on fuji (alias to testnet -h, --help help for public -l, --local operate on a local network -m, --mainnet operate on mainnet --node-url string [optional] URL of an already running subnet validator -t, --testnet fuji operate on testnet (alias to fuji) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### join The subnet join command configures your validator node to begin validating a new Blockchain. To complete this process, you must have access to the machine running your validator. If the CLI is running on the same machine as your validator, it can generate or update your node's config file automatically. Alternatively, the command can print the necessary instructions to update your node manually. To complete the validation process, the Subnet's admins must add the NodeID of your validator to the Subnet's allow list by calling addValidator with your NodeID. After you update your validator's config, you need to restart your validator manually. If you provide the --avalanchego-config flag, this command attempts to edit the config file at that path. This command currently only supports Blockchains deployed on the Fuji Testnet and Mainnet. 
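A hypothetical invocation, assuming the CLI runs on the validator machine itself (the blockchain name and config path are examples):

```bash
# Configure this node to validate "myblockchain" on Fuji, editing the
# avalanchego config file in place
avalanche subnet join myblockchain --fuji \
  --avalanchego-config "$HOME/.avalanchego/configs/node.json"
```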
**Usage:** ```bash avalanche subnet join [subcommand] [flags] ``` **Flags:** ```bash --avalanchego-config string file path of the avalanchego config file --cluster string operate on the given cluster --data-dir string path of avalanchego's data directory --devnet operate on a devnet network --endpoint string use the given endpoint for network operations --force-write if true, skip the prompt to overwrite the config file -f, --fuji testnet operate on fuji (alias to testnet -h, --help help for join -k, --key string select the key to use [fuji only] -g, --ledger use ledger instead of key (always true on mainnet, defaults to false on fuji) --ledger-addrs strings use the given ledger addresses -l, --local operate on a local network -m, --mainnet operate on mainnet --node-id string set the NodeID of the validator to check --plugin-dir string file path of avalanchego's plugin directory --print if true, print the manual config without prompting --stake-amount uint amount of tokens to stake on validator --staking-period duration how long validator validates for after start time --start-time string start time that validator starts validating -t, --testnet fuji operate on testnet (alias to fuji) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### list The blockchain list command prints the names of all created Blockchain configurations. Without any flags, it prints some general, static information about the Blockchain. With the --deployed flag, the command shows additional information including the VMID, BlockchainID and SubnetID.
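For example, to include the deploy information mentioned above:

```bash
# List all configurations with deploy details (VMID, BlockchainID, SubnetID)
avalanche subnet list --deployed
```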
**Usage:** ```bash avalanche subnet list [subcommand] [flags] ``` **Flags:** ```bash --deployed show additional deploy information -h, --help help for list --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### publish The blockchain publish command publishes the Blockchain's VM to a repository. **Usage:** ```bash avalanche subnet publish [subcommand] [flags] ``` **Flags:** ```bash --alias string We publish to a remote repo, but identify the repo locally under a user-provided alias (e.g. myrepo). --force If true, ignores if the subnet has been published in the past, and attempts a forced publish. -h, --help help for publish --no-repo-path string Do not let the tool manage file publishing, but have it only generate the files and put them in the location given by this flag. --repo-url string The URL of the repo where we are publishing --subnet-file-path string Path to the Subnet description file. If not given, a prompting sequence will be initiated. --vm-file-path string Path to the VM description file. If not given, a prompting sequence will be initiated. --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### removeValidator The blockchain removeValidator command stops a whitelisted, subnet network validator from validating your deployed Blockchain. To remove the validator from the Subnet's allow list, provide the validator's unique NodeID. You can bypass these prompts by providing the values with flags. 
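A non-interactive invocation might look like the following sketch (the blockchain name and NodeID are placeholders):

```bash
# Remove a validator from "myblockchain" on Fuji without prompts
avalanche subnet removeValidator myblockchain --fuji --node-id <NodeID>
```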
**Usage:** ```bash avalanche subnet removeValidator [subcommand] [flags] ``` **Flags:** ```bash --aggregator-allow-private-peers allow the signature aggregator to connect to peers with private IP (default true) --aggregator-extra-endpoints strings endpoints for extra nodes that are needed in signature aggregation --aggregator-log-level string log level to use with signature aggregator (default "Off") --blockchain-genesis-key use genesis allocated key to pay fees for completing the validator's removal (blockchain gas token) --blockchain-key string CLI stored key to use to pay fees for completing the validator's removal (blockchain gas token) --blockchain-private-key string private key to use to pay fees for completing the validator's removal (blockchain gas token) --cluster string operate on the given cluster --devnet operate on a devnet network --endpoint string use the given endpoint for network operations --force force validator removal even if it's not getting rewarded -f, --fuji testnet operate on fuji (alias to testnet -h, --help help for removeValidator -k, --key string select the key to use [fuji deploy only] -g, --ledger use ledger instead of key (always true on mainnet, defaults to false on fuji) --ledger-addrs strings use the given ledger addresses -l, --local operate on a local network -m, --mainnet operate on mainnet --node-endpoint string remove validator that responds to the given endpoint --node-id string node-id of the validator --output-tx-path string (for non-SOV blockchain only) file path of the removeValidator tx --rpc string connect to validator manager at the given rpc endpoint --subnet-auth-keys strings (for non-SOV blockchain only) control keys that will be used to authenticate the removeValidator tx -t, --testnet fuji operate on testnet (alias to fuji) --uptime uint validator's uptime in seconds. 
If not provided, it will be automatically calculated --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### stats The blockchain stats command prints validator statistics for the given Blockchain. **Usage:** ```bash avalanche subnet stats [subcommand] [flags] ``` **Flags:** ```bash --cluster string operate on the given cluster --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet -h, --help help for stats -l, --local operate on a local network -m, --mainnet operate on mainnet -t, --testnet fuji operate on testnet (alias to fuji) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### upgrade The blockchain upgrade command suite provides a collection of tools for updating your developmental and deployed Blockchains. **Usage:** ```bash avalanche subnet upgrade [subcommand] [flags] ``` **Subcommands:** - [`apply`](#avalanche-subnet-upgrade-apply): Apply generated upgrade bytes to running Blockchain nodes to trigger a network upgrade. For public networks (Fuji Testnet or Mainnet), to complete this process, you must have access to the machine running your validator. If the CLI is running on the same machine as your validator, it can manipulate your node's configuration automatically. Alternatively, the command can print the necessary instructions to upgrade your node manually. After you update your validator's configuration, you need to restart your validator manually. If you provide the --avalanchego-chain-config-dir flag, this command attempts to write the upgrade file at that path. 
Refer to https://docs.avax.network/nodes/maintain/chain-config-flags#subnet-chain-configs for related documentation. - [`export`](#avalanche-subnet-upgrade-export): Export the upgrade bytes file to a location of choice on disk - [`generate`](#avalanche-subnet-upgrade-generate): The blockchain upgrade generate command builds a new upgrade.json file to customize your Blockchain. It guides the user through the process using an interactive wizard. - [`import`](#avalanche-subnet-upgrade-import): Import the upgrade bytes file into the local environment - [`print`](#avalanche-subnet-upgrade-print): Print the upgrade.json file content - [`vm`](#avalanche-subnet-upgrade-vm): The blockchain upgrade vm command enables the user to upgrade their Blockchain's VM binary. The command can upgrade both local Blockchains and publicly deployed Blockchains on Fuji and Mainnet. The command walks the user through an interactive wizard. The user can skip the wizard by providing command line flags. **Flags:** ```bash -h, --help help for upgrade --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### upgrade apply Apply generated upgrade bytes to running Blockchain nodes to trigger a network upgrade. For public networks (Fuji Testnet or Mainnet), to complete this process, you must have access to the machine running your validator. If the CLI is running on the same machine as your validator, it can manipulate your node's configuration automatically. Alternatively, the command can print the necessary instructions to upgrade your node manually. After you update your validator's configuration, you need to restart your validator manually. If you provide the --avalanchego-chain-config-dir flag, this command attempts to write the upgrade file at that path. 
Refer to https://docs.avax.network/nodes/maintain/chain-config-flags#subnet-chain-configs for related documentation. **Usage:** ```bash avalanche subnet upgrade apply [subcommand] [flags] ``` **Flags:** ```bash --avalanchego-chain-config-dir string avalanchego's chain config file directory (default "$HOME/.avalanchego/chains") --config create upgrade config for future subnet deployments (same as generate) --force If true, don't prompt for confirmation of timestamps in the past --fuji fuji apply upgrade existing fuji deployment (alias for `testnet`) -h, --help help for apply --local local apply upgrade existing local deployment --mainnet mainnet apply upgrade existing mainnet deployment --print if true, print the manual config without prompting (for public networks only) --testnet testnet apply upgrade existing testnet deployment (alias for `fuji`) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### upgrade export Export the upgrade bytes file to a location of choice on disk. **Usage:** ```bash avalanche subnet upgrade export [subcommand] [flags] ``` **Flags:** ```bash --force If true, overwrite a possibly existing file without prompting -h, --help help for export --upgrade-filepath string Export upgrade bytes file to location of choice on disk --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### upgrade generate The blockchain upgrade generate command builds a new upgrade.json file to customize your Blockchain. It guides the user through the process using an interactive wizard.
**Usage:** ```bash avalanche subnet upgrade generate [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for generate --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### upgrade import Import the upgrade bytes file into the local environment **Usage:** ```bash avalanche subnet upgrade import [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for import --upgrade-filepath string Import upgrade bytes file into local environment --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### upgrade print Print the upgrade.json file content **Usage:** ```bash avalanche subnet upgrade print [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for print --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### upgrade vm The blockchain upgrade vm command enables the user to upgrade their Blockchain's VM binary. The command can upgrade both local Blockchains and publicly deployed Blockchains on Fuji and Mainnet. The command walks the user through an interactive wizard. The user can skip the wizard by providing command line flags. 
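For instance, skipping the wizard with flags might look like (the blockchain name is a placeholder):

```bash
# Upgrade the local deployment of "myblockchain" to the latest VM release
avalanche subnet upgrade vm myblockchain --local --latest
```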
**Usage:** ```bash avalanche subnet upgrade vm [subcommand] [flags] ``` **Flags:** ```bash --binary string Upgrade to custom binary --config upgrade config for future subnet deployments --fuji fuji upgrade existing fuji deployment (alias for `testnet`) -h, --help help for vm --latest upgrade to latest version --local local upgrade existing local deployment --mainnet mainnet upgrade existing mainnet deployment --plugin-dir string plugin directory to automatically upgrade VM --print print instructions for upgrading --testnet testnet upgrade existing testnet deployment (alias for `fuji`) --version string Upgrade to custom version --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### validators The blockchain validators command lists the validators of a blockchain's subnet and provides several statistics about them. **Usage:** ```bash avalanche subnet validators [subcommand] [flags] ``` **Flags:** ```bash --cluster string operate on the given cluster --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet -h, --help help for validators -l, --local operate on a local network -m, --mainnet operate on mainnet -t, --testnet fuji operate on testnet (alias to fuji) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### vmid The blockchain vmid command prints the virtual machine ID (VMID) for the given Blockchain. 
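For example (the configuration name is a placeholder):

```bash
# Print the VMID for the configuration named "myblockchain"
avalanche subnet vmid myblockchain
```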
**Usage:** ```bash avalanche subnet vmid [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for vmid --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ## avalanche teleporter The messenger command suite provides a collection of tools for interacting with ICM messenger contracts. **Usage:** ```bash avalanche teleporter [subcommand] [flags] ``` **Subcommands:** - [`deploy`](#avalanche-teleporter-deploy): Deploys ICM Messenger and Registry into a given L1. - [`sendMsg`](#avalanche-teleporter-sendmsg): Sends an ICM message between two subnets and waits for its reception. **Flags:** ```bash -h, --help help for teleporter --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### deploy Deploys ICM Messenger and Registry into a given L1.
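As an illustrative sketch (the blockchain name is a placeholder), deploying to a CLI-managed blockchain on a local network:

```bash
# Deploy the ICM Messenger and Registry to "myblockchain" on a local network
avalanche teleporter deploy --local --blockchain myblockchain
```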
**Usage:** ```bash avalanche teleporter deploy [subcommand] [flags] ``` **Flags:** ```bash --blockchain string deploy ICM into the given CLI blockchain --blockchain-id string deploy ICM into the given blockchain ID/Alias --c-chain deploy ICM into C-Chain --cchain-key string key to be used to pay fees to deploy ICM to C-Chain --cluster string operate on the given cluster --deploy-messenger deploy ICM Messenger (default true) --deploy-registry deploy ICM Registry (default true) --devnet operate on a devnet network --endpoint string use the given endpoint for network operations --force-registry-deploy deploy ICM Registry even if Messenger has already been deployed -f, --fuji testnet operate on fuji (alias to testnet --genesis-key use genesis allocated key to fund ICM deploy -h, --help help for deploy --include-cchain deploy ICM also to C-Chain --key string CLI stored key to use to fund ICM deploy -l, --local operate on a local network --messenger-contract-address-path string path to a messenger contract address file --messenger-deployer-address-path string path to a messenger deployer address file --messenger-deployer-tx-path string path to a messenger deployer tx file --private-key string private key to use to fund ICM deploy --registry-bytecode-path string path to a registry bytecode file --rpc-url string use the given RPC URL to connect to the subnet -t, --testnet fuji operate on testnet (alias to fuji) --version string version to deploy (default "latest") --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### sendMsg Sends an ICM message between two subnets and waits for its reception.
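A sketch of a local round trip, assuming sendMsg takes the source blockchain, destination blockchain, and message as positional arguments (all names here are placeholders):

```bash
# Send "hello world" from chainA to chainB on a local network and wait
# for delivery, paying source-chain fees with a CLI-stored key
avalanche teleporter sendMsg chainA chainB "hello world" --local --key mykey
```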
**Usage:** ```bash avalanche teleporter sendMsg [subcommand] [flags] ``` **Flags:** ```bash --cluster string operate on the given cluster --dest-rpc string use the given destination blockchain rpc endpoint --destination-address string deliver the message to the given contract destination address --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet --genesis-key use genesis allocated key as message originator and to pay source blockchain fees -h, --help help for sendMsg --hex-encoded given message is hex encoded --key string CLI stored key to use as message originator and to pay source blockchain fees -l, --local operate on a local network --private-key string private key to use as message originator and to pay source blockchain fees --source-rpc string use the given source blockchain rpc endpoint -t, --testnet fuji operate on testnet (alias to fuji) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ## avalanche transaction The transaction command suite provides all of the utilities required to sign multisig transactions. **Usage:** ```bash avalanche transaction [subcommand] [flags] ``` **Subcommands:** - [`commit`](#avalanche-transaction-commit): The transaction commit command commits a transaction by submitting it to the P-Chain. - [`sign`](#avalanche-transaction-sign): The transaction sign command signs a multisig transaction. **Flags:** ```bash -h, --help help for transaction --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### commit The transaction commit command commits a transaction by submitting it to the P-Chain. 
**Usage:** ```bash avalanche transaction commit [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for commit --input-tx-filepath string Path to the transaction signed by all signatories --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### sign The transaction sign command signs a multisig transaction. **Usage:** ```bash avalanche transaction sign [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for sign --input-tx-filepath string Path to the transaction file for signing -k, --key string select the key to use [fuji only] -g, --ledger use ledger instead of key (always true on mainnet, defaults to false on fuji) --ledger-addrs strings use the given ledger addresses --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ## avalanche update Check if an update is available, and prompt the user to install it. **Usage:** ```bash avalanche update [subcommand] [flags] ``` **Flags:** ```bash -c, --confirm Assume yes for installation -h, --help help for update -v, --version version for update --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ## avalanche validator The validator command suite provides a collection of tools for managing validator balances on the P-Chain. A validator's balance is used to pay the continuous P-Chain fee.
When this balance reaches 0, the validator is considered inactive and no longer participates in validating the L1. **Usage:** ```bash avalanche validator [subcommand] [flags] ``` **Subcommands:** - [`getBalance`](#avalanche-validator-getbalance): This command gets the remaining validator P-Chain balance that is available to pay the P-Chain continuous fee - [`increaseBalance`](#avalanche-validator-increasebalance): This command increases the validator P-Chain balance - [`list`](#avalanche-validator-list): This command gets a list of the validators of the L1 **Flags:** ```bash -h, --help help for validator --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### getBalance This command gets the remaining validator P-Chain balance that is available to pay the P-Chain continuous fee. **Usage:** ```bash avalanche validator getBalance [subcommand] [flags] ``` **Flags:** ```bash --cluster string operate on the given cluster --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet -h, --help help for getBalance --l1 string name of L1 -l, --local operate on a local network -m, --mainnet operate on mainnet --node-id string node ID of the validator -t, --testnet fuji operate on testnet (alias to fuji) --validation-id string validation ID of the validator --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### increaseBalance This command increases the validator P-Chain balance. **Usage:** ```bash avalanche validator increaseBalance [subcommand] [flags] ``` **Flags:** ```bash --balance float amount of AVAX to increase validator's balance by --cluster string operate on the given cluster --devnet operate on a
devnet network --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet -h, --help help for increaseBalance -k, --key string select the key to use [fuji/devnet deploy only] --l1 string name of L1 (to increase balance of bootstrap validators only) -l, --local operate on a local network -m, --mainnet operate on mainnet --node-id string node ID of the validator -t, --testnet fuji operate on testnet (alias to fuji) --validation-id string validation ID of the validator --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### list This command gets a list of the validators of the L1. **Usage:** ```bash avalanche validator list [subcommand] [flags] ``` **Flags:** ```bash --cluster string operate on the given cluster --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet -h, --help help for list -l, --local operate on a local network -m, --mainnet operate on mainnet -t, --testnet fuji operate on testnet (alias to fuji) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` # AllowList Interface (/docs/avalanche-l1s/precompiles/allowlist-interface) --- title: AllowList Interface description: The AllowList interface is used by many default precompiles to permission access to the features they provide. --- ## Overview The AllowList is a security feature used by precompiles to manage which addresses have permission to interact with certain contract functionalities. It provides a consistent role-based permission system inherited by all precompiles that use it.
| Property | Value | |----------|-------| | **Address** | Inherited by each precompile | | **ConfigKey** | N/A (Interface only) | ## Role-Based Permissions The AllowList implements a consistent role-based permission system: | Role | Value | Description | Permissions | |------|-------|-------------|-------------| | Admin | 2 | Can manage all roles | Can add/remove any role (Admin, Manager, Enabled) | | Manager | 3 | Can manage enabled addresses | Can add/remove only Enabled addresses | | Enabled | 1 | Basic permissions | Can use the precompile's functionality | | None | 0 | No permissions | Cannot use the precompile or manage permissions | Each precompile that uses the AllowList interface follows this permission structure, though the specific actions allowed for "Enabled" addresses vary depending on the precompile's purpose. For example: - In the Contract Deployer AllowList, "Enabled" addresses can deploy contracts - In the Transaction AllowList, "Enabled" addresses can submit transactions - In the Native Minter, "Enabled" addresses can mint tokens ## Interface The AllowList interface is defined as follows: ```solidity //SPDX-License-Identifier: MIT pragma solidity ^0.8.24; interface IAllowList { event RoleSet(uint256 indexed role, address indexed account, address indexed sender, uint256 oldRole); // Set [addr] to have the admin role over the precompile contract. function setAdmin(address addr) external; // Set [addr] to be enabled on the precompile contract. function setEnabled(address addr) external; // Set [addr] to have the manager role over the precompile contract. function setManager(address addr) external; // Set [addr] to have no role for the precompile contract. function setNone(address addr) external; // Read the status of [addr]. function readAllowList(address addr) external view returns (uint256 role); } ``` ## Implementation The AllowList interface is implemented by multiple precompiles in the Subnet-EVM. 
You can find the core implementation in the [subnet-evm repository](https://github.com/ava-labs/subnet-evm/blob/master/precompile/allowlist/allowlist.go). ## Precompiles Using AllowList Several precompiles in Subnet-EVM use the AllowList interface: - [Deployer AllowList](/docs/avalanche-l1s/precompiles/deployer-allowlist) - [Transaction AllowList](/docs/avalanche-l1s/precompiles/transaction-allowlist) - [Native Minter](/docs/avalanche-l1s/precompiles/native-minter) - [Fee Manager](/docs/avalanche-l1s/precompiles/fee-manager) - [Reward Manager](/docs/avalanche-l1s/precompiles/reward-manager) # Deployer AllowList (/docs/avalanche-l1s/precompiles/deployer-allowlist) --- title: Deployer AllowList description: Control which addresses can deploy smart contracts on your Avalanche L1 blockchain. --- ## Overview The Contract Deployer Allowlist allows you to maintain a controlled environment where only authorized addresses can deploy new smart contracts. This is particularly useful for: - Maintaining a curated ecosystem of verified contracts - Preventing malicious contract deployments - Implementing KYC/AML requirements for contract deployers | Property | Value | |----------|-------| | **Address** | `0x0200000000000000000000000000000000000000` | | **ConfigKey** | `contractDeployerAllowListConfig` | ## Configuration You can activate this precompile in your genesis file: ```json { "config": { "contractDeployerAllowListConfig": { "blockTimestamp": 0, "adminAddresses": ["0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC"] } } } ``` By enabling this feature, you can define which addresses are allowed to deploy smart contracts and manage these permissions over time. 
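These permissions can also be managed from a contract rather than an EOA. The sketch below uses a hypothetical `DeployerGate` helper and assumes the contract itself has been granted the Admin or Manager role on the allowlist; it calls the Deployer AllowList at its fixed precompile address:

```solidity
//SPDX-License-Identifier: MIT
pragma solidity ^0.8.24;

// Minimal subset of the AllowList interface used by this sketch.
interface IAllowList {
    function setEnabled(address addr) external;
    function setNone(address addr) external;
    function readAllowList(address addr) external view returns (uint256 role);
}

// Hypothetical helper that grants and revokes contract-deployment rights.
// Calls revert unless this contract holds a sufficient role on the allowlist.
contract DeployerGate {
    IAllowList constant DEPLOYER_LIST =
        IAllowList(0x0200000000000000000000000000000000000000);

    function allowDeployer(address dev) external {
        DEPLOYER_LIST.setEnabled(dev);
    }

    function revokeDeployer(address dev) external {
        DEPLOYER_LIST.setNone(dev);
    }

    // Any nonzero role (Enabled, Admin, Manager) can deploy contracts.
    function isDeployer(address dev) external view returns (bool) {
        return DEPLOYER_LIST.readAllowList(dev) != 0;
    }
}
```

In practice you would also restrict who may call `allowDeployer` and `revokeDeployer`, for example with an `onlyOwner` modifier or a multi-sig.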
## Interface The Contract Deployer Allowlist implements the [AllowList interface](/docs/avalanche-l1s/precompiles/allowlist-interface): ```solidity //SPDX-License-Identifier: MIT pragma solidity ^0.8.24; interface IAllowList { event RoleSet(uint256 indexed role, address indexed account, address indexed sender, uint256 oldRole); // Set [addr] to have the admin role over the precompile contract. function setAdmin(address addr) external; // Set [addr] to be enabled on the precompile contract. function setEnabled(address addr) external; // Set [addr] to have the manager role over the precompile contract. function setManager(address addr) external; // Set [addr] to have no role for the precompile contract. function setNone(address addr) external; // Read the status of [addr]. function readAllowList(address addr) external view returns (uint256 role); } ``` ## Permissions Management The Deployer Allowlist uses the [AllowList interface](/docs/avalanche-l1s/precompiles/allowlist-interface) to manage permissions. This provides a consistent way to: - Assign and revoke deployment permissions - Manage admin and manager roles - Control who can deploy contracts For detailed information about the role-based permission system and available functions, see the [AllowList interface documentation](/docs/avalanche-l1s/precompiles/allowlist-interface). ## Best Practices 1. **Initial Setup**: Always configure at least one admin address in the genesis file to ensure you can manage permissions after deployment. 2. **Role Management**: - Use Admin roles sparingly and secure their private keys - Assign Manager roles to trusted entities who need to manage user access - Regularly audit the list of enabled addresses 3. **Security Considerations**: - Keep private keys of admin addresses secure - Implement a multi-sig wallet as an admin for additional security - Maintain an off-chain record of role assignments 4. 
**Monitoring**: - Monitor the `RoleSet` events to track permission changes - Regularly audit the enabled addresses list - Keep documentation of why each address was granted permissions ## Implementation You can find the implementation in the [subnet-evm repository](https://github.com/ava-labs/subnet-evm/blob/master/precompile/contracts/deployerallowlist/contract.go). ## Interacting with the Precompile For information on how to interact with this precompile, see: - [Interacting with Precompiles](/docs/avalanche-l1s/precompiles/interacting-with-precompiles) - [Deployer Allowlist Console](/console/l1-access-restrictions/deployer-allowlist) # Fee Manager (/docs/avalanche-l1s/precompiles/fee-manager) --- title: Fee Manager description: Configure dynamic fee parameters and gas costs for your Avalanche L1 blockchain. --- ## Overview The Fee Manager allows you to configure the parameters of the dynamic fee algorithm on-chain. This gives you control over: - Gas limits and target block rates - Base fee parameters - Block gas cost parameters | Property | Value | |----------|-------| | **Address** | `0x0200000000000000000000000000000000000003` | | **ConfigKey** | `feeManagerConfig` | ## Configuration You can activate this precompile in your genesis file: ```json { "config": { "feeManagerConfig": { "blockTimestamp": 0, "adminAddresses": ["0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC"], "initialFeeConfig": { "gasLimit": 20000000, "targetBlockRate": 2, "minBaseFee": 1000000000, "targetGas": 100000000, "baseFeeChangeDenominator": 48, "minBlockGasCost": 0, "maxBlockGasCost": 10000000, "blockGasCostStep": 500000 } } } } ``` The following parameters were deprecated by the Granite upgrade: - `targetBlockRate` - `minBlockGasCost` - `maxBlockGasCost` - `blockGasCostStep` ## Fee Parameters The Fee Manager allows configuration of the following parameters: | Parameter | Description | Recommended Range | |-----------|-------------|------------------| | gasLimit | Maximum gas allowed per block 
| 8M - 100M | | targetBlockRate | Target time between blocks (seconds) | 2 - 10 | | minBaseFee | Minimum base fee (in wei) | 25 - 500 gwei | | targetGas | Target gas spending over the last 10 seconds | 5M - 50M | | baseFeeChangeDenominator | Controls how quickly base fee changes | 8 - 1000 | | minBlockGasCost | Minimum gas cost for a block | 0 - 1B | | maxBlockGasCost | Maximum gas cost for a block | > minBlockGasCost | | blockGasCostStep | How quickly block gas cost changes | < 5M | ## Interface ```solidity interface IFeeManager { struct FeeConfig { uint256 gasLimit; uint256 targetBlockRate; uint256 minBaseFee; uint256 targetGas; uint256 baseFeeChangeDenominator; uint256 minBlockGasCost; uint256 maxBlockGasCost; uint256 blockGasCostStep; } event FeeConfigChanged(address indexed sender, FeeConfig oldFeeConfig, FeeConfig newFeeConfig); function setFeeConfig( uint256 gasLimit, uint256 targetBlockRate, uint256 minBaseFee, uint256 targetGas, uint256 baseFeeChangeDenominator, uint256 minBlockGasCost, uint256 maxBlockGasCost, uint256 blockGasCostStep ) external; function getFeeConfig() external view returns ( uint256 gasLimit, uint256 targetBlockRate, uint256 minBaseFee, uint256 targetGas, uint256 baseFeeChangeDenominator, uint256 minBlockGasCost, uint256 maxBlockGasCost, uint256 blockGasCostStep ); function getFeeConfigLastChangedAt() external view returns (uint256 blockNumber); } ``` ## Access Control and Additional Features The FeeManager precompile uses the [AllowList interface](/docs/avalanche-l1s/precompiles/allowlist-interface) to restrict access to its functionality. In addition to the AllowList interface, the FeeManager adds the following capabilities: - `getFeeConfig`: retrieves the current dynamic fee config - `getFeeConfigLastChangedAt`: retrieves the block number of the last block where the fee config was updated - `setFeeConfig`: sets the dynamic fee config on chain. This function can only be called by an Admin, Manager, or Enabled address.
- `FeeConfigChanged`: an event that is emitted when the fee config is updated. Topics include the sender, the old fee config, and the new fee config. You can also get the fee configuration at a block with the `eth_feeConfig` RPC method. For more information see [here](/docs/rpcs/subnet-evm#eth_feeconfig). ## Best Practices 1. **Fee Configuration**: - Test fee changes on testnet first - Monitor network congestion and adjust accordingly - Document rationale for fee parameter changes - Announce changes to validators in advance 2. **Security Considerations**: - Use multi-sig for admin addresses - Monitor events for unauthorized changes - Have a plan for fee parameter adjustments - Keep backup of previous configurations ## Implementation You can find the Fee Manager implementation in the [subnet-evm repository](https://github.com/ava-labs/subnet-evm/blob/master/precompile/contracts/feemanager/contract.go). ## Interacting with the Precompile For information on how to interact with this precompile, see: - [Interacting with Precompiles](/docs/avalanche-l1s/precompiles/interacting-with-precompiles) - [Fee Manager Console](/console/l1-tokenomics/fee-manager) # Interacting with Precompiles (/docs/avalanche-l1s/precompiles/interacting-with-precompiles) --- title: Interacting with Precompiles description: Learn how to interact with Avalanche L1 precompiles using the Builder Hub Developer Console or Remix IDE. --- This guide shows you how to interact with precompiled contracts on your Avalanche L1. For standard precompile implementations, we recommend using the **Builder Hub Developer Console** for the best experience. For custom implementations or advanced use cases, you can use **Remix IDE** with browser wallets. ## Recommended: Using Builder Hub Developer Console The Builder Hub provides dedicated tools for interacting with standard Avalanche L1 precompiles. 
These tools offer: - ✅ **User-friendly interface** - No need to manually enter contract addresses or ABIs - ✅ **Built-in validation** - Prevents common configuration mistakes - ✅ **Connected to your Builder account** - Track your L1s and configurations - ✅ **Visual feedback** - See changes reflected in real-time ### Available Console Tools | Precompile | Console Tool | |------------|--------------| | Fee Manager | [Fee Manager Console](/console/l1-tokenomics/fee-manager) | | Reward Manager | [Reward Manager Console](/console/l1-tokenomics/reward-manager) | | Native Minter | [Native Minter Console](/console/l1-tokenomics/native-minter) | | Contract Deployer Allowlist | [Deployer Allowlist Console](/console/l1-access-restrictions/deployer-allowlist) | | Transaction Allowlist | [Transactor Allowlist Console](/console/l1-access-restrictions/transactor-allowlist) | ### How to Use Console Tools 1. **Navigate** to the appropriate console tool from the table above 2. **Connect** your wallet (Core or MetaMask) 3. **Switch** to your L1 network in your wallet 4. The tool will automatically detect your permissions 5. **Configure** using the visual interface: - For Fee Manager: Adjust gas limits, base fees, and target rates - For Native Minter: Mint tokens to specific addresses - For Allowlists: Add or remove addresses with specific roles - For Reward Manager: Configure fee distribution settings 6. **Review** the transaction details 7. **Submit** and approve in your wallet **Why use the Developer Console?** Using the Builder Hub console tools allows us to: - Provide better support for your L1 - Track feature usage to improve the platform - Build your profile in our builders/developers database - Offer personalized recommendations and resources ### Example Workflows **Configuring Transaction Fees:** 1. Go to [Fee Manager Console](/console/l1-tokenomics/fee-manager) 2. Connect wallet and switch to your L1 3. Adjust fee parameters using sliders and inputs 4. 
See real-time preview of how changes affect gas costs 5. Submit transaction to update fees **Minting Native Tokens:** 1. Go to [Native Minter Console](/console/l1-tokenomics/native-minter) 2. Connect with an admin/manager address 3. Enter recipient address and amount 4. Review the minting transaction 5. Approve to mint tokens instantly **Managing Permissions:** 1. Go to [Deployer Allowlist](/console/l1-access-restrictions/deployer-allowlist) or [Transactor Allowlist](/console/l1-access-restrictions/transactor-allowlist) 2. Connect with an admin address 3. Add addresses with desired roles (Admin, Manager, Enabled) 4. Remove addresses by changing their role to "None" 5. View current allowlist status ## Alternative: Using Remix IDE For custom precompile implementations or if you prefer a code-based approach, you can use Remix IDE to interact with precompiles directly. ### When to Use Remix Use Remix when: - You have a **custom precompile** implementation (non-standard addresses or interfaces) - You need to interact with precompiles **programmatically** - You're **debugging** contract interactions - The Builder Console doesn't support your specific use case ### Prerequisites - Access to an Avalanche L1 where you have admin/manager rights for a precompile - [Core Browser Extension](https://core.app) or MetaMask - Private key for an admin/manager address on your L1 - Your L1's RPC URL and Chain ID ## Setup Your Wallet ### Using Core 1. Install the [Core Browser Extension](https://core.app) 2. Import or create the account with admin/manager privileges 3. Enable **Testnet Mode** (if using testnet): - Open Core extension - Click the **Tools** icon in the sidebar → **Settings** - Toggle **Testnet Mode** on 4. 
Add your L1 network: - Click the networks dropdown - Select **Manage Networks** - Click **Add Network** and enter: - **Network Name**: Your L1 name - **RPC URL**: Your L1's RPC endpoint - **Chain ID**: Your L1's chain ID - **Symbol**: Your native token symbol - **Explorer**: (Optional) Your L1's explorer URL 5. Switch to your L1 network in the dropdown ### Using MetaMask 1. Install MetaMask browser extension 2. Import the account with admin/manager privileges 3. Add your L1 network: - Click the networks dropdown - Click **Add Network** → **Add a network manually** - Enter your L1's network details - Click **Save** ## Connect Remix to Your L1 1. Open [Remix IDE](https://remix.ethereum.org/) in your browser 2. In the left sidebar, click the **Deploy & run transactions** icon 3. In the **Environment** dropdown, select **Injected Provider - MetaMask** (or Core) 4. Approve the connection request in your wallet extension 5. Verify the connection shows your L1's network (e.g., "Custom (11111) network") ## Load Precompile Interfaces You need to load the Solidity interfaces for the precompiles you want to interact with. 
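If loading files from GitHub is inconvenient, you can also create a new file in Remix and paste a minimal interface by hand. For example, this trimmed `IFeeManager` — a subset of the full interface documented on the Fee Manager page — is enough to read and update the fee config:

```solidity
//SPDX-License-Identifier: MIT
pragma solidity ^0.8.24;

// Hand-written subset of IFeeManager; compile this file, then attach it
// to the precompile address using "At Address" as described below.
interface IFeeManager {
    function setFeeConfig(
        uint256 gasLimit,
        uint256 targetBlockRate,
        uint256 minBaseFee,
        uint256 targetGas,
        uint256 baseFeeChangeDenominator,
        uint256 minBlockGasCost,
        uint256 maxBlockGasCost,
        uint256 blockGasCostStep
    ) external;

    function getFeeConfig()
        external
        view
        returns (
            uint256 gasLimit,
            uint256 targetBlockRate,
            uint256 minBaseFee,
            uint256 targetGas,
            uint256 baseFeeChangeDenominator,
            uint256 minBlockGasCost,
            uint256 maxBlockGasCost,
            uint256 blockGasCostStep
        );
}
```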
### Available Precompile Interfaces From the Remix home screen, use **load from GitHub** to import: **Required for all precompiles:** - [IAllowList.sol](https://github.com/ava-labs/subnet-evm/blob/master/contracts/contracts/interfaces/IAllowList.sol) **Specific precompile interfaces:** - [IFeeManager.sol](https://github.com/ava-labs/subnet-evm/blob/master/contracts/contracts/interfaces/IFeeManager.sol) - For fee configuration - [INativeMinter.sol](https://github.com/ava-labs/subnet-evm/blob/master/contracts/contracts/interfaces/INativeMinter.sol) - For minting native tokens - [IAllowList.sol](https://github.com/ava-labs/subnet-evm/blob/master/contracts/contracts/interfaces/IAllowList.sol) - For transaction/deployer allowlists - [IRewardManager.sol](https://github.com/ava-labs/subnet-evm/blob/master/contracts/contracts/interfaces/IRewardManager.sol) - For block rewards ### Compile the Interface 1. In Remix, click the **Solidity Compiler** icon in the left sidebar 2. Select the precompile interface file (e.g., `IFeeManager.sol`) 3. Click **Compile** ## Interact with Precompiles ### Connect to Deployed Precompile Each precompile is deployed at a fixed address on your L1: | Precompile | Address | |------------|---------| | NativeMinter | `0x0200000000000000000000000000000000000001` | | ContractDeployerAllowList | `0x0200000000000000000000000000000000000000` | | FeeManager | `0x0200000000000000000000000000000000000003` | | RewardManager | `0x0200000000000000000000000000000000000004` | | TransactionAllowList | `0x0200000000000000000000000000000000000002` | 1. In Remix, click **Deploy & run transactions** 2. In the **Contract** dropdown, select your compiled interface 3. Paste the precompile address in the **At Address** field 4. Click **At Address** The precompile contract will appear in the **Deployed Contracts** section. ## Example: Using Fee Manager ### Read Current Fee Configuration Anyone can read the current fee configuration (no special permissions required): 1. 
Expand the FeeManager contract in **Deployed Contracts** 2. Click **getFeeConfig** 3. View the current fee parameters: - `gasLimit`: Maximum gas per block - `targetBlockRate`: Target time between blocks (seconds) - `minBaseFee`: Minimum base fee (wei) - `targetGas`: Target gas spending over the last 10 seconds - `baseFeeChangeDenominator`: Rate of base fee adjustment - `minBlockGasCost`: Minimum gas cost for a block - `maxBlockGasCost`: Maximum gas cost for a block - `blockGasCostStep`: Increment for block gas cost ### Update Fee Configuration Only admin, manager, or enabled addresses can update the fee configuration: 1. Ensure you're connected with an authorized address in your wallet 2. Expand **setFeeConfig** in the FeeManager contract 3. Fill in the new fee parameters: ``` gasLimit: 8000000 targetBlockRate: 2 minBaseFee: 25000000000 targetGas: 15000000 baseFeeChangeDenominator: 36 minBlockGasCost: 0 maxBlockGasCost: 1000000 blockGasCostStep: 200000 ``` 4. Click **transact** 5. Approve the transaction in your wallet 6. Wait for transaction confirmation The new fee configuration takes effect immediately after the transaction is accepted. ## Example: Using Native Minter ### Mint Native Tokens Only admin, manager, or enabled addresses can mint native tokens: 1. Expand the NativeMinter contract in **Deployed Contracts** 2. Click on **mintNativeCoin** 3. Fill in the parameters: - `addr`: Recipient address (e.g., `0xB78cbAa319ffBD899951AA30D4320f5818938310`) - `amount`: Amount to mint in wei (e.g., `1000000000000000000` for 1 token) 4. Click **transact** 5. Approve the transaction in your wallet The minted tokens are credited directly to the recipient's balance by the EVM rather than transferred from the sender. ### Check Minting Permissions Anyone can check who has minting permissions: 1. Click **readAllowList** with an address parameter 2.
Returns: - `0`: No permission - `1`: Enabled (can mint) - `2`: Admin (full control) - `3`: Manager (can mint and manage enabled addresses) ## Example: Managing Allow Lists ### Add Address to Allow List Admins can add addresses to transaction or deployer allow lists: 1. Expand the AllowList contract 2. Use **setAdmin**, **setManager**, or **setEnabled**: ``` addr: 0x1234...5678 ``` 3. Click **transact** 4. Approve in wallet ### Remove Address from Allow List 1. Use **setNone** with the address: ``` addr: 0x1234...5678 ``` 2. Click **transact** ### Check Address Status 1. Click **readAllowList**: ``` addr: 0x1234...5678 ``` 2. Returns permission level (0-3) ## Best Practices ### Security - **Never share private keys** for admin addresses - **Use hardware wallets** for admin accounts when possible - **Test on testnet first** before making changes on mainnet - **Use multi-sig contracts** for critical admin operations - **Document all changes** and announce them to validators ### Network Upgrades When enabling precompiles via network upgrades: 1. **Announce upgrades** well in advance on social media and Discord 2. **Coordinate with validators** to ensure they update their nodes 3. **Use upgrade.json** to schedule precompile activation (see [Precompile Upgrades](/docs/avalanche-l1s/upgrade/precompile-upgrades)) 4. **Test the upgrade** on a testnet first 5.
**Monitor** the network after activation ### Troubleshooting **Connection Issues:** - Verify your wallet is connected to the correct network - Check that the RPC URL is accessible - Ensure you have native tokens for gas fees **Transaction Failures:** - Confirm you're using an admin/manager address - Check that the precompile is enabled on your L1 - Verify parameter formats (addresses must be checksummed) - Ensure sufficient gas limit **Precompile Not Found:** - Verify the precompile address is correct - Confirm the precompile is activated in your genesis or upgrade.json - Check that you're on the correct network ## Additional Resources ### Builder Hub Console Tools - [Fee Manager Console](/console/l1-tokenomics/fee-manager) - Configure transaction fees - [Reward Manager Console](/console/l1-tokenomics/reward-manager) - Manage fee distribution - [Native Minter Console](/console/l1-tokenomics/native-minter) - Mint native tokens - [Deployer Allowlist Console](/console/l1-access-restrictions/deployer-allowlist) - Control contract deployment - [Transactor Allowlist Console](/console/l1-access-restrictions/transactor-allowlist) - Control transaction submission ### Documentation - [Precompile Configuration](/docs/avalanche-l1s/evm-configuration/evm-l1-customization) - Overview of precompiles - [Fee Manager](/docs/avalanche-l1s/precompiles/fee-manager) - Fee Manager details - [Reward Manager](/docs/avalanche-l1s/precompiles/reward-manager) - Reward Manager details - [Native Minter](/docs/avalanche-l1s/precompiles/native-minter) - Native Minter details - [Deployer AllowList](/docs/avalanche-l1s/precompiles/deployer-allowlist) - Deployer Allowlist details - [Transaction AllowList](/docs/avalanche-l1s/precompiles/transaction-allowlist) - Transaction Allowlist details - [Warp Messenger](/docs/avalanche-l1s/precompiles/warp-messenger) - Warp Messenger details - [Precompile Upgrades](/docs/avalanche-l1s/upgrade/precompile-upgrades) - Network upgrade process - [AllowList 
Interface](/docs/avalanche-l1s/precompiles/allowlist-interface) - Role-based access control - [Subnet-EVM Contracts](https://github.com/ava-labs/subnet-evm/tree/master/contracts/contracts/interfaces) - Precompile interfaces ## Conclusion For standard Avalanche L1 precompiles, **we strongly recommend using the [Builder Hub Developer Console tools](/console)** for the best experience. These tools provide: - ✅ Guided workflows with validation - ✅ No need to manage contract addresses or ABIs manually - ✅ Integration with your Builder Hub account - ✅ Support from the Builder Hub team For custom implementations or advanced scenarios, the Remix IDE approach provides flexibility to interact with any contract at any address. This is useful for: - Custom precompile implementations - Testing and debugging - Programmatic interactions - Non-standard use cases Whichever method you choose, always test on testnet first and follow security best practices when managing admin keys. # Native Minter (/docs/avalanche-l1s/precompiles/native-minter) --- title: Native Minter description: Manage the minting and burning of native tokens on your Avalanche L1 blockchain. --- ## Overview The Native Minter precompile allows authorized addresses to mint additional tokens after network launch. 
This is useful for: - Implementing programmatic token emission schedules - Providing validator rewards - Supporting ecosystem growth initiatives - Implementing monetary policy | Property | Value | |----------|-------| | **Address** | `0x0200000000000000000000000000000000000001` | | **ConfigKey** | `contractNativeMinterConfig` | ## Configuration You can activate this precompile in your genesis file: ```json { "config": { "contractNativeMinterConfig": { "blockTimestamp": 0, "adminAddresses": ["0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC"] } } } ``` ## Interface ```solidity //SPDX-License-Identifier: MIT pragma solidity ^0.8.24; interface INativeMinter { event NativeCoinMinted(address indexed sender, address indexed recipient, uint256 amount); // Mint [amount] number of native coins and send to [addr] function mintNativeCoin(address addr, uint256 amount) external; // IAllowList event RoleSet(uint256 indexed role, address indexed account, address indexed sender, uint256 oldRole); // Set [addr] to have the admin role over the precompile contract. function setAdmin(address addr) external; // Set [addr] to be enabled on the precompile contract. function setEnabled(address addr) external; // Set [addr] to have the manager role over the precompile contract. function setManager(address addr) external; // Set [addr] to have no role for the precompile contract. function setNone(address addr) external; // Read the status of [addr]. function readAllowList(address addr) external view returns (uint256 role); } ``` The Native Minter precompile uses the [AllowList interface](/docs/avalanche-l1s/precompiles/allowlist-interface) to restrict access to its functionality. ## Best Practices 1. **Minting Policy**: - Define clear minting guidelines - Use multi-sig for admin control - Implement transparent emission schedules - Monitor total supply changes 2. 
**Supply Management**: - Balance minting with burning mechanisms - Consider implementing supply caps - Monitor token velocity and distribution - Plan for long-term sustainability 3. **Security Considerations**: - Use multi-sig wallets for admin addresses - Implement time-locks for large mints - Regular audits of minting activity - Monitor for unusual minting patterns 4. **Validator Incentives**: - Design sustainable reward mechanisms - Balance inflation with network security - Consider validator stake requirements - Plan for long-term validator participation ## Example Implementations ### Programmatic Emission Schedule ```solidity contract EmissionSchedule { INativeMinter public constant NATIVE_MINTER = INativeMinter(0x0200000000000000000000000000000000000001); uint256 public constant EMISSION_RATE = 1000 * 1e18; // 1000 tokens per day uint256 public constant EMISSION_DURATION = 365 days; uint256 public immutable startTime; constructor() { startTime = block.timestamp; } function mintDailyEmission() external { require(block.timestamp < startTime + EMISSION_DURATION, "Emission ended"); NATIVE_MINTER.mintNativeCoin(address(this), EMISSION_RATE); // Distribution logic here } } ``` ### Validator Reward Contract ```solidity contract ValidatorRewards { INativeMinter public constant NATIVE_MINTER = INativeMinter(0x0200000000000000000000000000000000000001); uint256 public constant REWARD_RATE = 10 * 1e18; // 10 tokens per block function distributeRewards(address[] calldata validators) external { uint256 reward = REWARD_RATE / validators.length; for (uint i = 0; i < validators.length; i++) { NATIVE_MINTER.mintNativeCoin(validators[i], reward); } } } ``` ## Implementation You can find the Native Minter implementation in the [subnet-evm repository](https://github.com/ava-labs/subnet-evm/blob/master/precompile/contracts/nativeminter/contract.go). 
## Interacting with the Precompile For information on how to interact with this precompile, see: - [Interacting with Precompiles](/docs/avalanche-l1s/precompiles/interacting-with-precompiles) - [Native Minter Console](/console/l1-tokenomics/native-minter) # Reward Manager (/docs/avalanche-l1s/precompiles/reward-manager) --- title: Reward Manager description: Control how transaction fees are distributed or burned on your Avalanche L1 blockchain. --- ## Overview The Reward Manager allows you to control how transaction fees are handled in your network. You can: - Send fees to a specific address (e.g., treasury) - Allow validators to collect fees - Burn fees entirely | Property | Value | |----------|-------| | **Address** | `0x0200000000000000000000000000000000000004` | | **ConfigKey** | `rewardManagerConfig` | ## Configuration You can activate this precompile in your genesis file: ```json { "config": { "rewardManagerConfig": { "blockTimestamp": 0, "adminAddresses": ["0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC"], "initialRewardConfig": { // Choose one of: "allowFeeRecipients": true, // Allow validators to collect fees "rewardAddress": "0x...", // Send fees to specific address // Empty config = burn fees } } } } ``` ## Reward Mechanisms The Reward Manager supports three mutually exclusive mechanisms: 1. **Validator Fee Collection** (`allowFeeRecipients`) - Validators can specify their own fee recipient addresses - Fees go to the block producer's chosen address - Good for incentivizing network participation 2. **Fixed Reward Address** (`rewardAddress`) - All fees go to a single specified address - Can be a contract or EOA - Useful for treasury or DAO-controlled fee collection 3. 
**Fee Burning** (default) - All transaction fees are burned - Reduces total token supply over time - Similar to Ethereum's EIP-1559 ## Interface ```solidity interface IRewardManager { event RewardAddressChanged( address indexed sender, address indexed oldRewardAddress, address indexed newRewardAddress ); event FeeRecipientsAllowed(address indexed sender); event RewardsDisabled(address indexed sender); function setRewardAddress(address addr) external; function allowFeeRecipients() external; function disableRewards() external; function currentRewardAddress() external view returns (address rewardAddress); function areFeeRecipientsAllowed() external view returns (bool isAllowed); } ``` The Reward Manager precompile uses the [AllowList interface](/docs/avalanche-l1s/precompiles/allowlist-interface) to restrict access to its functionality. ## Best Practices 1. **Reward Management**: - Choose reward mechanism based on network goals - Consider using a multi-sig or DAO as reward address - Monitor fee collection and distribution - Keep documentation of fee policy changes 2. **Security Considerations**: - Use multi-sig for admin addresses - Test reward changes on testnet first - Monitor events for unauthorized changes - Have a plan for reward parameter adjustments ## Implementation You can find the Reward Manager implementation in the [subnet-evm repository](https://github.com/ava-labs/subnet-evm/blob/master/precompile/contracts/rewardmanager/contract.go). ## Interacting with the Precompile For information on how to interact with this precompile, see: - [Interacting with Precompiles](/docs/avalanche-l1s/precompiles/interacting-with-precompiles) - [Reward Manager Console](/console/l1-tokenomics/reward-manager) # Transaction AllowList (/docs/avalanche-l1s/precompiles/transaction-allowlist) --- title: Transaction AllowList description: Control which addresses can submit transactions on your Avalanche L1 blockchain. 
--- ## Overview The Transaction Allowlist enables you to control which addresses can submit transactions to your network. This is essential for: - Creating fully permissioned networks - Implementing KYC/AML requirements for users - Controlling network access during testing or initial deployment | Property | Value | |----------|-------| | **Address** | `0x0200000000000000000000000000000000000002` | | **ConfigKey** | `txAllowListConfig` | ## Configuration You can activate this precompile in your genesis file: ```json { "config": { "txAllowListConfig": { "blockTimestamp": 0, "adminAddresses": ["0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC"] } } } ``` By enabling this feature, you can define which addresses are allowed to submit transactions and manage these permissions over time. ## Interface The Transaction Allowlist implements the [AllowList interface](/docs/avalanche-l1s/precompiles/allowlist-interface): ```solidity //SPDX-License-Identifier: MIT pragma solidity ^0.8.24; interface IAllowList { event RoleSet(uint256 indexed role, address indexed account, address indexed sender, uint256 oldRole); // Set [addr] to have the admin role over the precompile contract. function setAdmin(address addr) external; // Set [addr] to be enabled on the precompile contract. function setEnabled(address addr) external; // Set [addr] to have the manager role over the precompile contract. function setManager(address addr) external; // Set [addr] to have no role for the precompile contract. function setNone(address addr) external; // Read the status of [addr]. function readAllowList(address addr) external view returns (uint256 role); } ``` ## Permissions Management The Transaction Allowlist uses the [AllowList interface](/docs/avalanche-l1s/precompiles/allowlist-interface) to manage permissions. 
This provides a consistent way to: - Assign and revoke transaction permissions - Manage admin and manager roles - Control who can submit transactions For detailed information about the role-based permission system and available functions, see the [AllowList interface documentation](/docs/avalanche-l1s/precompiles/allowlist-interface). ## Best Practices 1. **Initial Setup**: Always configure at least one admin address in the genesis file to ensure you can manage permissions after deployment. 2. **Role Management**: - Use Admin roles sparingly and secure their private keys - Assign Manager roles to trusted entities who need to manage user access - Regularly audit the list of enabled addresses 3. **Security Considerations**: - Keep private keys of admin addresses secure - Implement a multi-sig wallet as an admin for additional security - Maintain an off-chain record of role assignments 4. **Monitoring**: - Monitor the `RoleSet` events to track permission changes - Regularly audit the enabled addresses list - Keep documentation of why each address was granted permissions ## Implementation You can find the implementation in the [subnet-evm repository](https://github.com/ava-labs/subnet-evm/blob/master/precompile/contracts/txallowlist/contract.go). ## Interacting with the Precompile For information on how to interact with this precompile, see: - [Interacting with Precompiles](/docs/avalanche-l1s/precompiles/interacting-with-precompiles) - [Transactor Allowlist Console](/console/l1-access-restrictions/transactor-allowlist) # Warp Messenger (/docs/avalanche-l1s/precompiles/warp-messenger) --- title: Warp Messenger description: Enable cross-chain communication between Avalanche L1s using Avalanche Warp Messaging. edit_url: https://github.com/ava-labs/avalanchego/edit/master/graft/subnet-evm/precompile/contracts/warp/README.md --- ## Overview Avalanche Warp Messaging offers a basic primitive to enable Cross-L1 communication on the Avalanche Network. 
It is intended to allow communication between arbitrary Custom Virtual Machines (including, but not limited to, Subnet-EVM and Coreth). | Property | Value | |----------|-------| | **Address** | `0x0200000000000000000000000000000000000005` | | **ConfigKey** | `warpConfig` | ## How does Avalanche Warp Messaging Work? Avalanche Warp Messaging uses BLS Multi-Signatures with Public-Key Aggregation where every Avalanche validator registers a public key alongside its NodeID on the Avalanche P-Chain. Every node tracking an Avalanche L1 has read access to the Avalanche P-Chain. This provides weighted sets of BLS Public Keys that correspond to the validator sets of each L1 on the Avalanche Network. Avalanche Warp Messaging provides a basic primitive for signing and verifying messages between L1s: the receiving network can verify whether an aggregation of signatures from a set of source L1 validators represents a threshold of stake large enough for the receiving network to process the message. For more details on Avalanche Warp Messaging, see the AvalancheGo [Warp README](https://docs.avax.network/build/cross-chain/awm/deep-dive). ## Configuration The Warp Messenger precompile is enabled by default on all Avalanche L1s and does not require explicit configuration in the genesis file. However, you can configure it if needed: ```json { "config": { "warpConfig": { "blockTimestamp": 0, "quorumNumerator": 67 } } } ``` ### Configuration Parameters - `blockTimestamp`: The timestamp when the precompile should be activated (0 for genesis) - `quorumNumerator`: The percentage of stake weight required to verify a message (default: 67, meaning 67%) Unlike other precompiles, Warp Messenger does not use the AllowList interface - it is available to all addresses by default.
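The quorum check described above compares the aggregated weight of the signing validators against the configured fraction of total stake. A minimal sketch in Go (illustrative only; the function name and shape are made up, not subnet-evm's actual implementation, which also uses a configurable denominator):

```go
package main

import "fmt"

// meetsQuorum reports whether the aggregated weight of the validators that
// signed a Warp message reaches the configured quorum fraction of the total
// validator weight. Sketch only: subnet-evm's real check lives in its warp
// package; this assumes a fixed denominator of 100.
func meetsQuorum(signedWeight, totalWeight, quorumNumerator uint64) bool {
	// signedWeight/totalWeight >= quorumNumerator/100, compared via
	// cross-multiplication to avoid floating point.
	return signedWeight*100 >= totalWeight*quorumNumerator
}

func main() {
	fmt.Println(meetsQuorum(67, 100, 67))  // exactly at the 67% threshold
	fmt.Println(meetsQuorum(66, 100, 67))  // just under the threshold
	fmt.Println(meetsQuorum(100, 100, 67)) // full weight signed
}
```

With `quorumNumerator: 67`, a message verifies only when at least 67% of the receiving validator set's stake weight has signed it.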
## Interface The Warp Messenger precompile provides the following Solidity interface: ```solidity //SPDX-License-Identifier: MIT pragma solidity ^0.8.24; interface IWarpMessenger { event SendWarpMessage(address indexed sender, bytes32 indexed messageID, bytes message); // sendWarpMessage emits a request for the subnet to sign a Warp message with the provided payload // The message emitted in the log is the unsigned Warp message function sendWarpMessage(bytes calldata payload) external returns (bytes32 messageID); // getVerifiedWarpMessage returns the verified Warp message if it exists, otherwise returns (WarpMessage, false) function getVerifiedWarpMessage(uint32 index) external view returns (WarpMessage memory message, bool valid); // getBlockchainID returns the blockchainID of the current chain function getBlockchainID() external view returns (bytes32 blockchainID); } struct WarpMessage { bytes32 sourceChainID; address originSenderAddress; bytes payload; } ``` ## Flow of Sending / Receiving a Warp Message within the EVM The Avalanche Warp Precompile enables this flow to send a message from blockchain A to blockchain B: 1. Call the Warp Precompile `sendWarpMessage` function with the arguments for the `UnsignedMessage` 2. Warp Precompile emits an event / log containing the `UnsignedMessage` specified by the caller of `sendWarpMessage` 3. Network accepts the block containing the `UnsignedMessage` in the log, so that validators are willing to sign the message 4. An off-chain relayer queries the validators for their signatures of the message and aggregates the signatures to create a `SignedMessage` 5. The off-chain relayer encodes the `SignedMessage` as the [predicate](#predicate-encoding) in the AccessList of a transaction to deliver on blockchain B 6. 
The transaction is delivered on blockchain B, the signature is verified prior to executing the block, and the message is accessible via the Warp Precompile's `getVerifiedWarpMessage` during the execution of that transaction ## Warp Precompile Functions The Warp Precompile is broken down into three functions defined in the Solidity interface file [here](https://github.com/ava-labs/subnet-evm/blob/1ab7114c339f866b65cc02dfd586b2ed9041dd0b/contracts/contracts/interfaces/IWarpMessenger.sol). ### sendWarpMessage `sendWarpMessage` is used to send a verifiable message. Calling this function results in sending a message with the following contents: - `SourceChainID` - blockchainID of the sourceChain on the Avalanche P-Chain - `SourceAddress` - `msg.sender` encoded as a 32 byte value that calls `sendWarpMessage` - `Payload` - `payload` argument specified in the call to `sendWarpMessage` emitted as the unindexed data of the resulting log Calling this function will issue a `SendWarpMessage` event from the Warp Precompile. Since the EVM limits the number of topics to 4 including the EventID, this message includes only the topics that would be expected to help filter messages emitted from the Warp Precompile the most. Specifically, the `payload` is not emitted as a topic because each topic must be encoded as a hash. Therefore, we opt to take advantage of each possible topic to maximize the possible filtering for emitted Warp Messages. Additionally, the `SourceChainID` is excluded because anyone parsing the chain can be expected to already know the blockchainID. 
Therefore, the `SendWarpMessage` event includes the indexable attributes: - `sender` - The `messageID` of the unsigned message (sha256 of the unsigned message) The actual `message` is the entire [Avalanche Warp Unsigned Message](https://github.com/ava-labs/avalanchego/blob/master/vms/platformvm/warp/unsigned_message.go#L14) including an [AddressedCall](https://github.com/ava-labs/avalanchego/tree/master/vms/platformvm/warp/payload#readme). The unsigned message is emitted as the unindexed data in the log. ### getVerifiedWarpMessage `getVerifiedWarpMessage` is used to read the contents of the delivered Avalanche Warp Message into the expected format. It returns the message, if present, along with a boolean indicating whether a message is present. To use this function, the transaction must include the signed Avalanche Warp Message encoded in the [predicate](#predicate-encoding) of the transaction. Prior to executing a block, the VM iterates through transactions and pre-verifies all predicates. If a transaction's predicate is invalid, then it is considered invalid to include in the block and is dropped. This leads to the following advantages: 1. The EVM execution does not need to verify the Warp Message at runtime (no signature verification or external calls to the P-Chain) 2. The EVM can deterministically re-execute and re-verify blocks assuming the predicate was verified by the network (e.g., in bootstrapping) This pre-verification is performed using the ProposerVM Block header during [block verification](https://github.com/ava-labs/subnet-evm/blob/1ab7114c339f866b65cc02dfd586b2ed9041dd0b/plugin/evm/block.go#L220) and [block building](https://github.com/ava-labs/subnet-evm/blob/1ab7114c339f866b65cc02dfd586b2ed9041dd0b/miner/worker.go#L200). ### getBlockchainID `getBlockchainID` returns the blockchainID of the blockchain that the VM is running on. This is different from the conventional Ethereum ChainID registered to [ChainList](https://chainlist.org/).
The `blockchainID` in Avalanche refers to the txID that created the blockchain on the Avalanche P-Chain ([docs](https://docs.avax.network/specs/platform-transaction-serialization#unsigned-create-chain-tx)). ## Predicate Encoding Avalanche Warp Messages are encoded as a signed Avalanche [Warp Message](https://github.com/ava-labs/avalanchego/blob/master/vms/platformvm/warp/message.go) where the [UnsignedMessage](https://github.com/ava-labs/avalanchego/blob/master/vms/platformvm/warp/unsigned_message.go)'s payload includes an [AddressedPayload](https://github.com/ava-labs/avalanchego/blob/master/vms/platformvm/warp/payload/payload.go). Since the predicate is encoded into the [Transaction Access List](https://eips.ethereum.org/EIPS/eip-2930), it is packed into 32-byte hashes intended to declare storage slots that should be pre-warmed into the cache prior to transaction execution. Therefore, we use the [Predicate Utils](https://github.com/ava-labs/subnet-evm/blob/master/predicate/Predicate.md) package to encode the actual byte slice of size N into the access list. ## Performance Optimization: Primary Network to Avalanche L1 The Primary Network has a large validator set compared to most Subnets and L1s, which makes Warp signature collection and verification from the entire Primary Network validator set costly. All Subnets and L1s track at least one blockchain of the Primary Network, so we can optimize this by using the validator set of the receiving L1 rather than that of the Primary Network for certain Warp messages. ### Subnets Recall that Avalanche Subnet validators must also validate the Primary Network, so they track all of the blockchains in the Primary Network (X, C, and P-Chains). When an Avalanche Subnet receives a message from a blockchain on the Primary Network, we use the validator set of the receiving Subnet instead of the entire network when validating the message. Sending messages from the X, C, or P-Chain remains unchanged.
However, when the Subnet receives the message, it changes the semantics to the following: 1. Read the `SourceChainID` of the signed message 2. Look up the `SubnetID` that validates `SourceChainID`. In this case it will be the Primary Network's `SubnetID` 3. Look up the validator set of the Subnet (instead of the Primary Network) and the registered BLS Public Keys of the Subnet validators at the P-Chain height specified by the ProposerVM header 4. Continue Warp Message verification using the validator set of the Subnet instead of the Primary Network This means that Primary Network to Subnet communication only requires a threshold of stake on the receiving Subnet to sign the message instead of a threshold of stake for the entire Primary Network. Since the security of the Subnet is provided by trust in its validator set, requiring a threshold of stake from the receiving Subnet's validator set instead of the whole Primary Network does not meaningfully change the security of the receiving L1. Note: this special case is ONLY applied during Warp Message verification. The message sent by the Primary Network will still contain the blockchainID of the Primary Network chain that sent the message as the sourceChainID and signatures will be served by querying the source chain directly. ### L1s Avalanche L1s are only required to sync the P-Chain, but are not required to validate the Primary Network. Therefore, **for L1s, this optimization only applies to Warp messages sent by the P-Chain.** The rest of the description of this optimization in the above section applies to L1s. Note that **in order to properly verify messages from the C-Chain and X-Chain, the Warp precompile must be configured with `requirePrimaryNetworkSigners` set to `true`**. Otherwise, we will attempt to verify the message signature against the receiving L1's validator set, which is not required to track the C-Chain or X-Chain, and therefore will not in general be able to produce a valid Warp message. 
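Combining the keys from the Configuration section above with the `requirePrimaryNetworkSigners` flag just described, a genesis fragment for an L1 that needs to verify messages from the C-Chain or X-Chain might look like the following (a sketch; confirm the exact key name against the subnet-evm release you run):

```json
{
  "config": {
    "warpConfig": {
      "blockTimestamp": 0,
      "quorumNumerator": 67,
      "requirePrimaryNetworkSigners": true
    }
  }
}
```

With this flag set, messages originating from Primary Network chains are verified against the Primary Network's validator set rather than the receiving L1's.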
## Design Considerations ### Re-Processing Historical Blocks Avalanche Warp Messaging depends on the Avalanche P-Chain state at the P-Chain height specified by the ProposerVM block header. Verifying a message requires looking up the validator set of the source L1 on the P-Chain. To support this, Avalanche Warp Messaging uses the ProposerVM header, which includes the P-Chain height it was issued at, as the canonical point to look up the source L1's validator set. This means verifying the Warp Message, and therefore the state transition on a block, depends on state that is external to the blockchain itself: the P-Chain. The Avalanche P-Chain tracks only its current state and reverse diff layers (reversing the changes from past blocks) in order to re-calculate the validator set at a historical height. This means calculating a very old validator set that is used to verify a Warp Message in an old block may become prohibitively expensive. Therefore, we need a heuristic to ensure that the network can correctly re-process old blocks (note: re-processing old blocks is a requirement to perform bootstrapping and is used in some VMs to serve or verify historical data). As a result, we require that the block itself provides a deterministic hint which determines which Avalanche Warp Messages were considered valid/invalid during the block's execution. This ensures that we can always re-process blocks and use the hint to decide whether an Avalanche Warp Message should be treated as valid/invalid, even after the P-Chain state used at the original execution time no longer supports fast lookups. To provide that hint, we've explored two designs: 1. Include a predicate in the transaction to ensure any referenced message is valid 2. Append the results of checking whether a Warp Message is valid/invalid to the block data itself The current implementation uses option (1).
The original reason for this was that the notion of predicates for precompiles was designed with Shared Memory in mind. In the case of shared memory, there is no canonical "P-Chain height" in the block which determines whether or not Avalanche Warp Messages are valid. Instead, the VM interprets a shared memory import operation as valid as soon as the UTXO is available in shared memory. This means that if it were up to the block producer to attach the valid/invalid results for attempted atomic operations, a byzantine block producer could arbitrarily report that such atomic operations were invalid and cause a griefing attack to burn the gas of users that attempted to perform an import. Therefore, a transaction-specified predicate is required to implement the shared memory precompile to prevent such a griefing attack. In contrast, Avalanche Warp Messages are validated within the context of an exact P-Chain height. Therefore, if a block producer attempted to lie about the validity of such a message, the network would interpret that block as invalid. ### Guarantees Offered by Warp Precompile vs. Built on Top #### Guarantees Offered by Warp Precompile The Warp Precompile was designed with the intention of minimizing the trusted computing base for the VM as much as possible. Therefore, it makes several tradeoffs which encourage users to use protocols built ON TOP of the Warp Precompile itself as opposed to directly using the Warp Precompile.
The Warp Precompile itself provides ONLY the following ability: - Emit a verifiable message from (Address A, Blockchain A) to (Address B, Blockchain B) that can be verified by the destination chain #### Explicitly Not Provided / Built on Top The Warp Precompile itself does not provide any guarantees of: - Eventual message delivery (may require re-send on blockchain A and additional assumptions about off-chain relayers and chain progress) - Ordering of messages (requires ordering provided a layer above) - Replay protection (requires replay protection provided a layer above) # Complex Golang VM (/docs/avalanche-l1s/golang-vms/complex-golang-vm) --- title: Complex Golang VM description: In this tutorial, we'll walk through how to build a virtual machine by referencing the BlobVM. --- The [BlobVM](https://github.com/ava-labs/blobvm) is a virtual machine that can be used to implement a decentralized key-value store. A blob (shorthand for "binary large object") is an arbitrary piece of data. BlobVM stores a key-value pair by breaking it apart into multiple chunks stored with their hashes as their keys in the blockchain. A root key-value pair has references to these chunks for lookups. By default, the maximum chunk size is set to 200 KiB. ## Components A VM defines how a blockchain should be built. A block is populated with a set of transactions which mutate the state of the blockchain when executed. When a block with a set of transactions is applied to a given state, a state transition occurs by executing all of the transactions in the block in order and applying the result to the state produced by the previous block. By executing a series of blocks chronologically, anyone can verify and reconstruct the state of the blockchain at an arbitrary point in time.
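The block-application idea described above can be sketched in Go. The types below are hypothetical and deliberately minimal, not BlobVM's real `Block` and `Transaction` types; the point is only that transactions execute in order against a copy of the previous state, and a failing transaction rejects the whole block:

```go
package main

import "fmt"

// State is a toy account-balance map standing in for the VM's database.
type State map[string]uint64

// Tx is a toy transfer transaction (illustrative, not BlobVM's type).
type Tx struct {
	From, To string
	Units    uint64
}

// applyBlock executes the block's transactions in order against a copy of
// the previous state. If any transaction fails, the block is rejected and
// the previous state is left untouched.
func applyBlock(prev State, txs []Tx) (State, error) {
	next := State{}
	for k, v := range prev {
		next[k] = v
	}
	for i, tx := range txs {
		if next[tx.From] < tx.Units {
			return nil, fmt.Errorf("tx %d: insufficient balance", i)
		}
		next[tx.From] -= tx.Units
		next[tx.To] += tx.Units
	}
	return next, nil
}

func main() {
	genesis := State{"alice": 10}
	next, err := applyBlock(genesis, []Tx{{From: "alice", To: "bob", Units: 4}})
	fmt.Println(next, err)
}
```

Replaying every accepted block's `applyBlock` from genesis reconstructs the state at any height, which is exactly the verifiability property the paragraph above describes.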
The BlobVM repository has a few components to handle the lifecycle of tasks from a transaction being issued to a block being accepted across the network: - **Transaction**: A state transition - **Mempool**: Stores pending transactions that haven't been finalized yet - **Network**: Propagates transactions from the mempool to other nodes in the network - **Block**: Defines the block format, how to verify it, and how it should be accepted or rejected across the network - **Block Builder**: Builds blocks by including transactions from the mempool - **Virtual Machine**: Application-level logic. Implements the VM interface needed to interact with Avalanche consensus and defines the blueprint for the blockchain. - **Service**: Exposes APIs so users can interact with the VM - **Factory**: Used to initialize the VM ## Lifecycle of a Transaction A VM will often expose a set of APIs so users can interact with it. In the blockchain, blocks can contain a set of transactions which mutate the blockchain's state. Let's dive into the lifecycle of a transaction from its issuance to its finalization on the blockchain. - A user makes an API request to `service.IssueRawTx` to issue their transaction. This API will deserialize the user's transaction and forward it to the VM - The transaction is submitted to the VM, which adds it to its mempool - The VM periodically gossips new transactions in its mempool to other nodes in the network so they can learn about them - The VM sends the Avalanche consensus engine a message to indicate that it has transactions in the mempool that are ready to be built into a block - The VM proposes the block to consensus - Consensus verifies that the block is valid and well-formed - Consensus gets the network to vote on whether the block should be accepted or rejected. If a block is rejected, its transactions are reclaimed by the mempool so they can be included in a future block.
If a block is accepted, it's finalized by writing it to the blockchain. ## Coding the Virtual Machine We'll dive into a few of the packages that are in the BlobVM repository to learn more about how they work: 1. [`vm`](https://github.com/ava-labs/blobvm/tree/master/vm) - `block_builder.go` - `chain_vm.go` - `network.go` - `service.go` - `vm.go` 2. [`chain`](https://github.com/ava-labs/blobvm/tree/master/chain) - `unsigned_tx.go` - `base_tx.go` - `transfer_tx.go` - `set_tx.go` - `tx.go` - `block.go` - `mempool.go` - `storage.go` - `builder.go` 3. [`mempool`](https://github.com/ava-labs/blobvm/tree/master/mempool) - `mempool.go` ### Transactions The state of the blockchain can only be mutated by getting the network to accept a signed transaction. A signed transaction contains the transaction to be executed alongside the signature of the issuer. The signature is required to cryptographically verify the sender's identity. A VM can define an arbitrary number of unique transaction types to support different operations on the blockchain. The BlobVM implements two different transaction types: - [TransferTx](https://github.com/ava-labs/blobvm/blob/master/chain/transfer_tx.go) - Transfers coins between accounts. - [SetTx](https://github.com/ava-labs/blobvm/blob/master/chain/set_tx.go) - Stores a key-value pair on the blockchain. #### UnsignedTransaction All transactions in the BlobVM implement the common [`UnsignedTransaction`](https://github.com/ava-labs/blobvm/blob/master/chain/unsigned_tx.go) interface, which exposes shared functionality for all transaction types.
```go type UnsignedTransaction interface { Copy() UnsignedTransaction GetBlockID() ids.ID GetMagic() uint64 GetPrice() uint64 SetBlockID(ids.ID) SetMagic(uint64) SetPrice(uint64) FeeUnits(*Genesis) uint64 // number of units to mine tx LoadUnits(*Genesis) uint64 // units that should impact fee rate ExecuteBase(*Genesis) error Execute(*TransactionContext) error TypedData() *tdata.TypedData Activity() *Activity } ``` #### BaseTx Common functionality and metadata for transaction types are implemented by [`BaseTx`](https://github.com/ava-labs/blobvm/blob/master/chain/base_tx.go). - [`SetBlockID`](https://github.com/ava-labs/blobvm/blob/master/chain/base_tx.go#L26) sets the transaction's block ID. - [`GetBlockID`](https://github.com/ava-labs/blobvm/blob/master/chain/base_tx.go#L22) returns the transaction's block ID. - [`SetMagic`](https://github.com/ava-labs/blobvm/blob/master/chain/base_tx.go#L34) sets the magic number. The magic number is used to differentiate chains to prevent replay attacks. - [`GetMagic`](https://github.com/ava-labs/blobvm/blob/master/chain/base_tx.go#L30) returns the magic number. The magic number is defined in genesis. - [`SetPrice`](https://github.com/ava-labs/blobvm/blob/master/chain/base_tx.go#L42) sets the price per fee unit for this transaction. - [`GetPrice`](https://github.com/ava-labs/blobvm/blob/master/chain/base_tx.go#L38) returns the price for this transaction. - [`FeeUnits`](https://github.com/ava-labs/blobvm/blob/master/chain/base_tx.go#L59) returns the fee units this transaction will consume. - [`LoadUnits`](https://github.com/ava-labs/blobvm/blob/master/chain/base_tx.go#L63) is identical to `FeeUnits`. - [`ExecuteBase`](https://github.com/ava-labs/blobvm/blob/master/chain/base_tx.go#L46) executes common validation checks across different transaction types. This validates that the transaction contains a valid block ID, magic number, and gas price as defined by genesis.
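The sharing of `BaseTx` functionality across transaction types works through Go struct embedding, which promotes the embedded type's methods onto the outer type. In miniature (illustrative names and fields, not BlobVM's actual definitions):

```go
package main

import "fmt"

// BaseTx supplies shared metadata accessors for all transaction types.
// (Sketch only; BlobVM's real BaseTx carries block ID, magic, and price.)
type BaseTx struct {
	price uint64
}

func (b *BaseTx) SetPrice(p uint64) { b.price = p }
func (b *BaseTx) GetPrice() uint64  { return b.price }

// TransferTx embeds *BaseTx, so the shared getters/setters are promoted
// onto it; the type only adds fields specific to transfers.
type TransferTx struct {
	*BaseTx
	Units uint64
}

func main() {
	tx := &TransferTx{BaseTx: &BaseTx{}, Units: 5}
	tx.SetPrice(42) // calls the promoted (*BaseTx).SetPrice
	fmt.Println(tx.GetPrice(), tx.Units)
}
```

This is why each concrete BlobVM transaction only implements its own `Execute` (and occasionally `FeeUnits`) while inheriting everything else.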
#### TransferTx [`TransferTx`](https://github.com/ava-labs/blobvm/blob/master/chain/transfer_tx.go#L16) supports the transfer of tokens from one account to another. ```go type TransferTx struct { *BaseTx `serialize:"true" json:"baseTx"` // To is the recipient of the [Units]. To common.Address `serialize:"true" json:"to"` // Units are transferred to [To]. Units uint64 `serialize:"true" json:"units"` } ``` `TransferTx` embeds `BaseTx` to avoid re-implementing common operations with other transactions, but implements its own [`Execute`](https://github.com/ava-labs/blobvm/blob/master/chain/transfer_tx.go#L26) to support token transfers. This performs a few checks to ensure that the transfer is valid before transferring the tokens between the two accounts. ```go func (t *TransferTx) Execute(c *TransactionContext) error { // Must transfer to someone if bytes.Equal(t.To[:], zeroAddress[:]) { return ErrNonActionable } // This prevents someone from transferring to themselves. if bytes.Equal(t.To[:], c.Sender[:]) { return ErrNonActionable } if t.Units == 0 { return ErrNonActionable } if _, err := ModifyBalance(c.Database, c.Sender, false, t.Units); err != nil { return err } if _, err := ModifyBalance(c.Database, t.To, true, t.Units); err != nil { return err } return nil } ``` #### SetTx `SetTx` is used to assign a value to a key. ```go type SetTx struct { *BaseTx `serialize:"true" json:"baseTx"` Value []byte `serialize:"true" json:"value"` } ``` `SetTx` implements its own [`FeeUnits`](https://github.com/ava-labs/blobvm/blob/master/chain/set_tx.go#L48) method to compensate the network according to the size of the blob being stored. ```go func (s *SetTx) FeeUnits(g *Genesis) uint64 { // We don't subtract by 1 here because we want to charge extra for any // value-based interaction (even if it is small or a delete). 
return s.BaseTx.FeeUnits(g) + valueUnits(g, uint64(len(s.Value))) } ``` `SetTx`'s [`Execute`](https://github.com/ava-labs/blobvm/blob/master/chain/set_tx.go#L21) method performs a few safety checks to validate that the blob meets the size constraints enforced by genesis and doesn't overwrite an existing key before writing it to the blockchain. ```go func (s *SetTx) Execute(t *TransactionContext) error { g := t.Genesis switch { case len(s.Value) == 0: return ErrValueEmpty case uint64(len(s.Value)) > g.MaxValueSize: return ErrValueTooBig } k := ValueHash(s.Value) // Do not allow duplicate value setting _, exists, err := GetValueMeta(t.Database, k) if err != nil { return err } if exists { return ErrKeyExists } return PutKey(t.Database, k, &ValueMeta{ Size: uint64(len(s.Value)), TxID: t.TxID, Created: t.BlockTime, }) } ``` #### Signed Transaction The unsigned transactions mentioned previously can't be issued to the network without first being signed. BlobVM implements signed transactions by embedding the unsigned transaction alongside its signature in [`Transaction`](https://github.com/ava-labs/blobvm/blob/master/chain/tx.go). In BlobVM, a signature is defined as the [ECDSA signature](https://en.wikipedia.org/wiki/Elliptic_Curve_Digital_Signature_Algorithm), produced with the issuer's private key, over the [KECCAK256](https://keccak.team/keccak.html) hash of the unsigned transaction's data (the [digest hash](https://eips.ethereum.org/EIPS/eip-712)). ```go type Transaction struct { UnsignedTransaction `serialize:"true" json:"unsignedTransaction"` Signature []byte `serialize:"true" json:"signature"` digestHash []byte bytes []byte id ids.ID size uint64 sender common.Address } ``` The `Transaction` type wraps any unsigned transaction. When a `Transaction` is executed, it calls the `Execute` method of the underlying embedded `UnsignedTx` and performs the following sanity checks: 1. The underlying `UnsignedTx` must meet the requirements set by genesis.
This includes checks to make sure that the transaction contains the correct magic number and meets the minimum gas price as defined by genesis 2. The transaction's block ID must be a recently accepted block 3. The transaction must not be a recently issued transaction 4. The issuer of the transaction must have enough balance to cover the transaction fee 5. The transaction's gas price must meet the next expected block's minimum gas price 6. The transaction must execute without error If the transaction is successfully verified, it's submitted as a pending write to the blockchain. ```go func (t *Transaction) Execute(g *Genesis, db database.Database, blk *StatelessBlock, context *Context) error { if err := t.UnsignedTransaction.ExecuteBase(g); err != nil { return err } if !context.RecentBlockIDs.Contains(t.GetBlockID()) { // Hash must be recent to be any good // Should not happen because of mempool cleanup return ErrInvalidBlockID } if context.RecentTxIDs.Contains(t.ID()) { // Tx hash must not be recently executed (otherwise could be replayed) // // NOTE: We only need to keep cached tx hashes around as long as the // block hash referenced in the tx is valid return ErrDuplicateTx } // Ensure sender has balance if _, err := ModifyBalance(db, t.sender, false, t.FeeUnits(g)*t.GetPrice()); err != nil { return err } if t.GetPrice() < context.NextPrice { return ErrInsufficientPrice } if err := t.UnsignedTransaction.Execute(&TransactionContext{ Genesis: g, Database: db, BlockTime: uint64(blk.Tmstmp), TxID: t.id, Sender: t.sender, }); err != nil { return err } if err := SetTransaction(db, t); err != nil { return err } return nil } ``` ##### Example Let's walk through an example of how to issue a `SetTx` transaction to the BlobVM to write a key-value pair. 1. Create the unsigned transaction for `SetTx` ```go utx := &chain.SetTx{ BaseTx: &chain.BaseTx{}, Value: []byte("data"), } utx.SetBlockID(lastAcceptedID) utx.SetMagic(genesis.Magic) utx.SetPrice(price + blockCost/utx.FeeUnits(genesis)) ``` 2.
Calculate the [digest hash](https://github.com/ava-labs/blobvm/blob/master/chain/tx.go#L41) for the transaction. ```go digest, err := chain.DigestHash(utx) ``` 3. [Sign](https://github.com/ava-labs/blobvm/blob/master/chain/crypto.go#L17) the digest hash with the issuer's private key. ```go signature, err := chain.Sign(digest, privateKey) ``` 4. Create and initialize the new signed transaction. ```go tx := chain.NewTx(utx, signature) if err := tx.Init(genesis); err != nil { return ids.Empty, 0, err } ``` 5. Issue the request with the client. ```go txID, err = cli.IssueRawTx(ctx, tx.Bytes()) ``` ### Mempool #### Overview The [mempool](https://github.com/ava-labs/blobvm/blob/master/mempool/mempool.go) is a buffer of volatile memory that stores pending transactions. Transactions are stored in the mempool whenever a node learns about a new transaction, either through gossip with other nodes or through an API call issued by a user. The mempool is implemented as a min-max [heap](https://en.wikipedia.org/wiki/Heap_data_structure) ordered by each transaction's gas price. The mempool is created during the [initialization](https://github.com/ava-labs/blobvm/blob/master/vm/vm.go#L93) of the VM. ```go vm.mempool = mempool.New(vm.genesis, vm.config.MempoolSize) ``` Whenever a transaction is submitted to the VM, it first gets initialized, verified, and executed locally. If the transaction looks valid, then it's added to the [mempool](https://github.com/ava-labs/blobvm/blob/master/vm/vm.go#L414). #### Add Method When a transaction is added to the mempool, [`Add`](https://github.com/ava-labs/blobvm/blob/master/mempool/mempool.go#L43) is called.
This performs the following:

- Checks whether the transaction being added already exists in the mempool
- The transaction is added to the min-max heap
- If the mempool's heap size is larger than the maximum configured value, then the lowest-paying transaction is evicted
- The transaction is added to the list of transactions that can be gossiped to other peers
- A notification is sent through the `mempool.Pending` channel to indicate that the consensus engine should build a new block

### Block Builder

#### Overview

The [`TimeBuilder`](https://github.com/ava-labs/blobvm/blob/master/vm/block_builder.go) implementation of `BlockBuilder` acts as an intermediary notification service between the mempool and the consensus engine. It serves the following functions:

- Periodically gossips new transactions to other nodes in the network
- Periodically notifies the consensus engine that new transactions from the mempool are ready to be built into blocks

`TimeBuilder` can exist in 3 states:

- `dontBuild` - There are no transactions in the mempool that are ready to be included in a block
- `building` - The consensus engine has been notified that it should build a block and there are currently transactions in the mempool that are eligible to be included in a block
- `mayBuild` - There are transactions in the mempool that are eligible to be included in a block, but the consensus engine has not been notified yet

#### Gossip Method

The [`Gossip`](https://github.com/ava-labs/blobvm/blob/master/vm/block_builder.go#L183) method periodically gossips new transactions from the mempool, at an interval defined by `vm.config.GossipInterval`.

#### Build Method

The [`Build`](https://github.com/ava-labs/blobvm/blob/master/vm/block_builder.go#L166) method consumes transactions from the mempool and signals the consensus engine when it's ready to build a block.
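The three builder states and the notification flow can be modeled as a small state machine. The following is a minimal, self-contained sketch; the type and method names here are illustrative assumptions, not the actual `block_builder.go` code:

```go
package main

import "fmt"

// buildStatus mirrors the three TimeBuilder states described above.
type buildStatus int

const (
	dontBuild buildStatus = iota // no eligible transactions in the mempool
	mayBuild                     // eligible transactions exist, engine not yet notified
	building                     // consensus engine has been notified to build a block
)

type timeBuilder struct {
	status  buildStatus
	pending chan struct{} // stands in for the common.PendingTxs notification
}

// signalTxsReady is invoked when the mempool reports eligible transactions.
func (b *timeBuilder) signalTxsReady() {
	if b.status == dontBuild {
		b.markBuilding()
	}
}

// markBuilding notifies the consensus engine and moves to the building state.
func (b *timeBuilder) markBuilding() {
	select {
	case b.pending <- struct{}{}:
		b.status = building
	default: // the engine already has a pending notification
	}
}

// handleGenerateBlock runs after BuildBlock: if transactions remain in the
// mempool, park in mayBuild until the engine can be notified again.
func (b *timeBuilder) handleGenerateBlock(txsRemaining bool) {
	if txsRemaining {
		b.status = mayBuild
	} else {
		b.status = dontBuild
	}
}

func main() {
	b := &timeBuilder{status: dontBuild, pending: make(chan struct{}, 1)}
	b.signalTxsReady()                // mempool has eligible transactions
	<-b.pending                       // consensus engine receives the notification
	fmt.Println(b.status == building) // true
	b.handleGenerateBlock(true)       // a block was built, transactions remain
	fmt.Println(b.status == mayBuild) // true
}
```

The real `TimeBuilder` layers gossip and build timers on top of this skeleton, but the state transitions follow the same shape.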
If the mempool signals the `TimeBuilder` that it has available transactions, `TimeBuilder` will signal consensus that the VM is ready to build a block by sending the consensus engine a `common.PendingTxs` message. When the consensus engine receives the `common.PendingTxs` message, it calls the VM's `BuildBlock` method. The VM will then build a block from eligible transactions in the mempool. If there are still remaining transactions in the mempool after a block is built, then the `TimeBuilder` is put into the `mayBuild` state to indicate that there are still transactions eligible to be built into a block, but the consensus engine isn't aware of them yet.

### Network

[Network](https://github.com/ava-labs/blobvm/blob/master/vm/network.go) handles the workflow of gossiping transactions from a node's mempool to other nodes in the network.

#### GossipNewTxs Method

`GossipNewTxs` sends a list of transactions to other nodes in the network. `TimeBuilder` calls the network's `GossipNewTxs` function to gossip new transactions in the mempool.

```go
func (n *PushNetwork) GossipNewTxs(newTxs []*chain.Transaction) error {
	txs := []*chain.Transaction{}
	// Gossip at most the target units of a block at once
	for _, tx := range newTxs {
		if _, exists := n.gossipedTxs.Get(tx.ID()); exists {
			log.Debug("already gossiped, skipping", "txId", tx.ID())
			continue
		}
		n.gossipedTxs.Put(tx.ID(), nil)
		txs = append(txs, tx)
	}
	return n.sendTxs(txs)
}
```

Recently gossiped transactions are maintained in a cache to avoid DDoSing a node through repeated gossip failures. Other nodes in the network will receive the gossiped transactions through their `AppGossip` handler. Once a gossip message is received, it's deserialized and the new transactions are submitted to the VM.
```go
func (vm *VM) AppGossip(nodeID ids.NodeID, msg []byte) error {
	txs := make([]*chain.Transaction, 0)
	if _, err := chain.Unmarshal(msg, &txs); err != nil {
		return nil
	}

	// submit incoming gossip
	log.Debug("AppGossip transactions are being submitted", "txs", len(txs))
	if errs := vm.Submit(txs...); len(errs) > 0 {
		for _, err := range errs {
			log.Debug("failed to submit gossiped transaction", "err", err)
		}
	}
	return nil
}
```

### Block

Blocks go through a lifecycle of being proposed by a validator, verified, and decided by consensus. Upon acceptance, a block is committed and finalized on the blockchain. BlobVM implements two types of blocks, `StatefulBlock` and `StatelessBlock`.

#### StatefulBlock

A [`StatefulBlock`](https://github.com/ava-labs/blobvm/blob/master/chain/block.go#L27) contains only the metadata about the block that needs to be written to the database.

```go
type StatefulBlock struct {
	Prnt        ids.ID         `serialize:"true" json:"parent"`
	Tmstmp      int64          `serialize:"true" json:"timestamp"`
	Hght        uint64         `serialize:"true" json:"height"`
	Price       uint64         `serialize:"true" json:"price"`
	Cost        uint64         `serialize:"true" json:"cost"`
	AccessProof common.Hash    `serialize:"true" json:"accessProof"`
	Txs         []*Transaction `serialize:"true" json:"txs"`
}
```

#### StatelessBlock

[`StatelessBlock`](https://github.com/ava-labs/blobvm/blob/master/chain/block.go#L40) is a superset of `StatefulBlock` and additionally contains fields that are needed to support block-level operations like verification and acceptance throughout its lifecycle in the VM.

```go
type StatelessBlock struct {
	*StatefulBlock `serialize:"true" json:"block"`

	id         ids.ID
	st         choices.Status
	t          time.Time
	bytes      []byte
	vm         VM
	children   []*StatelessBlock
	onAcceptDB *versiondb.Database
}
```

Let's have a look at the fields of `StatelessBlock`:

- `StatefulBlock`: The metadata about the block that will be written to the database upon acceptance
- `bytes`: The serialized form of the `StatefulBlock`.
- `id`: The Keccak256 hash of `bytes`.
- `st`: The status of the block in consensus (i.e., `Processing`, `Accepted`, or `Rejected`)
- `children`: The children of this block
- `onAcceptDB`: The database this block should be written to upon acceptance.

When the consensus engine tries to build a block by calling the VM's `BuildBlock`, the VM calls the [`block.NewBlock`](https://github.com/ava-labs/blobvm/blob/master/chain/block.go#L53) function to get a new block that is a child of the currently preferred block.

```go
func NewBlock(vm VM, parent snowman.Block, tmstp int64, context *Context) *StatelessBlock {
	return &StatelessBlock{
		StatefulBlock: &StatefulBlock{
			Tmstmp: tmstp,
			Prnt:   parent.ID(),
			Hght:   parent.Height() + 1,
			Price:  context.NextPrice,
			Cost:   context.NextCost,
		},
		vm: vm,
		st: choices.Processing,
	}
}
```

Some `StatelessBlock` fields like the block ID, byte representation, and timestamp aren't populated immediately. These are set during the `StatelessBlock`'s [`init`](https://github.com/ava-labs/blobvm/blob/master/chain/block.go#L113) method, which initializes these fields once the block has been populated with transactions.

```go
func (b *StatelessBlock) init() error {
	bytes, err := Marshal(b.StatefulBlock)
	if err != nil {
		return err
	}
	b.bytes = bytes

	id, err := ids.ToID(crypto.Keccak256(b.bytes))
	if err != nil {
		return err
	}
	b.id = id
	b.t = time.Unix(b.StatefulBlock.Tmstmp, 0)
	g := b.vm.Genesis()
	for _, tx := range b.StatefulBlock.Txs {
		if err := tx.Init(g); err != nil {
			return err
		}
	}
	return nil
}
```

To build the block, the VM removes the highest-paying transactions from the mempool and includes them in the new block until the maximum block size set by genesis is reached. Once built, a block can exist in one of two states:

1. Rejected: The block was not accepted by consensus. In this case, the mempool will reclaim the rejected block's transactions so they can be included in a future block.
2. Accepted: The block was accepted by consensus.
In this case, we write the block to the blockchain by committing it to the database.

When the consensus engine receives the built block, it calls the block's [`Verify`](https://github.com/ava-labs/blobvm/blob/master/chain/block.go#L228) method to validate that the block is well-formed. In BlobVM, the following constraints are placed on valid blocks:

1. A block must contain at least one transaction, and the block's timestamp must be at most 10 seconds in the future.

```go
if len(b.Txs) == 0 {
	return nil, nil, ErrNoTxs
}
if b.Timestamp().Unix() >= time.Now().Add(futureBound).Unix() {
	return nil, nil, ErrTimestampTooLate
}
```

2. The sum of the load units consumed by the transactions in the block must not exceed the maximum block size defined by genesis.

```go
blockSize := uint64(0)
for _, tx := range b.Txs {
	blockSize += tx.LoadUnits(g)
	if blockSize > g.MaxBlockSize {
		return nil, nil, ErrBlockTooBig
	}
}
```

3. The parent block of the proposed block must exist and have an earlier timestamp.

```go
parent, err := b.vm.GetStatelessBlock(b.Prnt)
if err != nil {
	log.Debug("could not get parent", "id", b.Prnt)
	return nil, nil, err
}
if b.Timestamp().Unix() < parent.Timestamp().Unix() {
	return nil, nil, ErrTimestampTooEarly
}
```

4. The block's cost and price must match the values computed in the VM's execution context.

```go
context, err := b.vm.ExecutionContext(b.Tmstmp, parent)
if err != nil {
	return nil, nil, err
}
if b.Cost != context.NextCost {
	return nil, nil, ErrInvalidCost
}
if b.Price != context.NextPrice {
	return nil, nil, ErrInvalidPrice
}
```

Once consensus completes, the block is either accepted by committing it to the database or rejected by returning its transactions to the mempool.
```go // implements "snowman.Block.choices.Decidable" func (b *StatelessBlock) Accept() error { if err := b.onAcceptDB.Commit(); err != nil { return err } for _, child := range b.children { if err := child.onAcceptDB.SetDatabase(b.vm.State()); err != nil { return err } } b.st = choices.Accepted b.vm.Accepted(b) return nil } // implements "snowman.Block.choices.Decidable" func (b *StatelessBlock) Reject() error { b.st = choices.Rejected b.vm.Rejected(b) return nil } ``` ### API [Service](https://github.com/ava-labs/blobvm/blob/master/vm/public_service.go) implements an API server so users can interact with the VM. The VM implements the interface method [`CreateHandlers`](https://github.com/ava-labs/blobvm/blob/master/vm/vm.go#L267) that exposes the VM's RPC API. ```go func (vm *VM) CreateHandlers() (map[string]*common.HTTPHandler, error) { apis := map[string]*common.HTTPHandler{} public, err := newHandler(Name, &PublicService{vm: vm}) if err != nil { return nil, err } apis[PublicEndpoint] = public return apis, nil } ``` One API that's exposed is [`IssueRawTx`](https://github.com/ava-labs/blobvm/blob/master/vm/public_service.go#L63) to allow users to issue transactions to the BlobVM. It accepts an `IssueRawTxArgs` that contains the transaction the user wants to issue and forwards it to the VM. ```go func (svc *PublicService) IssueRawTx(_ *http.Request, args *IssueRawTxArgs, reply *IssueRawTxReply) error { tx := new(chain.Transaction) if _, err := chain.Unmarshal(args.Tx, tx); err != nil { return err } // otherwise, unexported tx.id field is empty if err := tx.Init(svc.vm.genesis); err != nil { return err } reply.TxID = tx.ID() errs := svc.vm.Submit(tx) if len(errs) == 0 { return nil } if len(errs) == 1 { return errs[0] } return fmt.Errorf("%v", errs) } ``` ### Virtual Machine We have learned about all the components used in the BlobVM. 
Most of these components are referenced in the `vm.go` file, which acts as the entry point for the consensus engine as well as for users interacting with the blockchain. For example, the engine calls `vm.BuildBlock()`, which in turn calls `chain.BuildBlock()`. Another example: when a user issues a raw transaction through the service APIs, the `vm.Submit()` method is called.

Let's look at some of the important methods of `vm.go` that must be implemented:

#### Initialize Method

[Initialize](https://github.com/ava-labs/blobvm/blob/master/vm/vm.go#L93) is invoked by `avalanchego` when creating the blockchain. `avalanchego` passes some parameters to the implementing VM.

- `ctx` - Metadata about the VM's execution
- `dbManager` - The database that the VM can write to
- `genesisBytes` - The serialized representation of the genesis state of this VM
- `upgradeBytes` - The serialized representation of network upgrades
- `configBytes` - The serialized VM-specific [configuration](https://github.com/ava-labs/blobvm/blob/master/vm/config.go#L10)
- `toEngine` - The channel used to send messages to the consensus engine
- `fxs` - Feature extensions that attach to this VM
- `appSender` - Used to send messages to other nodes in the network

Upon initialization, BlobVM persists these parameters in its own state to use them throughout the lifetime of its execution.
```go // implements "snowmanblock.ChainVM.common.VM" func (vm *VM) Initialize( ctx *snow.Context, dbManager manager.Manager, genesisBytes []byte, upgradeBytes []byte, configBytes []byte, toEngine chan<- common.Message, _ []*common.Fx, appSender common.AppSender, ) error { log.Info("initializing blobvm", "version", version.Version) // Load config vm.config.SetDefaults() if len(configBytes) > 0 { if err := ejson.Unmarshal(configBytes, &vm.config); err != nil { return fmt.Errorf("failed to unmarshal config %s: %w", string(configBytes), err) } } vm.ctx = ctx vm.db = dbManager.Current().Database vm.activityCache = make([]*chain.Activity, vm.config.ActivityCacheSize) // Init channels before initializing other structs vm.stop = make(chan struct{}) vm.builderStop = make(chan struct{}) vm.doneBuild = make(chan struct{}) vm.doneGossip = make(chan struct{}) vm.appSender = appSender vm.network = vm.NewPushNetwork() vm.blocks = &cache.LRU{Size: blocksLRUSize} vm.verifiedBlocks = make(map[ids.ID]*chain.StatelessBlock) vm.toEngine = toEngine vm.builder = vm.NewTimeBuilder() // Try to load last accepted has, err := chain.HasLastAccepted(vm.db) if err != nil { log.Error("could not determine if have last accepted") return err } // Parse genesis data vm.genesis = new(chain.Genesis) if err := ejson.Unmarshal(genesisBytes, vm.genesis); err != nil { log.Error("could not unmarshal genesis bytes") return err } if err := vm.genesis.Verify(); err != nil { log.Error("genesis is invalid") return err } targetUnitsPerSecond := vm.genesis.TargetBlockSize / uint64(vm.genesis.TargetBlockRate) vm.targetRangeUnits = targetUnitsPerSecond * uint64(vm.genesis.LookbackWindow) log.Debug("loaded genesis", "genesis", string(genesisBytes), "target range units", vm.targetRangeUnits) vm.mempool = mempool.New(vm.genesis, vm.config.MempoolSize) if has { //nolint:nestif blkID, err := chain.GetLastAccepted(vm.db) if err != nil { log.Error("could not get last accepted", "err", err) return err } blk, err := 
vm.GetStatelessBlock(blkID) if err != nil { log.Error("could not load last accepted", "err", err) return err } vm.preferred, vm.lastAccepted = blkID, blk log.Info("initialized blobvm from last accepted", "block", blkID) } else { genesisBlk, err := chain.ParseStatefulBlock( vm.genesis.StatefulBlock(), nil, choices.Accepted, vm, ) if err != nil { log.Error("unable to init genesis block", "err", err) return err } // Set Balances if err := vm.genesis.Load(vm.db, vm.AirdropData); err != nil { log.Error("could not set genesis allocation", "err", err) return err } if err := chain.SetLastAccepted(vm.db, genesisBlk); err != nil { log.Error("could not set genesis as last accepted", "err", err) return err } gBlkID := genesisBlk.ID() vm.preferred, vm.lastAccepted = gBlkID, genesisBlk log.Info("initialized blobvm from genesis", "block", gBlkID) } vm.AirdropData = nil } ```

After initializing its own state, BlobVM also starts asynchronous workers to build blocks and gossip transactions to the rest of the network.

```go
{
	go vm.builder.Build()
	go vm.builder.Gossip()
	return nil
}
```

#### GetBlock Method

[`GetBlock`](https://github.com/ava-labs/blobvm/blob/master/vm/vm.go#L318) returns the block with the provided ID. `GetBlock` will attempt to fetch the given block from the database and return a non-nil error if it wasn't able to get it.

```go
func (vm *VM) GetBlock(id ids.ID) (snowman.Block, error) {
	b, err := vm.GetStatelessBlock(id)
	if err != nil {
		log.Warn("failed to get block", "err", err)
	}
	return b, err
}
```

#### ParseBlock Method

[`ParseBlock`](https://github.com/ava-labs/blobvm/blob/master/vm/vm.go#L373) deserializes a block.
```go func (vm *VM) ParseBlock(source []byte) (snowman.Block, error) { newBlk, err := chain.ParseBlock( source, choices.Processing, vm, ) if err != nil { log.Error("could not parse block", "err", err) return nil, err } log.Debug("parsed block", "id", newBlk.ID()) // If we have seen this block before, return it with the most // up-to-date info if oldBlk, err := vm.GetBlock(newBlk.ID()); err == nil { log.Debug("returning previously parsed block", "id", oldBlk.ID()) return oldBlk, nil } return newBlk, nil } ``` #### BuildBlock Method Avalanche consensus calls [`BuildBlock`](https://github.com/ava-labs/blobvm/blob/master/vm/vm.go#L397) when it receives a notification from the VM that it has pending transactions that are ready to be issued into a block. ```go func (vm *VM) BuildBlock() (snowman.Block, error) { log.Debug("BuildBlock triggered") blk, err := chain.BuildBlock(vm, vm.preferred) vm.builder.HandleGenerateBlock() if err != nil { log.Debug("BuildBlock failed", "error", err) return nil, err } sblk, ok := blk.(*chain.StatelessBlock) if !ok { return nil, fmt.Errorf("unexpected snowman.Block %T, expected *StatelessBlock", blk) } log.Debug("BuildBlock success", "blkID", blk.ID(), "txs", len(sblk.Txs)) return blk, nil } ``` #### SetPreference Method [`SetPreference`](https://github.com/ava-labs/blobvm/blob/master/vm/vm.go#L457) sets the block ID preferred by this node. A node votes to accept or reject a block based on its current preference in consensus. ```go func (vm *VM) SetPreference(id ids.ID) error { log.Debug("set preference", "id", id) vm.preferred = id return nil } ``` #### LastAccepted Method [LastAccepted](https://github.com/ava-labs/blobvm/blob/master/vm/vm.go#L465) returns the block ID of the block that was most recently accepted by this node. 
```go
func (vm *VM) LastAccepted() (ids.ID, error) {
	return vm.lastAccepted.ID(), nil
}
```

### CLI

BlobVM implements a generic key-value store, but support for reading and writing arbitrary files on the BlobVM blockchain is implemented in `blob-cli`.

To write a file, BlobVM breaks apart an arbitrarily large file into many small chunks. Each chunk is submitted to the VM in a `SetTx`. A root key is generated which contains all of the hashes of the chunks.

```go func Upload( ctx context.Context, cli client.Client, priv *ecdsa.PrivateKey, f io.Reader, chunkSize int, ) (common.Hash, error) { hashes := []common.Hash{} chunk := make([]byte, chunkSize) shouldExit := false opts := []client.OpOption{client.WithPollTx()} totalCost := uint64(0) uploaded := map[common.Hash]struct{}{} for !shouldExit { read, err := f.Read(chunk) if errors.Is(err, io.EOF) || read == 0 { break } if err != nil { return common.Hash{}, fmt.Errorf("%w: read error", err) } if read < chunkSize { shouldExit = true chunk = chunk[:read] // Use small file optimization if len(hashes) == 0 { break } } k := chain.ValueHash(chunk) if _, ok := uploaded[k]; ok { color.Yellow("already uploaded k=%s, skipping", k) } else if exists, _, _, err := cli.Resolve(ctx, k); err == nil && exists { color.Yellow("already on-chain k=%s, skipping", k) uploaded[k] = struct{}{} } else { tx := &chain.SetTx{ BaseTx: &chain.BaseTx{}, Value: chunk, } txID, cost, err := client.SignIssueRawTx(ctx, cli, tx, priv, opts...)
if err != nil { return common.Hash{}, err } totalCost += cost color.Yellow("uploaded k=%s txID=%s cost=%d totalCost=%d", k, txID, cost, totalCost) uploaded[k] = struct{}{} } hashes = append(hashes, k) } r := &Root{} if len(hashes) == 0 { if len(chunk) == 0 { return common.Hash{}, ErrEmpty } r.Contents = chunk } else { r.Children = hashes } rb, err := json.Marshal(r) if err != nil { return common.Hash{}, err } rk := chain.ValueHash(rb) tx := &chain.SetTx{ BaseTx: &chain.BaseTx{}, Value: rb, } txID, cost, err := client.SignIssueRawTx(ctx, cli, tx, priv, opts...) if err != nil { return common.Hash{}, err } totalCost += cost color.Yellow("uploaded root=%v txID=%s cost=%d totalCost=%d", rk, txID, cost, totalCost) return rk, nil } ``` #### Example 1 ```bash blob-cli set-file ~/Downloads/computer.gif -> 6fe5a52f52b34fb1e07ba90bad47811c645176d0d49ef0c7a7b4b22013f676c8 ``` Given the root hash, a file can be looked up by deserializing all of its children chunk values and reconstructing the original file. 
```go // TODO: make multi-threaded func Download(ctx context.Context, cli client.Client, root common.Hash, f io.Writer) error { exists, rb, _, err := cli.Resolve(ctx, root) if err != nil { return err } if !exists { return fmt.Errorf("%w:%v", ErrMissing, root) } var r Root if err := json.Unmarshal(rb, &r); err != nil { return err } // Use small file optimization if contentLen := len(r.Contents); contentLen > 0 { if _, err := f.Write(r.Contents); err != nil { return err } color.Yellow("downloaded root=%v size=%fKB", root, float64(contentLen)/units.KiB) return nil } if len(r.Children) == 0 { return ErrEmpty } amountDownloaded := 0 for _, h := range r.Children { exists, b, _, err := cli.Resolve(ctx, h) if err != nil { return err } if !exists { return fmt.Errorf("%w:%s", ErrMissing, h) } if _, err := f.Write(b); err != nil { return err } size := len(b) color.Yellow("downloaded chunk=%v size=%fKB", h, float64(size)/units.KiB) amountDownloaded += size } color.Yellow("download complete root=%v size=%fMB", root, float64(amountDownloaded)/units.MiB) return nil } ```

#### Example 2

```bash
blob-cli resolve-file 6fe5a52f52b34fb1e07ba90bad47811c645176d0d49ef0c7a7b4b22013f676c8 computer_copy.gif
```

## Conclusion

This documentation covers core Virtual Machine concepts by walking through a VM that implements a decentralized key-value store. You can learn more about the BlobVM by referencing the [README](https://github.com/ava-labs/blobvm/blob/master/README.md) in the GitHub repository.

# Simple Golang VM (/docs/avalanche-l1s/golang-vms/simple-golang-vm)

---
title: Simple Golang VM
description: In this tutorial, we will learn how to build a virtual machine by referencing the TimestampVM.
---

In this tutorial, we'll create a very simple VM called the [TimestampVM](https://github.com/ava-labs/timestampvm/tree/v1.2.1). Each block in the TimestampVM's blockchain contains the timestamp when the block was created (timestamps strictly increase from block to block) and a 32-byte payload of data.
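Because the payload is a fixed 32 bytes, larger documents are committed by storing a hash of the document rather than the document itself; SHA-256 conveniently produces exactly 32 bytes. A minimal sketch (illustrative only, not part of the TimestampVM codebase):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// blockPayload derives a fixed 32-byte TimestampVM payload from an
// arbitrarily large document by hashing it.
func blockPayload(document []byte) [32]byte {
	return sha256.Sum256(document)
}

func main() {
	manuscript := []byte("It was a dark and stormy night...")
	payload := blockPayload(manuscript)
	fmt.Printf("payload is %d bytes: %x...\n", len(payload), payload[:4])
}
```

Anyone holding the original document can later recompute the hash and compare it against the payload stored in the block.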
Such a server is useful because it can be used to prove a piece of data existed at the time the block was created. Suppose you have a book manuscript, and you want to be able to prove in the future that the manuscript exists today. You can add a block to the blockchain where the block's payload is a hash of your manuscript. In the future, you can prove that the manuscript existed today by showing that the block has the hash of your manuscript in its payload (this follows from the fact that finding the pre-image of a hash is computationally infeasible).

## TimestampVM Implementation

Now we know the interface our VM must implement and the libraries we can use to build a VM. Let's write our VM, which implements `block.ChainVM` and whose blocks implement `snowman.Block`. You can also follow the code in the [TimestampVM repository](https://github.com/ava-labs/timestampvm/tree/main).

### Codec

`Codec` is required to encode/decode the block into its byte representation. TimestampVM uses the default codec and manager.

```go title="timestampvm/codec.go"
const (
	// CodecVersion is the current default codec version
	CodecVersion = 0
)

// Codecs do serialization and deserialization
var (
	Codec codec.Manager
)

func init() {
	// Create default codec and manager
	c := linearcodec.NewDefault()
	Codec = codec.NewDefaultManager()

	// Register codec to manager with CodecVersion
	if err := Codec.RegisterCodec(CodecVersion, c); err != nil {
		panic(err)
	}
}
```

### State

The `State` interface defines the database layer and connections. Each VM should define its own database methods. `State` embeds the `BlockState`, which defines block-related state operations.

```go title="timestampvm/state.go" var ( // These are prefixes for db keys. // It's important to set different prefixes for each separate database object.
singletonStatePrefix = []byte("singleton") blockStatePrefix = []byte("block") _ State = &state{} ) // State is a wrapper around avax.SingletonState and BlockState // State also exposes a few methods needed for managing database commits and close. type State interface { // SingletonState is defined in avalanchego, // it is used to understand if db is initialized already. avax.SingletonState BlockState Commit() error Close() error } type state struct { avax.SingletonState BlockState baseDB *versiondb.Database } func NewState(db database.Database, vm *VM) State { // create a new baseDB baseDB := versiondb.New(db) // create a prefixed "blockDB" from baseDB blockDB := prefixdb.New(blockStatePrefix, baseDB) // create a prefixed "singletonDB" from baseDB singletonDB := prefixdb.New(singletonStatePrefix, baseDB) // return state with created sub state components return &state{ BlockState: NewBlockState(blockDB, vm), SingletonState: avax.NewSingletonState(singletonDB), baseDB: baseDB, } } // Commit commits pending operations to baseDB func (s *state) Commit() error { return s.baseDB.Commit() } // Close closes the underlying base database func (s *state) Close() error { return s.baseDB.Close() } ``` #### Block State This interface and implementation provides storage functions to the VM to store and retrieve blocks. ```go title="timestampvm/block_state.go" const ( lastAcceptedByte byte = iota ) const ( // maximum block capacity of the cache blockCacheSize = 8192 ) // persists lastAccepted block IDs with this key var lastAcceptedKey = []byte{lastAcceptedByte} var _ BlockState = &blockState{} // BlockState defines methods to manage state with Blocks and LastAcceptedIDs. type BlockState interface { GetBlock(blkID ids.ID) (*Block, error) PutBlock(blk *Block) error GetLastAccepted() (ids.ID, error) SetLastAccepted(ids.ID) error } // blockState implements the BlockState interface with database and cache.
type blockState struct { // cache to store blocks blkCache cache.Cacher // block database blockDB database.Database lastAccepted ids.ID // vm reference vm *VM } // blkWrapper wraps the actual blk bytes and status to persist them together type blkWrapper struct { Blk []byte `serialize:"true"` Status choices.Status `serialize:"true"` } // NewBlockState returns BlockState with a new cache and given db func NewBlockState(db database.Database, vm *VM) BlockState { return &blockState{ blkCache: &cache.LRU{Size: blockCacheSize}, blockDB: db, vm: vm, } } // GetBlock gets Block from either cache or database func (s *blockState) GetBlock(blkID ids.ID) (*Block, error) { // Check if cache has this blkID if blkIntf, cached := s.blkCache.Get(blkID); cached { // there is a key but value is nil, so return an error if blkIntf == nil { return nil, database.ErrNotFound } // We found it return the block in cache return blkIntf.(*Block), nil } // get block bytes from db with the blkID key wrappedBytes, err := s.blockDB.Get(blkID[:]) if err != nil { // we could not find it in the db, let's cache this blkID with nil value // so next time we try to fetch the same key we can return error // without hitting the database if err == database.ErrNotFound { s.blkCache.Put(blkID, nil) } // could not find the block, return error return nil, err } // first decode/unmarshal the block wrapper so we can have status and block bytes blkw := blkWrapper{} if _, err := Codec.Unmarshal(wrappedBytes, &blkw); err != nil { return nil, err } // now decode/unmarshal the actual block bytes to block blk := &Block{} if _, err := Codec.Unmarshal(blkw.Blk, blk); err != nil { return nil, err } // initialize block with block bytes, status and vm blk.Initialize(blkw.Blk, blkw.Status, s.vm) // put block into cache s.blkCache.Put(blkID, blk) return blk, nil } // PutBlock puts block into both database and cache func (s *blockState) PutBlock(blk *Block) error { // create block wrapper with block bytes and status blkw := 
blkWrapper{ Blk: blk.Bytes(), Status: blk.Status(), } // encode block wrapper to its byte representation wrappedBytes, err := Codec.Marshal(CodecVersion, &blkw) if err != nil { return err } blkID := blk.ID() // put actual block to cache, so we can directly fetch it from cache s.blkCache.Put(blkID, blk) // put wrapped block bytes into database return s.blockDB.Put(blkID[:], wrappedBytes) } // DeleteBlock deletes block from both cache and database func (s *blockState) DeleteBlock(blkID ids.ID) error { s.blkCache.Put(blkID, nil) return s.blockDB.Delete(blkID[:]) } // GetLastAccepted returns last accepted block ID func (s *blockState) GetLastAccepted() (ids.ID, error) { // check if we already have lastAccepted ID in state memory if s.lastAccepted != ids.Empty { return s.lastAccepted, nil } // get lastAccepted bytes from database with the fixed lastAcceptedKey lastAcceptedBytes, err := s.blockDB.Get(lastAcceptedKey) if err != nil { return ids.ID{}, err } // parse bytes to ID lastAccepted, err := ids.ToID(lastAcceptedBytes) if err != nil { return ids.ID{}, err } // put lastAccepted ID into memory s.lastAccepted = lastAccepted return lastAccepted, nil } // SetLastAccepted persists lastAccepted ID into both cache and database func (s *blockState) SetLastAccepted(lastAccepted ids.ID) error { // if the ID in memory and the given ID are the same, don't do anything if s.lastAccepted == lastAccepted { return nil } // put lastAccepted ID to memory s.lastAccepted = lastAccepted // persist lastAccepted ID to database with fixed lastAcceptedKey return s.blockDB.Put(lastAcceptedKey, lastAccepted[:]) } ``` ### Block Let's look at our block implementation. The type declaration is: ```go title="timestampvm/block.go" // Block is a block on the chain.
// Each block contains: // 1) ParentID // 2) Height // 3) Timestamp // 4) A piece of data (a string) type Block struct { PrntID ids.ID `serialize:"true" json:"parentID"` // parent's ID Hght uint64 `serialize:"true" json:"height"` // This block's height. The genesis block is at height 0. Tmstmp int64 `serialize:"true" json:"timestamp"` // Time this block was proposed at Dt [dataLen]byte `serialize:"true" json:"data"` // Arbitrary data id ids.ID // hold this block's ID bytes []byte // this block's encoded bytes status choices.Status // block's status vm *VM // the underlying VM reference, mostly used for state } ``` The `serialize:"true"` tag indicates that the field should be included in the byte representation of the block used when persisting the block or sending it to other nodes. #### Verify This method verifies that a block is valid and stores it in memory. It is important to store verified blocks in memory and return them from the `vm.GetBlock` method. ```go title="timestampvm/block.go" // Verify returns nil iff this block is valid. // To be valid, it must be that: // b.parent.Timestamp < b.Timestamp <= [local time] + 1 hour func (b *Block) Verify() error { // Get [b]'s parent parentID := b.Parent() parent, err := b.vm.getBlock(parentID) if err != nil { return errDatabaseGet } // Ensure [b]'s height comes right after its parent's height if expectedHeight := parent.Height() + 1; expectedHeight != b.Hght { return fmt.Errorf( "expected block to have height %d, but found %d", expectedHeight, b.Hght, ) } // Ensure [b]'s timestamp is after its parent's timestamp.
if b.Timestamp().Unix() < parent.Timestamp().Unix() { return errTimestampTooEarly } // Ensure [b]'s timestamp is not more than an hour // ahead of this node's time if b.Timestamp().Unix() >= time.Now().Add(time.Hour).Unix() { return errTimestampTooLate } // Put that block to verified blocks in memory b.vm.verifiedBlocks[b.ID()] = b return nil } ``` #### Accept `Accept` is called by the consensus to indicate this block is accepted. ```go title="timestampvm/block.go" // Accept sets this block's status to Accepted and sets lastAccepted to this // block's ID and saves this info to b.vm.DB func (b *Block) Accept() error { b.SetStatus(choices.Accepted) // Change state of this block blkID := b.ID() // Persist data if err := b.vm.state.PutBlock(b); err != nil { return err } // Set last accepted ID to this block ID if err := b.vm.state.SetLastAccepted(blkID); err != nil { return err } // Delete this block from verified blocks as it's accepted delete(b.vm.verifiedBlocks, b.ID()) // Commit changes to database return b.vm.state.Commit() } ``` #### Reject `Reject` is called by the consensus to indicate this block is rejected. ```go title="timestampvm/block.go" // Reject sets this block's status to Rejected and saves the status in state // Recall that b.vm.DB.Commit() must be called to persist to the DB func (b *Block) Reject() error { b.SetStatus(choices.Rejected) // Change state of this block if err := b.vm.state.PutBlock(b); err != nil { return err } // Delete this block from verified blocks as it's rejected delete(b.vm.verifiedBlocks, b.ID()) // Commit changes to database return b.vm.state.Commit() } ``` #### Block Field Methods These methods are required by the `snowman.Block` interface. ```go title="timestampvm/block.go" // ID returns the ID of this block func (b *Block) ID() ids.ID { return b.id } // ParentID returns [b]'s parent's ID func (b *Block) Parent() ids.ID { return b.PrntID } // Height returns this block's height. The genesis block has height 0. 
func (b *Block) Height() uint64 { return b.Hght }

// Timestamp returns this block's time. The genesis block has time 0.
func (b *Block) Timestamp() time.Time { return time.Unix(b.Tmstmp, 0) }

// Status returns the status of this block
func (b *Block) Status() choices.Status { return b.status }

// Bytes returns the byte repr. of this block
func (b *Block) Bytes() []byte { return b.bytes }
```

#### Helper Functions

These methods are convenience methods for blocks; they are not part of the block interface.

```go
// Initialize sets [b.bytes] to [bytes], [b.id] to hash([b.bytes]),
// [b.status] to [status] and [b.vm] to [vm]
func (b *Block) Initialize(bytes []byte, status choices.Status, vm *VM) {
	b.bytes = bytes
	b.id = hashing.ComputeHash256Array(b.bytes)
	b.status = status
	b.vm = vm
}

// SetStatus sets the status of this block
func (b *Block) SetStatus(status choices.Status) { b.status = status }
```

### Virtual Machine

Now, let's look at our timestamp VM implementation, which implements the `block.ChainVM` interface. The declaration is:

```go title="timestampvm/vm.go"
// This Virtual Machine defines a blockchain that acts as a timestamp server
// Each block contains data (a payload) and the timestamp when it was created

const (
	dataLen = 32
	Name    = "timestampvm"
)

// VM implements the snowman.VM interface
// Each block in this chain contains a Unix timestamp
// and a piece of data (a string)
type VM struct {
	// The context of this vm
	ctx       *snow.Context
	dbManager manager.Manager

	// State of this VM
	state State

	// ID of the preferred block
	preferred ids.ID

	// channel to send messages to the consensus engine
	toEngine chan<- common.Message

	// Proposed pieces of data that haven't been put into a block and proposed yet
	mempool [][dataLen]byte

	// Block ID --> Block
	// Each element is a block that passed verification but
	// hasn't yet been accepted/rejected
	verifiedBlocks map[ids.ID]*Block
}
```

#### Initialize

This method is called when a new instance of the VM is initialized.
The genesis block is created in this method.

```go title="timestampvm/vm.go"
// Initialize this vm
// [ctx] is this vm's context
// [dbManager] is the manager of this vm's database
// [toEngine] is used to notify the consensus engine that new blocks are
// ready to be added to consensus
// The data in the genesis block is [genesisData]
func (vm *VM) Initialize(
	ctx *snow.Context,
	dbManager manager.Manager,
	genesisData []byte,
	upgradeData []byte,
	configData []byte,
	toEngine chan<- common.Message,
	_ []*common.Fx,
	_ common.AppSender,
) error {
	version, err := vm.Version()
	if err != nil {
		log.Error("error initializing Timestamp VM: %v", err)
		return err
	}
	log.Info("Initializing Timestamp VM", "Version", version)

	vm.dbManager = dbManager
	vm.ctx = ctx
	vm.toEngine = toEngine
	vm.verifiedBlocks = make(map[ids.ID]*Block)

	// Create new state
	vm.state = NewState(vm.dbManager.Current().Database, vm)

	// Initialize genesis
	if err := vm.initGenesis(genesisData); err != nil {
		return err
	}

	// Get last accepted
	lastAccepted, err := vm.state.GetLastAccepted()
	if err != nil {
		return err
	}

	ctx.Log.Info("initializing last accepted block as %s", lastAccepted)

	// Build off the most recently accepted block
	return vm.SetPreference(lastAccepted)
}
```

#### `initGenesis`

`initGenesis` is a helper method that initializes the genesis block from the given bytes and puts it into state.

```go title="timestampvm/vm.go"
// Initializes Genesis if required
func (vm *VM) initGenesis(genesisData []byte) error {
	stateInitialized, err := vm.state.IsInitialized()
	if err != nil {
		return err
	}

	// if state is already initialized, skip init genesis.
	if stateInitialized {
		return nil
	}

	if len(genesisData) > dataLen {
		return errBadGenesisBytes
	}

	// genesisData is a byte slice but each block contains a byte array
	// Take the first [dataLen] bytes from genesisData and put them in an array
	var genesisDataArr [dataLen]byte
	copy(genesisDataArr[:], genesisData)

	// Create the genesis block
	// Timestamp of genesis block is 0. It has no parent.
	genesisBlock, err := vm.NewBlock(ids.Empty, 0, genesisDataArr, time.Unix(0, 0))
	if err != nil {
		log.Error("error while creating genesis block: %v", err)
		return err
	}

	// Put genesis block to state
	if err := vm.state.PutBlock(genesisBlock); err != nil {
		log.Error("error while saving genesis block: %v", err)
		return err
	}

	// Accept the genesis block
	// Sets [vm.lastAccepted] and [vm.preferred]
	if err := genesisBlock.Accept(); err != nil {
		return fmt.Errorf("error accepting genesis block: %w", err)
	}

	// Mark this vm's state as initialized, so we can skip initGenesis in further restarts
	if err := vm.state.SetInitialized(); err != nil {
		return fmt.Errorf("error while setting db to initialized: %w", err)
	}

	// Flush VM's database to underlying db
	return vm.state.Commit()
}
```

#### CreateHandlers

Registers the handlers defined in `Service`. See [below](/docs/avalanche-l1s/golang-vms/simple-golang-vm#api) for more on APIs.
```go title="timestampvm/vm.go"
// CreateHandlers returns a map where:
// Keys: The path extension for this blockchain's API (empty in this case)
// Values: The handler for the API
// In this case, our blockchain has only one API, which we name timestamp,
// and it has no path extension, so the API endpoint is:
// [Node IP]/ext/bc/[this blockchain's ID]
// See API section in documentation for more information
func (vm *VM) CreateHandlers() (map[string]*common.HTTPHandler, error) {
	server := rpc.NewServer()
	server.RegisterCodec(json.NewCodec(), "application/json")
	server.RegisterCodec(json.NewCodec(), "application/json;charset=UTF-8")
	// Name is "timestampvm"
	if err := server.RegisterService(&Service{vm: vm}, Name); err != nil {
		return nil, err
	}

	return map[string]*common.HTTPHandler{
		"": {
			Handler: server,
		},
	}, nil
}
```

#### CreateStaticHandlers

Registers the static handlers defined in `StaticService`. See [below](/docs/avalanche-l1s/golang-vms/simple-golang-vm#static-api) for more on static APIs.

```go title="timestampvm/vm.go"
// CreateStaticHandlers returns a map where:
// Keys: The path extension for this VM's static API
// Values: The handler for that static API
func (vm *VM) CreateStaticHandlers() (map[string]*common.HTTPHandler, error) {
	server := rpc.NewServer()
	server.RegisterCodec(json.NewCodec(), "application/json")
	server.RegisterCodec(json.NewCodec(), "application/json;charset=UTF-8")
	if err := server.RegisterService(&StaticService{}, Name); err != nil {
		return nil, err
	}

	return map[string]*common.HTTPHandler{
		"": {
			LockOptions: common.NoLock,
			Handler:     server,
		},
	}, nil
}
```

#### BuildBlock

`BuildBlock` builds a new block and returns it. It is mainly requested by the consensus engine.
```go title="timestampvm/vm.go"
// BuildBlock returns a block that this vm wants to add to consensus
func (vm *VM) BuildBlock() (snowman.Block, error) {
	if len(vm.mempool) == 0 { // There is no block to be built
		return nil, errNoPendingBlocks
	}

	// Get the value to put in the new block
	value := vm.mempool[0]
	vm.mempool = vm.mempool[1:]

	// Notify consensus engine that there are more pending data for blocks
	// (if that is the case) when done building this block
	if len(vm.mempool) > 0 {
		defer vm.NotifyBlockReady()
	}

	// Gets Preferred Block
	preferredBlock, err := vm.getBlock(vm.preferred)
	if err != nil {
		return nil, fmt.Errorf("couldn't get preferred block: %w", err)
	}
	preferredHeight := preferredBlock.Height()

	// Build the block with preferred height
	newBlock, err := vm.NewBlock(vm.preferred, preferredHeight+1, value, time.Now())
	if err != nil {
		return nil, fmt.Errorf("couldn't build block: %w", err)
	}

	// Verifies block
	if err := newBlock.Verify(); err != nil {
		return nil, err
	}

	return newBlock, nil
}
```

#### NotifyBlockReady

`NotifyBlockReady` is a helper method that sends messages to the consensus engine through the `toEngine` channel.

```go title="timestampvm/vm.go"
// NotifyBlockReady tells the consensus engine that a new block
// is ready to be created
func (vm *VM) NotifyBlockReady() {
	select {
	case vm.toEngine <- common.PendingTxs:
	default:
		vm.ctx.Log.Debug("dropping message to consensus engine")
	}
}
```

#### GetBlock

`GetBlock` returns the block with the given block ID.

```go title="timestampvm/vm.go"
// GetBlock implements the snowman.ChainVM interface
func (vm *VM) GetBlock(blkID ids.ID) (snowman.Block, error) { return vm.getBlock(blkID) }

func (vm *VM) getBlock(blkID ids.ID) (*Block, error) {
	// If block is in memory, return it.
	if blk, exists := vm.verifiedBlocks[blkID]; exists {
		return blk, nil
	}

	return vm.state.GetBlock(blkID)
}
```

#### `proposeBlock`

This method adds a piece of data to the mempool and notifies the consensus layer of the blockchain that a new block is ready to be built and voted on. It is called by the API method `ProposeBlock`, which we'll see later.

```go title="timestampvm/vm.go"
// proposeBlock appends [data] to [p.mempool].
// Then it notifies the consensus engine
// that a new block is ready to be added to consensus
// (namely, a block with data [data])
func (vm *VM) proposeBlock(data [dataLen]byte) {
	vm.mempool = append(vm.mempool, data)
	vm.NotifyBlockReady()
}
```

#### ParseBlock

Parses a block from its byte representation.

```go title="timestampvm/vm.go"
// ParseBlock parses [bytes] to a snowman.Block
// This function is used by the vm's state to unmarshal blocks saved in state
// and by the consensus layer when it receives the byte representation of a block
// from another node
func (vm *VM) ParseBlock(bytes []byte) (snowman.Block, error) {
	// A new empty block
	block := &Block{}

	// Unmarshal the byte repr. of the block into our empty block
	_, err := Codec.Unmarshal(bytes, block)
	if err != nil {
		return nil, err
	}

	// Initialize the block
	block.Initialize(bytes, choices.Processing, vm)

	if blk, err := vm.getBlock(block.ID()); err == nil {
		// If we have seen this block before, return it with the most up-to-date
		// info
		return blk, nil
	}

	// Return the block
	return block, nil
}
```

#### NewBlock

`NewBlock` creates a new block with the given block parameters.
```go title="timestampvm/vm.go"
// NewBlock returns a new Block where:
// - the block's parent is [parentID]
// - the block's data is [data]
// - the block's timestamp is [timestamp]
func (vm *VM) NewBlock(parentID ids.ID, height uint64, data [dataLen]byte, timestamp time.Time) (*Block, error) {
	block := &Block{
		PrntID: parentID,
		Hght:   height,
		Tmstmp: timestamp.Unix(),
		Dt:     data,
	}

	// Get the byte representation of the block
	blockBytes, err := Codec.Marshal(CodecVersion, block)
	if err != nil {
		return nil, err
	}

	// Initialize the block by providing it with its byte representation
	// and a reference to this VM
	block.Initialize(blockBytes, choices.Processing, vm)

	return block, nil
}
```

#### SetPreference

`SetPreference` implements part of the `block.ChainVM` interface. It sets the preferred block ID.

```go title="timestampvm/vm.go"
// SetPreference sets the block with ID [ID] as the preferred block
func (vm *VM) SetPreference(id ids.ID) error {
	vm.preferred = id
	return nil
}
```

#### Other Functions

These functions need to be implemented for `block.ChainVM`. Most of them are just blank functions returning `nil`.
```go title="timestampvm/vm.go"
// Bootstrapped marks this VM as bootstrapped
func (vm *VM) Bootstrapped() error { return nil }

// Bootstrapping marks this VM as bootstrapping
func (vm *VM) Bootstrapping() error { return nil }

// Returns this VM's version
func (vm *VM) Version() (string, error) {
	return Version.String(), nil
}

func (vm *VM) Connected(id ids.ShortID, nodeVersion version.Application) error {
	return nil // noop
}

func (vm *VM) Disconnected(id ids.ShortID) error {
	return nil // noop
}

// This VM doesn't (currently) have any app-specific messages
func (vm *VM) AppGossip(nodeID ids.ShortID, msg []byte) error {
	return nil
}

// This VM doesn't (currently) have any app-specific messages
func (vm *VM) AppRequest(nodeID ids.ShortID, requestID uint32, time time.Time, request []byte) error {
	return nil
}

// This VM doesn't (currently) have any app-specific messages
func (vm *VM) AppResponse(nodeID ids.ShortID, requestID uint32, response []byte) error {
	return nil
}

// This VM doesn't (currently) have any app-specific messages
func (vm *VM) AppRequestFailed(nodeID ids.ShortID, requestID uint32) error {
	return nil
}

// Health implements the common.VM interface
func (vm *VM) HealthCheck() (interface{}, error) { return nil, nil }
```

### Factory

VMs should implement the `Factory` interface. The `New` method in the interface returns a new VM instance.

```go title="timestampvm/factory.go"
var _ vms.Factory = &Factory{}

// Factory ...
type Factory struct{}

// New ...
func (f *Factory) New(*snow.Context) (interface{}, error) { return &VM{}, nil }
```

### Static API

A VM may have a static API, which allows clients to call methods that do not query or update the state of a particular blockchain, but rather apply to the VM as a whole. This is analogous to static methods in computer programming. AvalancheGo uses [Gorilla's RPC library](https://www.gorillatoolkit.org/pkg/rpc) to implement HTTP APIs.

`StaticService` implements the static API for our VM.
```go title="timestampvm/static_service.go"
// StaticService defines the static API for the timestamp vm
type StaticService struct{}
```

#### Encode

For each API method, there is:

- A struct that defines the method's arguments
- A struct that defines the method's return values
- A method that implements the API method, and is parameterized on the above two structs

This API method encodes a string to its byte representation using a given encoding scheme. It can be used to encode data that is then put in a block and proposed as the next block for this chain.

```go title="timestampvm/static_service.go"
// EncodeArgs are arguments for Encode
type EncodeArgs struct {
	Data     string              `json:"data"`
	Encoding formatting.Encoding `json:"encoding"`
	Length   int32               `json:"length"`
}

// EncodeReply is the reply from Encoder
type EncodeReply struct {
	Bytes    string              `json:"bytes"`
	Encoding formatting.Encoding `json:"encoding"`
}

// Encoder returns the encoded data
func (ss *StaticService) Encode(_ *http.Request, args *EncodeArgs, reply *EncodeReply) error {
	if len(args.Data) == 0 {
		return fmt.Errorf("argument Data cannot be empty")
	}
	var argBytes []byte
	if args.Length > 0 {
		argBytes = make([]byte, args.Length)
		copy(argBytes, args.Data)
	} else {
		argBytes = []byte(args.Data)
	}

	bytes, err := formatting.EncodeWithChecksum(args.Encoding, argBytes)
	if err != nil {
		return fmt.Errorf("couldn't encode data as string: %s", err)
	}
	reply.Bytes = bytes
	reply.Encoding = args.Encoding
	return nil
}
```

#### Decode

This API method is the inverse of `Encode`.
```go title="timestampvm/static_service.go"
// DecoderArgs are arguments for Decode
type DecoderArgs struct {
	Bytes    string              `json:"bytes"`
	Encoding formatting.Encoding `json:"encoding"`
}

// DecoderReply is the reply from Decoder
type DecoderReply struct {
	Data     string              `json:"data"`
	Encoding formatting.Encoding `json:"encoding"`
}

// Decoder returns the Decoded data
func (ss *StaticService) Decode(_ *http.Request, args *DecoderArgs, reply *DecoderReply) error {
	bytes, err := formatting.Decode(args.Encoding, args.Bytes)
	if err != nil {
		return fmt.Errorf("couldn't Decode data as string: %s", err)
	}
	reply.Data = string(bytes)
	reply.Encoding = args.Encoding
	return nil
}
```

### API

A VM may also have a non-static HTTP API, which allows clients to query and update the blockchain's state.

`Service`'s declaration is:

```go title="timestampvm/service.go"
// Service is the API service for this VM
type Service struct{ vm *VM }
```

Note that this struct has a reference to the VM, so it can query and update state.

This VM's API has two methods. One allows a client to get a block by its ID. The other allows a client to propose the next block of this blockchain. The blockchain ID in the endpoint changes, since every blockchain has a unique ID.

#### `timestampvm.getBlock`

Get a block by its ID. If no ID is provided, get the latest block.

##### `getBlock` Signature

```
timestampvm.getBlock({id: string}) -> {
	id: string,
	data: string,
	timestamp: int,
	parentID: string
}
```

- `id` is the ID of the block being retrieved.
If omitted from arguments, gets the latest block.
- `data` is the base 58 (with checksum) representation of the block's 32 byte payload
- `timestamp` is the Unix timestamp when this block was created
- `parentID` is the block's parent

##### `getBlock` Example Call

```bash
curl -X POST --data '{
    "jsonrpc": "2.0",
    "method": "timestampvm.getBlock",
    "params":{
        "id":"xqQV1jDnCXDxhfnNT7tDBcXeoH2jC3Hh7Pyv4GXE1z1hfup5K"
    },
    "id": 1
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/sw813hGSWH8pdU9uzaYy9fCtYFfY7AjDd2c9rm64SbApnvjmk
```

##### `getBlock` Example Response

```json
{
  "jsonrpc": "2.0",
  "result": {
    "timestamp": "1581717416",
    "data": "11111111111111111111111111111111LpoYY",
    "id": "xqQV1jDnCXDxhfnNT7tDBcXeoH2jC3Hh7Pyv4GXE1z1hfup5K",
    "parentID": "22XLgiM5dfCwTY9iZnVk8ZPuPe3aSrdVr5Dfrbxd3ejpJd7oef"
  },
  "id": 1
}
```

##### `getBlock` Implementation

```go title="timestampvm/service.go"
// GetBlockArgs are the arguments to GetBlock
type GetBlockArgs struct {
	// ID of the block we're getting.
	// If left blank, gets the latest block
	ID *ids.ID `json:"id"`
}

// GetBlockReply is the reply from GetBlock
type GetBlockReply struct {
	Timestamp json.Uint64 `json:"timestamp"` // Timestamp of most recent block
	Data      string      `json:"data"`      // Data in the most recent block. Base 58 repr. of 32 bytes.
	ID        ids.ID      `json:"id"`        // String repr. of ID of the most recent block
	ParentID  ids.ID      `json:"parentID"`  // String repr. of ID of the most recent block's parent
}

// GetBlock gets the block whose ID is [args.ID]
// If [args.ID] is empty, get the latest block
func (s *Service) GetBlock(_ *http.Request, args *GetBlockArgs, reply *GetBlockReply) error {
	// If an ID is given, parse its string representation to an ids.ID
	// If no ID is given, ID becomes the ID of last accepted block
	var (
		id  ids.ID
		err error
	)

	if args.ID == nil {
		id, err = s.vm.state.GetLastAccepted()
		if err != nil {
			return errCannotGetLastAccepted
		}
	} else {
		id = *args.ID
	}

	// Get the block from the database
	block, err := s.vm.getBlock(id)
	if err != nil {
		return errNoSuchBlock
	}

	// Fill out the response with the block's data
	reply.ID = block.ID()
	reply.Timestamp = json.Uint64(block.Timestamp().Unix())
	reply.ParentID = block.Parent()
	data := block.Data()
	reply.Data, err = formatting.EncodeWithChecksum(formatting.CB58, data[:])

	return err
}
```

#### `timestampvm.proposeBlock`

Propose the next block on this blockchain.

##### `proposeBlock` Signature

```
timestampvm.proposeBlock({data: string}) -> {success: bool}
```

- `data` is the base 58 (with checksum) representation of the proposed block's 32 byte payload.

##### `proposeBlock` Example Call

```bash
curl -X POST --data '{
    "jsonrpc": "2.0",
    "method": "timestampvm.proposeBlock",
    "params":{
        "data":"SkB92YpWm4Q2iPnLGCuDPZPgUQMxajqQQuz91oi3xD984f8r"
    },
    "id": 1
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/sw813hGSWH8pdU9uzaYy9fCtYFfY7AjDd2c9rm64SbApnvjmk
```

##### `proposeBlock` Example Response

```json
{
  "jsonrpc": "2.0",
  "result": {
    "Success": true
  },
  "id": 1
}
```

##### `proposeBlock` Implementation

```go title="timestampvm/service.go"
// ProposeBlockArgs are the arguments to ProposeBlock
type ProposeBlockArgs struct {
	// Data for the new block. Must be base 58 encoding (with checksum) of 32 bytes.
	Data string
}

// ProposeBlockReply is the reply from function ProposeBlock
type ProposeBlockReply struct {
	// True if the operation was successful
	Success bool
}

// ProposeBlock is an API method to propose a new block whose data is [args].Data.
// [args].Data must be a string repr. of a 32 byte array
func (s *Service) ProposeBlock(_ *http.Request, args *ProposeBlockArgs, reply *ProposeBlockReply) error {
	bytes, err := formatting.Decode(formatting.CB58, args.Data)
	if err != nil || len(bytes) != dataLen {
		return errBadData
	}

	var data [dataLen]byte         // The data as an array of bytes
	copy(data[:], bytes[:dataLen]) // Copy the decoded bytes into data

	s.vm.proposeBlock(data)
	reply.Success = true
	return nil
}
```

### Plugin

In order to make this VM compatible with `go-plugin`, we need to define a `main` package and function, which serves our VM over gRPC so that AvalancheGo can call its methods.

`main.go`'s contents are:

```go title="main/main.go"
func main() {
	log.Root().SetHandler(log.LvlFilterHandler(log.LvlDebug, log.StreamHandler(os.Stderr, log.TerminalFormat())))

	plugin.Serve(&plugin.ServeConfig{
		HandshakeConfig: rpcchainvm.Handshake,
		Plugins: map[string]plugin.Plugin{
			"vm": rpcchainvm.New(&timestampvm.VM{}),
		},

		// A non-nil value here enables gRPC serving for this plugin...
		GRPCServer: plugin.DefaultGRPCServer,
	})
}
```

Now AvalancheGo's `rpcchainvm` can connect to this plugin and call its methods.

### Executable Binary

This VM has a [build script](https://github.com/ava-labs/timestampvm/blob/v1.2.1/scripts/build.sh) that builds an executable of this VM (when invoked, it runs the `main` function from above).

The path to the executable, as well as its name, can be provided to the build script via arguments.
For example:

```bash
./scripts/build.sh ../avalanchego/build/plugins timestampvm
```

If no argument is given, the path defaults to a binary named with the default VM ID: `$GOPATH/src/github.com/ava-labs/avalanchego/build/plugins/tGas3T58KzdjLHhBDMnH2TvrddhqTji5iZAMZ3RXs2NLpSnhH`

The name `tGas3T58KzdjLHhBDMnH2TvrddhqTji5iZAMZ3RXs2NLpSnhH` is the CB58-encoded 32 byte identifier for the VM. For the timestampvm, this is the string "timestamp" zero-extended in a 32 byte array and encoded in CB58.

### VM Aliases

Each VM has a predefined, static ID. For instance, the default ID of the TimestampVM is `tGas3T58KzdjLHhBDMnH2TvrddhqTji5iZAMZ3RXs2NLpSnhH`.

The name of the VM binary is also its static ID and should not be changed manually; changing the name of the VM binary will result in AvalancheGo failing to start the VM. To reference a VM by another name, define a VM alias as described below.

It's possible to give aliases for these IDs. For example, we can alias `TimestampVM` by creating a JSON file at `~/.avalanchego/configs/vms/aliases.json` with:

```json
{
  "tGas3T58KzdjLHhBDMnH2TvrddhqTji5iZAMZ3RXs2NLpSnhH": [
    "timestampvm",
    "timestamp"
  ]
}
```

### Installing a VM

AvalancheGo searches for and registers plugins under the `plugins` [directory](/docs/nodes/configure/configs-flags#--plugin-dir-string). To install the virtual machine onto your node, you need to move the built virtual machine binary under this directory. Virtual machine executable names must be either a full virtual machine ID (encoded in CB58) or a VM alias.

Copy the binary into the plugins directory:

```bash
cp -n <path-to-vm-binary> $GOPATH/src/github.com/ava-labs/avalanchego/build/plugins/
```

#### Node Is Not Running

If your node isn't running yet, you can install all virtual machines under your `plugins` directory by starting the node.

#### Node Is Already Running

Load the binary with the `loadVMs` API.
```bash
curl -sX POST --data '{
    "jsonrpc":"2.0",
    "id"     :1,
    "method" :"admin.loadVMs",
    "params" :{}
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/admin
```

Confirm that the response of `loadVMs` contains the newly installed virtual machine `tGas3T58KzdjLHhBDMnH2TvrddhqTji5iZAMZ3RXs2NLpSnhH`. You'll see this virtual machine, as well as any others that weren't previously installed, in the response.

```json
{
  "jsonrpc": "2.0",
  "result": {
    "newVMs": {
      "tGas3T58KzdjLHhBDMnH2TvrddhqTji5iZAMZ3RXs2NLpSnhH": [
        "timestampvm",
        "timestamp"
      ],
      "spdxUxVJQbX85MGxMHbKw1sHxMnSqJ3QBzDyDYEP3h6TLuxqQ": []
    }
  },
  "id": 1
}
```

Now, this VM's static API can be accessed at endpoints `/ext/vm/timestampvm` and `/ext/vm/timestamp`. For more details about VM configs, see [here](/docs/nodes/configure/configs-flags#virtual-machine-vm-configs).

In this tutorial, we used the VM's ID as the executable name to simplify the process. However, AvalancheGo would also accept `timestampvm` or `timestamp` since those were registered as aliases in the previous step.

## Wrapping Up

That's it! That's the entire implementation of a VM which defines a blockchain-based timestamp server. In this tutorial, we learned:

- The `block.ChainVM` interface, which all VMs that define a linear chain must implement
- The `snowman.Block` interface, which all blocks that are part of a linear chain must implement
- The `rpcchainvm` type, which allows blockchains to run in their own processes
- An actual implementation of `block.ChainVM` and `snowman.Block`

# Customize an Avalanche L1 (/docs/avalanche-l1s/evm-configuration/customize-avalanche-l1)

---
title: Customize an Avalanche L1
description: Learn how to customize your EVM-powered Avalanche L1.
---

All Avalanche L1s can be customized by using [Avalanche L1 Configs](#avalanche-l1-configs). An Avalanche L1 can have one or more blockchains. For example, the Primary Network, which is itself a (special) Avalanche L1, has three blockchains.
Each chain can be further customized using a chain-specific configuration file. See [here](/docs/nodes/configure/configs-flags) for a detailed explanation.

An Avalanche L1 created by or forked from [Subnet-EVM](https://github.com/ava-labs/subnet-evm) can be customized using one or more of the following methods:

- [Genesis](#genesis)
- [Precompile](#precompiles)
- [Upgrade Configs](#network-upgrades-enabledisable-precompiles)
- [Chain Configs](#avalanchego-chain-configs)

## Avalanche L1 Configs

An Avalanche L1 can be customized by setting parameters for the following:

- [Validator-only communication to create a private Avalanche L1](/docs/nodes/configure/avalanche-l1-configs#validatoronly-bool)
- [Consensus](/docs/nodes/configure/avalanche-l1-configs#consensus-parameters)
- [Gossip](/docs/nodes/configure/avalanche-l1-configs#gossip-configs)

See [here](/docs/nodes/configure/avalanche-l1-configs) for more info.

## Genesis

Each blockchain has some genesis state when it's created. Each Virtual Machine defines the format and semantics of its genesis data.
The default genesis provided by Subnet-EVM, shown below, has some well-defined parameters:

```json
{
  "config": {
    "chainId": 43214,
    "homesteadBlock": 0,
    "eip150Block": 0,
    "eip150Hash": "0x2086799aeebeae135c246c65021c82b4e15a2c451340993aacfd2751886514f0",
    "eip155Block": 0,
    "eip158Block": 0,
    "byzantiumBlock": 0,
    "constantinopleBlock": 0,
    "petersburgBlock": 0,
    "istanbulBlock": 0,
    "muirGlacierBlock": 0,
    "feeConfig": {
      "gasLimit": 15000000,
      "minBaseFee": 25000000000,
      "targetGas": 15000000,
      "baseFeeChangeDenominator": 36,
      "minBlockGasCost": 0,
      "maxBlockGasCost": 1000000,
      "targetBlockRate": 2,
      "blockGasCostStep": 200000
    },
    "allowFeeRecipients": false
  },
  "alloc": {
    "8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC": {
      "balance": "0x295BE96E64066972000000"
    }
  },
  "nonce": "0x0",
  "timestamp": "0x0",
  "extraData": "0x00",
  "gasLimit": "0xe4e1c0",
  "difficulty": "0x0",
  "mixHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
  "coinbase": "0x0000000000000000000000000000000000000000",
  "number": "0x0",
  "gasUsed": "0x0",
  "parentHash": "0x0000000000000000000000000000000000000000000000000000000000000000"
}
```

### Chain Config

`chainId`: Denotes the chain ID of the chain to be created. It must be picked carefully, since a conflict with other chains can cause issues. One suggestion is to check [chainlist.org](https://chainlist.org/) to avoid an ID collision, and to reserve and publish your chain ID properly. You can use the `eth_getChainConfig` RPC call to get the current chain config. See [here](/docs/rpcs/subnet-evm#eth_getchainconfig) for more info.

#### Hard Forks

`homesteadBlock`, `eip150Block`, `eip150Hash`, `eip155Block`, `byzantiumBlock`, `constantinopleBlock`, `petersburgBlock`, `istanbulBlock`, and `muirGlacierBlock` are EVM hard fork activation times. Changing these may cause issues, so treat them carefully.
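Before deploying, it can be worth sanity-checking the genesis file programmatically. The sketch below (standard library only; `chainIDFromGenesis` is an illustrative helper, not part of Subnet-EVM) reads the `chainId` from a genesis JSON like the example above:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// chainIDFromGenesis extracts config.chainId from a Subnet-EVM-style genesis
// JSON. Illustrative helper only; not part of Subnet-EVM itself.
func chainIDFromGenesis(data []byte) (uint64, error) {
	var genesis struct {
		Config struct {
			ChainID uint64 `json:"chainId"`
		} `json:"config"`
	}
	if err := json.Unmarshal(data, &genesis); err != nil {
		return 0, err
	}
	if genesis.Config.ChainID == 0 {
		return 0, fmt.Errorf("genesis is missing config.chainId")
	}
	return genesis.Config.ChainID, nil
}

func main() {
	// A fragment of the default genesis shown above
	genesis := []byte(`{"config": {"chainId": 43214}}`)
	id, err := chainIDFromGenesis(genesis)
	if err != nil {
		panic(err)
	}
	fmt.Println("chainId:", id)
}
```

A check like this can be extended to cross-reference the ID against [chainlist.org](https://chainlist.org/) before launch.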
#### Fee Config

`gasLimit`: Sets the maximum amount of gas consumed per block. This restriction puts a cap on the amount of computation that can be done in a single block, which in turn sets a limit on the maximum gas usage allowed for a single transaction. For reference, the C-Chain value is set to `15,000,000`.

`targetBlockRate`: Sets the target rate of block production in seconds. A target of 2 aims to produce a block every 2 seconds. If the network starts producing blocks at a faster rate, it indicates that more blocks than anticipated are being issued to the network, resulting in an increase in base fees. For the C-Chain, this value is set to `1`.

`minBaseFee`: Sets a lower bound on the EIP-1559 base fee of a block. Since the block's base fee sets the minimum gas price for any transaction included in that block, this effectively sets a minimum gas price for any transaction.

`targetGas`: Specifies the targeted amount of gas (including block gas cost) to consume within a rolling 10-second window. When the dynamic fee algorithm observes that network activity is above/below the `targetGas`, it increases/decreases the base fee proportionally to how far above/below the target actual network activity is. If the network starts producing blocks with gas cost higher than this, base fees are increased accordingly.

`baseFeeChangeDenominator`: Divides the difference between actual and target utilization to determine how much to increase/decrease the base fee. A larger denominator indicates a slower-changing, stickier base fee, while a lower denominator allows the base fee to adjust more quickly. For reference, the C-Chain value is set to `36`. This value sets the base fee to increase or decrease by a factor of `1/36` of the parent block's base fee.

`minBlockGasCost`: Sets the minimum amount of gas to charge for the production of a block. This value is set to `0` on the C-Chain.
`maxBlockGasCost`: Sets the maximum amount of gas to charge for the production of a block.

`blockGasCostStep`: Determines how much to increase/decrease the block gas cost depending on the amount of time elapsed since the previous block. If the block is produced at the target rate, the block gas cost will stay the same as the block gas cost for the parent block. If it is produced faster/slower, the block gas cost will be increased/decreased by the step value for each second faster/slower than the target block rate. For example, if a block is produced two seconds faster than the target block rate, the block gas cost will increase by `2 * blockGasCostStep`. If `blockGasCostStep` is set to a very large number, it effectively requires block production to go no faster than the `targetBlockRate`.

#### Custom Fee Recipients

See the section [Setting a Custom Fee Recipient](#setting-a-custom-fee-recipient).

### Genesis Block Header

The fields `nonce`, `timestamp`, `extraData`, `gasLimit`, `difficulty`, `mixHash`, `coinbase`, `number`, `gasUsed`, and `parentHash` define the genesis block header. The field `gasLimit` should be set to match the `gasLimit` set in the `feeConfig`. You do not need to change any of the other genesis header fields.

`nonce`, `mixHash` and `difficulty` are remnant parameters from Proof of Work systems. For Avalanche, these don't play any relevant role, so you should just leave them at their default values:

`nonce`: In Proof of Work systems, this value is the result of the mining process iteration. It can be any value in the genesis block. The default value is `0x0`.

`mixHash`: The combination of `nonce` and `mixHash` makes it possible to verify that the block has really been cryptographically mined and is, from this aspect, valid. The default value is `0x0000000000000000000000000000000000000000000000000000000000000000`.

`difficulty`: The difficulty level applied during the nonce-discovery process of this block.
Default value is `0x0`. `timestamp`: The timestamp of the creation of the genesis block. This is commonly set to `0x0`. `extraData`: Optional extra data that can be included in the genesis block. This is commonly set to `0x`. `gasLimit`: The total amount of gas that can be used in a single block. It should be set to the same value as in the [fee config](#fee-config). The value `e4e1c0` is hexadecimal and is equal to `15,000,000`. `coinbase`: Refers to the address of the block producers. This also means it represents the recipient of the block reward. It is usually set to `0x0000000000000000000000000000000000000000` for the genesis block. To allow fee recipients in Subnet-EVM, refer to [this section.](#setting-a-custom-fee-recipient) `parentHash`: This is the Keccak 256-bit hash of the entire parent block's header. It is usually set to `0x0000000000000000000000000000000000000000000000000000000000000000` for the genesis block. `gasUsed`: This is the amount of gas used by the genesis block. It is usually set to `0x0`. `number`: This is the number of the genesis block. This required to be `0x0` for the genesis. Otherwise it will error. ### Genesis Examples[​](#genesis-examples "Direct link to heading") Another example of a genesis file can be found in the [networks folder](https://github.com/ava-labs/public-chain-assets/blob/1951594346dcc91682bdd8929bcf8c1bf6a04c33/chains/11111/genesis.json). Please remove `airdropHash` and `airdropAmount` fields if you want to start with it. Here are a few examples on how a genesis file is used: [scripts/run.sh](https://github.com/ava-labs/subnet-evm/blob/master/scripts/run.sh#L99) ### Setting the Genesis Allocation[​](#setting-the-genesis-allocation "Direct link to heading") Alloc defines addresses and their initial balances. This should be changed accordingly for each chain. 
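Balances in the genesis file are denominated in Wei (10^18 Wei = 1 whole native token) and are usually written as hex strings. The conversions used throughout this section can be sanity-checked with a few lines of Python (a standalone sketch, not part of any Avalanche tooling):

```python
WEI_PER_TOKEN = 10**18  # 1 whole native token = 10^18 Wei

# The default gasLimit used in this guide, in hex and decimal.
assert int("e4e1c0", 16) == 15_000_000

# 1 whole token in Wei, as it appears in hex-encoded fields.
assert hex(WEI_PER_TOKEN) == "0xde0b6b3a7640000"

# The default allocation balance: 50,000,000 whole tokens.
assert int("0x295BE96E64066972000000", 16) == 50_000_000 * WEI_PER_TOKEN

def to_whole_tokens(hex_balance: str) -> int:
    """Convert a hex Wei balance from an `alloc` entry to whole tokens."""
    return int(hex_balance, 16) // WEI_PER_TOKEN

print(to_whole_tokens("0x52B7D2DCC80CD2E4000000"))  # 100000000
```

Running the sketch confirms the figures quoted in the allocation examples below.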
If you don't provide any genesis allocation, you won't be able to interact with your new chain, since all transactions require a fee to be paid from the sender's balance.

The `alloc` field expects key-value pairs. The key of each entry must be a valid `address`. The `balance` field in the value can be either a hexadecimal string or a number indicating the initial balance of the address. The default allocation contains `8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC` with a balance of `50000000000000000000000000`.

Default:

```json
"alloc": {
  "8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC": {
    "balance": "0x295BE96E64066972000000"
  }
}
```

To specify a different genesis allocation, populate the `alloc` field in the genesis JSON as follows:

```json
"alloc": {
  "8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC": {
    "balance": "0x52B7D2DCC80CD2E4000000"
  },
  "Ab5801a7D398351b8bE11C439e05C5B3259aeC9B": {
    "balance": "0xa796504b1cb5a7c0000"
  }
}
```

The keys in the allocation are [hex](https://en.wikipedia.org/wiki/Hexadecimal) addresses **without the canonical `0x` prefix**. The balances are denominated in Wei ([10^18 Wei = 1 Whole Unit of Native Token](https://eth-converter.com/)) and expressed as hex strings **with the canonical `0x` prefix**. You can use [this converter](https://www.rapidtables.com/convert/number/hex-to-decimal.html) to translate between decimal and hex numbers.

The above example yields the following genesis allocations, denominated in whole units of the native token (that is, 1 AVAX/1 WAGMI):

```bash
0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC: 100000000 (0x52B7D2DCC80CD2E4000000=100000000000000000000000000 Wei)
0xAb5801a7D398351b8bE11C439e05C5B3259aeC9B: 49463 (0xa796504b1cb5a7c0000=49463000000000000000000 Wei)
```

### Setting a Custom Fee Recipient

By default, all fees are burned (sent to the black hole address); this corresponds to `"allowFeeRecipients": false`.
However, it is possible to enable block producers to set a fee recipient (who gets compensated for the blocks they produce). To enable this feature, add the following to your genesis file (under the `"config"` key):

```json
{
  "config": {
    "allowFeeRecipients": true
  }
}
```

#### Fee Recipient Address

With `allowFeeRecipients` enabled, validators can specify the address that collects their fees. They need to update their EVM [chain config](#avalanchego-chain-configs) with the following to specify where fees should be sent:

```json
{
  "feeRecipient": ""
}
```

If `allowFeeRecipients` is enabled on the Avalanche L1 but a validator doesn't specify a `feeRecipient`, the fees are burned in the blocks it produces.

This mechanism can also be activated as a precompile. See the [Changing Fee Reward Mechanisms](#changing-fee-reward-mechanisms) section for more details.

## Precompiles

Subnet-EVM can provide custom functionality with precompiled contracts. These precompiled contracts can be activated through the `ChainConfig` (in genesis or as an upgrade).

### AllowList Interface

The `AllowList` interface is used by precompiles to check whether a given address is allowed to use a precompiled contract. `AllowList` consists of three roles: `Admin`, `Manager`, and `Enabled`. `Admin` addresses can add or remove other `Admin` and `Enabled` addresses. The `Manager` role, introduced with the Durango upgrade, can add or remove `Enabled` addresses, without the ability to add or remove `Admin` or `Manager` addresses. `Enabled` addresses can use the precompiled contract but cannot modify any roles.

`AllowList` adds the `adminAddresses`, `managerAddresses`, and `enabledAddresses` fields to precompile contract configurations.
For instance, the fee manager precompile contract configuration looks like this:

```json
{
  "feeManagerConfig": {
    "blockTimestamp": 0,
    "adminAddresses": [],
    "managerAddresses": [],
    "enabledAddresses": []
  }
}
```

An `AllowList` configuration affects only the related precompile. For instance, an admin address in `feeManagerConfig` does not affect the admin addresses in other activated precompiles.

The `AllowList` Solidity interface is defined as follows, and can be found in [IAllowList.sol](https://github.com/ava-labs/subnet-evm/blob/helloworld-official-tutorial-v2/contracts/contracts/interfaces/IAllowList.sol):

```solidity
//SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

interface IAllowList {
  event RoleSet(
    uint256 indexed role,
    address indexed account,
    address indexed sender,
    uint256 oldRole
  );

  // Set [addr] to have the admin role over the precompile contract.
  function setAdmin(address addr) external;

  // Set [addr] to be enabled on the precompile contract.
  function setEnabled(address addr) external;

  // Set [addr] to have the manager role over the precompile contract.
  function setManager(address addr) external;

  // Set [addr] to have no role for the precompile contract.
  function setNone(address addr) external;

  // Read the status of [addr].
  function readAllowList(address addr) external view returns (uint256 role);
}
```

`readAllowList(addr)` returns a `uint256` with a value of 0, 1, 2, or 3, corresponding to the roles `None`, `Enabled`, `Admin`, and `Manager` respectively.

`RoleSet` is an event emitted when a role is set for an address. It includes the role, the modified address, and the sender as indexed parameters, and the old role as a non-indexed parameter. Events in precompiles are activated after the Durango upgrade.

Note: `AllowList` is not an actual contract but just an interface. It's not callable by itself; it is used by other precompiles. Check the other precompile sections to see how this works.
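The role rules described above are simple enough to capture in a few lines. This sketch (hypothetical helper names; role codes as returned by `readAllowList`) summarizes who can do what:

```python
# Role codes as returned by readAllowList; Manager (3) exists after Durango.
NONE, ENABLED, ADMIN, MANAGER = 0, 1, 2, 3

def can_use_precompile(role: int) -> bool:
    # Admin, Manager, and Enabled addresses may all use the precompile.
    return role in (ENABLED, ADMIN, MANAGER)

def can_modify_enabled(role: int) -> bool:
    # Admin and Manager can add/remove Enabled addresses.
    return role in (ADMIN, MANAGER)

def can_modify_admin(role: int) -> bool:
    # Only Admin can add/remove Admin or Manager addresses.
    return role == ADMIN

assert not can_use_precompile(NONE)
assert can_use_precompile(ENABLED) and not can_modify_enabled(ENABLED)
assert can_modify_enabled(MANAGER) and not can_modify_admin(MANAGER)
```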
### Restricting Smart Contract Deployers

If you'd like to restrict who can deploy contracts on your Avalanche L1, you can provide an `AllowList` configuration in your genesis or upgrade file:

```json
{
  "contractDeployerAllowListConfig": {
    "blockTimestamp": 0,
    "adminAddresses": ["0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC"]
  }
}
```

In this example, `0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC` is named as the `Admin` of the `ContractDeployerAllowList`. This enables it to add other `Admin` or `Enabled` addresses. Both `Admin` and `Enabled` addresses can deploy contracts.

To provide a good UX with factory contracts, `tx.Origin` is checked for being a valid deployer instead of the caller of `CREATE`. This means that factory contracts can still create new contracts as long as the sender of the original transaction is an allow-listed deployer.

The `Stateful Precompile` contract powering the `ContractDeployerAllowList` adheres to the [AllowList Solidity interface](#allowlist-interface) at `0x0200000000000000000000000000000000000000` (you can load this interface and interact with it directly in Remix):

- If you attempt to add an `Enabled` address and you are not an `Admin`, you will see something like:
  ![admin fail](/images/customize1.png)
- If you attempt to deploy a contract but you are neither an `Admin` nor an `Enabled` address, you will see something like:
  ![deploy fail](/images/customize2.png)
- If you call `readAllowList(addr)`, you can read the current role of `addr`, returned as a `uint256` with a value of 0, 1, or 2, corresponding to the roles `None`, `Enabled`, and `Admin` respectively.

If you remove all of the admins from the allow list, it will no longer be possible to update the allow list without modifying Subnet-EVM to schedule a network upgrade.
#### Initial Contract Allow List Configuration

It's possible to enable this precompile with an initial configuration that activates its effect at the activation timestamp. This provides a way to enable the precompile without an admin address to manage the deployer list. With this, you can define a list of addresses that are allowed to deploy contracts. Since there is no admin address to manage the deployer list, it can only be modified through a network upgrade.

To use an initial configuration, specify the addresses in the `enabledAddresses` field in your genesis or upgrade file:

```json
{
  "contractDeployerAllowListConfig": {
    "blockTimestamp": 0,
    "enabledAddresses": ["0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC"]
  }
}
```

This allows only `0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC` to deploy contracts. For further information about precompile initial configurations, see [Initial Precompile Configurations](#initial-precompile-configurations).

### Restricting Who Can Submit Transactions

Similar to restricting contract deployers, this precompile restricts which addresses may submit transactions on the chain. As in the previous section, you can activate the precompile by including an `AllowList` configuration in your genesis file:

```json
{
  "config": {
    "txAllowListConfig": {
      "blockTimestamp": 0,
      "adminAddresses": ["0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC"]
    }
  }
}
```

In this example, `0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC` is named as the `Admin` of the `TransactionAllowList`. This enables it to add other `Admin`, `Manager`, or `Enabled` addresses. `Admin`, `Manager`, and `Enabled` addresses can submit transactions to the chain.
The `Stateful Precompile` contract powering the `TxAllowList` adheres to the [AllowList Solidity interface](#allowlist-interface) at `0x0200000000000000000000000000000000000002` (you can load this interface and interact with it directly in Remix):

- If you attempt to add an `Enabled` address and you are not an `Admin`, you will see something like:
  ![admin fail](/images/customize3.png)
- If you attempt to submit a transaction but you are not an `Admin`, a `Manager`, or an `Enabled` address, you will see something like: `cannot issue transaction from non-allow listed address`
- If you call `readAllowList(addr)`, you can read the current role of `addr`, returned as a `uint256` with a value of 0, 1, 2, or 3, corresponding to the roles `None`, `Enabled`, `Admin`, and `Manager` respectively.

If you remove all of the admins and managers from the allow list, it will no longer be possible to update the allow list without modifying Subnet-EVM to schedule a network upgrade.

#### Initial TX Allow List Configuration

It's possible to enable this precompile with an initial configuration that activates its effect at the activation timestamp. This provides a way to enable the precompile without an admin address to manage the TX allow list. With this, you can define a list of addresses that are allowed to submit transactions. Since there is no admin address to manage the TX list, it can only be modified through a network upgrade.

To use an initial configuration, specify the addresses in the `enabledAddresses` field in your genesis or upgrade file:

```json
{
  "txAllowListConfig": {
    "blockTimestamp": 0,
    "enabledAddresses": ["0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC"]
  }
}
```

This allows only `0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC` to submit transactions. For further information about precompile initial configurations, see [Initial Precompile Configurations](#initial-precompile-configurations).
### Minting Native Coins

You can mint native (gas) coins with a precompiled contract. To activate this feature, provide `contractNativeMinterConfig` in genesis:

```json
{
  "config": {
    "contractNativeMinterConfig": {
      "blockTimestamp": 0,
      "adminAddresses": ["0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC"]
    }
  }
}
```

`adminAddresses` denotes admin accounts that can add other `Admin`, `Manager`, or `Enabled` accounts. `Admin`, `Manager`, and `Enabled` accounts are all eligible to mint native coins for other addresses. `ContractNativeMinter` uses the same methods as `ContractDeployerAllowList`.

The `Stateful Precompile` contract powering the `ContractNativeMinter` adheres to the following Solidity interface at `0x0200000000000000000000000000000000000001` (you can load this interface and interact with it directly in Remix):

```solidity
// (c) 2022-2023, Ava Labs, Inc. All rights reserved.
// See the file LICENSE for licensing terms.
pragma solidity ^0.8.0;

import "./IAllowList.sol";

interface INativeMinter is IAllowList {
  event NativeCoinMinted(
    address indexed sender,
    address indexed recipient,
    uint256 amount
  );

  // Mint [amount] number of native coins and send to [addr]
  function mintNativeCoin(address addr, uint256 amount) external;
}
```

`mintNativeCoin` takes an address and the amount of native coins to be minted. The amount is denominated in the minimum denomination of the native coin (10^18 units per whole coin). For example, if you want to mint 1 whole native coin, you need to pass `1 * 10^18` as the amount. A `NativeCoinMinted` event is emitted with the sender, recipient, and amount when a native coin is minted.

Note that this interface extends `IAllowList` directly, meaning that it uses the same `AllowList` functions such as `readAllowList`, `setAdmin`, `setManager`, `setEnabled`, and `setNone`. For more information, see the [AllowList Solidity interface](#allowlist-interface).

The EVM does not prevent overflows when storing the address balance.
Overflows in balance opcodes are handled by setting the balance to the maximum value. However, the same does not apply to API calls: if you mint more than the maximum balance, API calls will return the overflowed hex balance, which can break external tooling. Make sure the total supply of native coins is always less than `2^256 - 1`.

#### Initial Native Minter Configuration

It's possible to enable this precompile with an initial configuration that activates its effect at the activation timestamp. This provides a way to enable the precompile without an admin address to mint native coins. With this, you can define a list of addresses that receive an initial mint of the native coin when the precompile activates. This can be useful for networks that require a one-time mint without specifying any admin addresses.

To use an initial configuration, specify a map of addresses with their corresponding mint amounts in the `initialMint` field in your genesis or upgrade file:

```json
{
  "contractNativeMinterConfig": {
    "blockTimestamp": 0,
    "initialMint": {
      "0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC": "1000000000000000000",
      "0x10037Fb06Ec4aB8c870a92AE3f00cD58e5D484b3": "0xde0b6b3a7640000"
    }
  }
}
```

The amount field accepts either a decimal string or a hex string. The example above mints `1000000000000000000` units (1 whole native coin, denominated as 10^18) to each of the two addresses; `"0xde0b6b3a7640000"` in hex is equivalent to `1000000000000000000` in decimal. For further information about precompile initial configurations, see [Initial Precompile Configurations](#initial-precompile-configurations).

### Configuring Dynamic Fees

You can configure the parameters of the dynamic fee algorithm on chain using the `FeeConfigManager`.
To activate this feature, provide the `feeManagerConfig` in the genesis:

```json
{
  "config": {
    "feeManagerConfig": {
      "blockTimestamp": 0,
      "adminAddresses": ["0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC"]
    }
  }
}
```

The precompile implements the `FeeManager` interface, which includes the same `AllowList` interface used by `ContractNativeMinter`, `TxAllowList`, and so on. For an example of the `AllowList` interface, see the [AllowList Interface](#allowlist-interface) above.

The `Stateful Precompile` contract powering the `FeeConfigManager` adheres to the following Solidity interface at `0x0200000000000000000000000000000000000003` (you can load this interface and interact with it directly in Remix). It can also be found in [IFeeManager.sol](https://github.com/ava-labs/subnet-evm/blob/5faabfeaa021a64c2616380ed2d6ec0a96c8f96d/contract-examples/contracts/IFeeManager.sol):

```solidity
//SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import "./IAllowList.sol";

interface IFeeManager is IAllowList {
  struct FeeConfig {
    uint256 gasLimit;
    uint256 targetBlockRate;
    uint256 minBaseFee;
    uint256 targetGas;
    uint256 baseFeeChangeDenominator;
    uint256 minBlockGasCost;
    uint256 maxBlockGasCost;
    uint256 blockGasCostStep;
  }

  event FeeConfigChanged(
    address indexed sender,
    FeeConfig oldFeeConfig,
    FeeConfig newFeeConfig
  );

  // Set fee config fields to contract storage
  function setFeeConfig(
    uint256 gasLimit,
    uint256 targetBlockRate,
    uint256 minBaseFee,
    uint256 targetGas,
    uint256 baseFeeChangeDenominator,
    uint256 minBlockGasCost,
    uint256 maxBlockGasCost,
    uint256 blockGasCostStep
  ) external;

  // Get fee config from the contract storage
  function getFeeConfig()
    external
    view
    returns (
      uint256 gasLimit,
      uint256 targetBlockRate,
      uint256 minBaseFee,
      uint256 targetGas,
      uint256 baseFeeChangeDenominator,
      uint256 minBlockGasCost,
      uint256 maxBlockGasCost,
      uint256 blockGasCostStep
    );

  // Get the block number at which the fee config was last changed
  function getFeeConfigLastChangedAt() external view returns (uint256 blockNumber);
}
```

The `FeeConfigManager` precompile uses the `IAllowList` interface directly, meaning that it uses the same `AllowList` functions such as `readAllowList`, `setAdmin`, `setManager`, `setEnabled`, and `setNone`. For more information, see the [AllowList Solidity interface](#allowlist-interface).

In addition to the `AllowList` interface, the `FeeConfigManager` adds the following capabilities:

- `getFeeConfig`: retrieves the current dynamic fee config.
- `getFeeConfigLastChangedAt`: retrieves the block number of the last block in which the fee config was updated.
- `setFeeConfig`: sets the dynamic fee config on chain (see [here](#fee-config) for details on the fee config parameters). This function can only be called by an `Admin`, `Manager`, or `Enabled` address.
- `FeeConfigChanged`: an event emitted when the fee config is updated, carrying the sender, the old fee config, and the new fee config.

You can also get the fee configuration at a given block with the `eth_feeConfig` RPC method. For more information, see [here](/docs/rpcs/subnet-evm#eth_feeconfig).

#### Initial Fee Config Configuration

It's possible to enable this precompile with an initial configuration that activates its effect at the activation timestamp. This provides a way to define your fee structure to take effect at activation.

To use the initial configuration, specify the fee config in the `initialFeeConfig` field in your genesis or upgrade file:

```json
{
  "feeManagerConfig": {
    "blockTimestamp": 0,
    "initialFeeConfig": {
      "gasLimit": 20000000,
      "targetBlockRate": 2,
      "minBaseFee": 1000000000,
      "targetGas": 100000000,
      "baseFeeChangeDenominator": 48,
      "minBlockGasCost": 0,
      "maxBlockGasCost": 10000000,
      "blockGasCostStep": 500000
    }
  }
}
```

This sets the fee config to the values specified in the `initialFeeConfig` field.
For further information about precompile initial configurations, see [Initial Precompile Configurations](#initial-precompile-configurations).

### Avalanche Warp Messaging

The Warp precompile can only be activated on Mainnet after Durango, which activated at 11 AM ET (4 PM UTC) on Wednesday, March 6th, 2024. If you plan to use Warp messaging in your own Subnet-EVM chain on Mainnet, you should upgrade to AvalancheGo 1.11.11 or later and coordinate your precompile upgrade. The Warp config's `blockTimestamp` must be set after `1709740800`, the Durango activation time.

## Contract Examples

Subnet-EVM contains example contracts for precompiles under `/contracts`. It's a Hardhat project with tests and tasks. For more information, see the [contract examples README](https://github.com/ava-labs/subnet-evm/tree/master/contracts#subnet-evm-contracts).

## Network Upgrades: Enable/Disable Precompiles

Performing a network upgrade requires coordinating the upgrade network-wide. A network upgrade changes the rule set used to process and verify blocks, so any node that upgrades incorrectly or fails to upgrade by the time the upgrade takes effect may become out of sync with the rest of the network.

Any mistakes in configuring network upgrades or coordinating them on validators may cause the network to halt, and recovery may be difficult.

In addition to specifying the configuration for each of the above precompiles in the genesis chain config, they can be individually enabled or disabled at a given timestamp as a network upgrade.
Disabling a precompile disables calling it and destructs its storage, so it can be enabled again at a later timestamp with a new configuration if desired.

These upgrades must be specified in a file named `upgrade.json`, placed in the same directory where [`config.json`](#avalanchego-chain-configs) resides: `{chain-config-dir}/{blockchainID}/upgrade.json`. For example, a `WAGMI Subnet` upgrade should be placed in `~/.avalanchego/configs/chains/2ebCneCbwthjQ1rYT41nhd7M76Hc6YmosMAQrTFhBq8qeqh6tt/upgrade.json`.

The content of `upgrade.json` should be formatted as follows:

```json
{
  "precompileUpgrades": [
    {
      "[PRECOMPILE_NAME]": {
        "blockTimestamp": "[ACTIVATION_TIMESTAMP]", // unix timestamp the precompile should activate at
        "[PARAMETER]": "[VALUE]" // precompile-specific configuration options, e.g. "adminAddresses"
      }
    }
  ]
}
```

An invalid `blockTimestamp` in an upgrade file causes the upgrade to fail. The `blockTimestamp` value should be a valid Unix timestamp in the _future_ relative to the _head of the chain_. If the node encounters a `blockTimestamp` that is in the past, it will fail on startup.

To disable a precompile, use the following format:

```json
{
  "precompileUpgrades": [
    {
      "": {
        "blockTimestamp": "[DEACTIVATION_TIMESTAMP]", // unix timestamp the precompile should deactivate at
        "disable": true
      }
    }
  ]
}
```

Each item in `precompileUpgrades` must specify exactly one precompile to enable or disable, and the block timestamps must be in increasing order. Once an upgrade has been activated (a block after the specified timestamp has been accepted), it must always be present in `upgrade.json` exactly as it was configured at the time of activation; otherwise the node will refuse to start.

Enabling and disabling a precompile is a network upgrade and should always be done with caution. For safety, you should treat `precompileUpgrades` as append-only.
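The structural rules just described (exactly one precompile per item, timestamps in increasing order, activation in the future) can be sanity-checked offline before restarting a node. This is a hypothetical helper sketch, not part of any official tooling, and it approximates "future relative to the head of the chain" with the wall clock:

```python
import json
import time

def check_precompile_upgrades(upgrade_json: str, now=None) -> None:
    """Raise ValueError if an upgrade.json fragment breaks the basic rules."""
    now = int(time.time()) if now is None else now
    upgrades = json.loads(upgrade_json).get("precompileUpgrades", [])
    last_ts = -1
    for item in upgrades:
        if len(item) != 1:
            raise ValueError("each item must specify exactly one precompile")
        (name, cfg), = item.items()
        ts = int(cfg["blockTimestamp"])
        if ts <= last_ts:
            raise ValueError("blockTimestamps must be in increasing order")
        if ts <= now:
            # Approximation: the node compares against the chain head, not the clock.
            raise ValueError(f"{name}: blockTimestamp must be in the future")
        last_ts = ts

good = '{"precompileUpgrades": [{"txAllowListConfig": {"blockTimestamp": 2000000000}}]}'
check_precompile_upgrades(good, now=1700000000)  # passes silently
```

A check like this catches malformed files before a restart, which matters because a bad `upgrade.json` makes the node fail on startup.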
As a last-resort measure, it is possible to abort or reconfigure a precompile upgrade that has not yet activated, since the chain is still processing blocks using the prior rule set. If aborting an upgrade becomes necessary, remove the precompile upgrade from the end of the list of upgrades in `upgrade.json`. As long as the blockchain has not accepted a block with a timestamp past that upgrade's timestamp, this aborts the upgrade for that node.

### Example

```json
{
  "precompileUpgrades": [
    {
      "feeManagerConfig": {
        "blockTimestamp": 1668950000,
        "adminAddresses": ["0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC"]
      }
    },
    {
      "txAllowListConfig": {
        "blockTimestamp": 1668960000,
        "adminAddresses": ["0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC"]
      }
    },
    {
      "feeManagerConfig": {
        "blockTimestamp": 1668970000,
        "disable": true
      }
    }
  ]
}
```

This example enables `feeManagerConfig` at the first block with timestamp >= `1668950000`, enables `txAllowListConfig` at the first block with timestamp >= `1668960000`, and disables `feeManagerConfig` at the first block with timestamp >= `1668970000`.

When a precompile disable takes effect (that is, after its `blockTimestamp` has passed), its storage is wiped. If you want to re-enable it, you will need to treat it as a new configuration.

After you have created `upgrade.json` and placed it in the chain config directory, restart the node for the upgrade file to be loaded (again, make sure you don't restart all Avalanche L1 validators at once!). On restart, the node prints out the configuration of the chain, where you can double-check that the upgrade has loaded correctly.
In our example:

```bash
INFO [08-15|15:09:36.772] <2ebCneCbwthjQ1rYT41nhd7M76Hc6YmosMAQrTFhBq8qeqh6tt Chain> github.com/ava-labs/subnet-evm/eth/backend.go:155: Initialised chain configuration config="{ChainID: 11111 Homestead: 0 EIP150: 0 EIP155: 0 EIP158: 0 Byzantium: 0 Constantinople: 0 Petersburg: 0 Istanbul: 0, Muir Glacier: 0, Subnet EVM: 0, FeeConfig: {\"gasLimit\":20000000,\"targetBlockRate\":2,\"minBaseFee\":1000000000,\"targetGas\":100000000,\"baseFeeChangeDenominator\":48,\"minBlockGasCost\":0,\"maxBlockGasCost\":10000000,\"blockGasCostStep\":500000}, AllowFeeRecipients: false, NetworkUpgrades: {\"subnetEVMTimestamp\":0}, PrecompileUpgrade: {}, UpgradeConfig: {\"precompileUpgrades\":[{\"feeManagerConfig\":{\"adminAddresses\":[\"0x8db97c7cece249c2b98bdc0226cc4c2a57bf52fc\"],\"enabledAddresses\":null,\"blockTimestamp\":1668950000}},{\"txAllowListConfig\":{\"adminAddresses\":[\"0x8db97c7cece249c2b98bdc0226cc4c2a57bf52fc\"],\"enabledAddresses\":null,\"blockTimestamp\":1668960000}},{\"feeManagerConfig\":{\"adminAddresses\":null,\"enabledAddresses\":null,\"blockTimestamp\":1668970000,\"disable\":true}}]}, Engine: Dummy Consensus Engine}"
```

Notice that the `precompileUpgrades` entry correctly reflects the changes. You can also check the activated precompiles at a given timestamp with the [`eth_getActivePrecompilesAt`](/docs/rpcs/subnet-evm#eth_getactiveprecompilesat) RPC method. The [`eth_getChainConfig`](/docs/rpcs/subnet-evm#eth_getchainconfig) RPC method also returns the configured upgrades in its response.

That's it: your Avalanche L1 is all set, and the desired upgrades will activate at the indicated timestamps!

### Initial Precompile Configurations

Precompiles can be managed by privileged addresses that change their configurations and activate their effects. For example, the `feeManagerConfig` precompile can have `adminAddresses` that can change the fee structure of the network.
```json
{
  "precompileUpgrades": [
    {
      "feeManagerConfig": {
        "blockTimestamp": 1668950000,
        "adminAddresses": ["0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC"]
      }
    }
  ]
}
```

In this example, only the address `0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC` is allowed to change the fee structure of the network. The admin address has to call the precompile in order to activate its effect; that is, it needs to send a transaction with a new fee config to perform the update. This is a very powerful feature, but it also gives a large amount of power to the admin address: if `0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC` is compromised, the network is compromised.

With initial configurations, precompiles can instead activate their effect immediately at the activation timestamp. This way, admin addresses can be omitted from the precompile configuration. For example, the `feeManagerConfig` precompile can use `initialFeeConfig` to set up the fee configuration at activation:

```json
{
  "precompileUpgrades": [
    {
      "feeManagerConfig": {
        "blockTimestamp": 1668950000,
        "initialFeeConfig": {
          "gasLimit": 20000000,
          "targetBlockRate": 2,
          "minBaseFee": 1000000000,
          "targetGas": 100000000,
          "baseFeeChangeDenominator": 48,
          "minBlockGasCost": 0,
          "maxBlockGasCost": 10000000,
          "blockGasCostStep": 500000
        }
      }
    }
  ]
}
```

Notice that there is no `adminAddresses` field in this configuration, so there are no admin addresses to manage the fee structure through this precompile. The precompile simply updates the fee configuration to the specified fee config when it activates at `blockTimestamp` `1668950000`.

It's still possible to add `adminAddresses` or `enabledAddresses` alongside these initial configurations. In that case, the precompile is activated with the initial configuration, and admin/enabled addresses can interact with the precompiled contract normally.
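For instance, combining an initial fee config with an admin address might look like the following fragment. This is an illustrative sketch that merges the two example configurations above; adjust the values for your own chain:

```json
{
  "precompileUpgrades": [
    {
      "feeManagerConfig": {
        "blockTimestamp": 1668950000,
        "adminAddresses": ["0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC"],
        "initialFeeConfig": {
          "gasLimit": 20000000,
          "targetBlockRate": 2,
          "minBaseFee": 1000000000,
          "targetGas": 100000000,
          "baseFeeChangeDenominator": 48,
          "minBlockGasCost": 0,
          "maxBlockGasCost": 10000000,
          "blockGasCostStep": 500000
        }
      }
    }
  ]
}
```

Here the fee config takes effect immediately at activation, while the admin address can still change it later by calling `setFeeConfig`.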
If you want to change a precompile's initial configuration, you will need to first disable the precompile and then activate it again with the new configuration. See each precompile's initial configuration in its `Initial Configuration` section under [Precompiles](#precompiles).

## AvalancheGo Chain Configs

As described in [this doc](/docs/nodes/configure/configs-flags#avalanche-l1-chain-configs), each blockchain of an Avalanche L1 can have its own custom configuration. If a blockchain's ID is `2ebCneCbwthjQ1rYT41nhd7M76Hc6YmosMAQrTFhBq8qeqh6tt`, the config file for that chain is located at `{chain-config-dir}/2ebCneCbwthjQ1rYT41nhd7M76Hc6YmosMAQrTFhBq8qeqh6tt/config.json`.

For blockchains created by or forked from Subnet-EVM, most [C-Chain configs](/docs/nodes/chain-configs/primary-network/c-chain) are applicable, except the [Avalanche Specific APIs](/docs/nodes/chain-configs/primary-network/c-chain#enabling-avalanche-specific-apis).

### Priority Regossip

A transaction is "regossiped" when the node does not find it in a block after `priority-regossip-frequency` (defaults to `1m`). By default, up to 16 transactions (max 1 per address) are regossiped to validators per minute.

Operators can use "priority regossip" to more aggressively regossip transactions for a set of important addresses (such as bridge relayers). To do so, update your [chain config](/docs/nodes/configure/configs-flags#avalanche-l1-chain-configs) with the following:

```json
{
  "priority-regossip-addresses": [""]
}
```

By default, up to 32 transactions from priority addresses (max 16 per address) are regossiped to validators per second.
You can override these defaults with the following config:

```json
{
  "priority-regossip-frequency": "1s",
  "priority-regossip-max-txs": 32,
  "priority-regossip-addresses": [""],
  "priority-regossip-txs-per-address": 16
}
```

### Fee Recipient[​](#fee-recipient "Direct link to heading")

This works together with [`allowFeeRecipients`](#setting-a-custom-fee-recipient) and the [RewardManager precompile](/docs/avalanche-l1s/precompiles/reward-manager) to specify where fees should be sent. With `allowFeeRecipients` enabled, validators can specify their addresses to collect fees.

```json
{
  "feeRecipient": ""
}
```

If `allowFeeRecipients` or the `RewardManager` precompile is enabled on the Avalanche L1 but a validator doesn't specify a `feeRecipient`, the fees will be burned in the blocks it produces.

### Archival Node Configuration[​](#archival-node-configuration "Direct link to heading")

Running an archival node that retains all historical state data requires specific configuration settings. Incorrect configuration can lead to historical data being pruned despite attempts to run in archival mode. Here are the key settings to configure:

#### Disabling Pruning

To retain all historical state, you must disable pruning. For EVM chains (like C-Chain or Subnet-EVM chains), add the following to your chain's `config.json`:

```json
{
  "pruning-enabled": false
}
```

#### State Sync Considerations

State sync allows nodes to sync quickly by downloading recent state without processing all historical blocks, which can leave historical data missing. For archival nodes, either disable state sync or ensure you start from genesis:

```json
{
  "state-sync-enabled": false
}
```

#### Transaction History Settings

To maintain access to all historical transactions, you might need to configure this additional setting (a value of `0` retains the transaction index for all blocks):

```json
{
  "transaction-history": 0
}
```

#### Database Considerations

Important: An already synced database cannot be fully converted to an archival node retroactively.
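Putting the archival settings above together, a minimal archival chain `config.json` would look like the following (a sketch; verify the keys against your Subnet-EVM version):

```json
{
  "pruning-enabled": false,
  "state-sync-enabled": false,
  "transaction-history": 0
}
```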
The cleanest and most reliable way to set up an archival node is to start from scratch with the proper configuration. When switching between database types (e.g., from LevelDB to PebbleDB), historical data does not carry over. If you need to change the database type for your archival node, you must start a fresh sync from genesis.

For information about all available configuration options and directory structures, see the [AvalancheGo Config Flags documentation](https://build.avax.network/docs/nodes/configure/configs-flags).

Network Upgrades: State Upgrades[​](#network-upgrades-state-upgrades "Direct link to heading")
----------------------------------------------------------------------------------------------

Subnet-EVM allows network operators to specify a modification to state that takes place at the beginning of the first block with a timestamp greater than or equal to the one specified in the configuration. This provides a path of last resort for updating non-upgradeable contracts via a network upgrade (for example, to fix issues when you are running your own blockchain), and should only be used as an alternative to forking `subnet-evm` and specifying the network upgrade in code.

Using a network upgrade to modify state is not part of normal operations of the EVM. You should ensure the modifications do not invalidate any of the assumptions of deployed contracts or cause incompatibilities with downstream infrastructure such as block explorers.

The timestamps for upgrades in `stateUpgrades` must be in increasing order. `stateUpgrades` can be specified along with `precompileUpgrades` or by itself. The following three state modifications are supported:

- `balanceChange`: adds a specified amount to the balance of a given account. This amount can be specified as hex or decimal and must be positive.
- `storage`: modifies the specified storage slots to the specified values. Keys and values must be 32 bytes specified in hex, with a `0x` prefix.
- `code`: modifies the code stored in the specified account. The code must be _only_ the runtime portion of the bytecode and must start with a `0x` prefix.

If modifying the code, _only_ the runtime portion of the bytecode should be provided in `upgrades.json`. Do not use the bytecode that would be used for deploying a new contract, as this includes the constructor code as well. Refer to your compiler's documentation for information on how to find the runtime portion of the contract you wish to modify.

The `upgrades.json` file shown below describes a network upgrade that will make the following state modifications at the first block after (or at) `March 8, 2023 1:30:00 AM GMT`:

- Sets the code for the account at `0x71562b71999873DB5b286dF957af199Ec94617F7`,
- Adds `100` wei to the balance of the account at `0xb794f5ea0ba39494ce839613fffba74279579268`,
- Sets the storage slot `0x1234` to the value `0x6666` for the account at `0xb794f5ea0ba39494ce839613fffba74279579268`.

```json
{
  "stateUpgrades": [
    {
      "blockTimestamp": 1678239000,
      "accounts": {
        "0x71562b71999873DB5b286dF957af199Ec94617F7": {
          "code": "0x6080604052348015600f57600080fd5b506004361060285760003560e01c80632e64cec114602d575b600080fd5b60336047565b604051603e91906067565b60405180910390f35b60008054905090565b6000819050919050565b6061816050565b82525050565b6000602082019050607a6000830184605a565b9291505056fea26469706673582212209421042a1fdabcfa2486fb80942da62c28e61fc8362a3f348c4a96a92bccc63c64736f6c63430008120033"
        },
        "0xb794f5ea0ba39494ce839613fffba74279579268": {
          "balanceChange": "0x64",
          "storage": {
            "0x0000000000000000000000000000000000000000000000000000000000001234": "0x0000000000000000000000000000000000000000000000000000000000006666"
          }
        }
      }
    }
  ]
}
```

Network Upgrades: Rescheduling Mandatory Network Upgrades[​](#network-upgrades-rescheduling-mandatory-network-upgrades "Direct link to heading")
------------------------------------------------------------------------------------------------------------------------------------------------

A network that misses a mandatory activation is typically unable to continue operating. Validators/nodes running the old version would process transactions differently than nodes running the new version and end up with different state. This results in a fork in the network, and new nodes are not able to sync with it. Normally this halts the chain and requires a hard fork to fix.

Starting with Subnet-EVM v0.6.3, you can reschedule mandatory activations like Durango via upgrade configs (`upgrade.json` in the chain directory). This is a very advanced operation and should be done only if your network cannot operate going forward. The reschedule operation should be coordinated across all nodes in your network.

Network upgrade overrides can be defined in the `upgrade.json` as follows:

```json
{
  "networkUpgradeOverrides": {
    "{networkUpgrade1}": timestamp1,
    "{networkUpgrade2}": timestamp2
  }
}
```

The `timestamp` should be a Unix timestamp in seconds. For instance, if you missed the Durango activation on Fuji (February 13th, 2024, 16:00 UTC) or Mainnet (March 6th, 2024, 16:00 UTC) and are having issues in your network, you can reschedule the Durango activation via upgrades. To do this, prepare a new `upgrade.json` including the following:

```json
{
  "networkUpgradeOverrides": {
    "durangoTimestamp": 1712419200
  }
}
```

This reschedules the Durango activation to 2024-04-06 16:00:00 UTC (one month later than the Mainnet activation). After preparing the `upgrade.json`, update the chain directory with the new `upgrade.json` and restart your nodes.
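To sanity-check an override value before rolling it out, you can derive the Unix timestamp for the target UTC time; a quick sketch in plain JavaScript (note that `Date.UTC` takes zero-based months):

```javascript
// Unix timestamp (seconds) for 2024-04-06 16:00:00 UTC.
// Date.UTC uses zero-based months, so 3 = April; it returns milliseconds.
const durangoOverride = Date.UTC(2024, 3, 6, 16, 0, 0) / 1000;
console.log(durangoOverride); // 1712419200
```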
You should see logs similar to the following:

```bash
INFO [03-22|14:04:48.284] github.com/ava-labs/subnet-evm/plugin/evm/vm.go:367: Applying network upgrade overrides overrides="{\"durangoTimestamp\":1712419200}"
...
INFO [03-22|14:04:48.288] github.com/ava-labs/subnet-evm/core/blockchain.go:335: Avalanche Upgrades (timestamp based):
INFO [03-22|14:04:48.288] github.com/ava-labs/subnet-evm/core/blockchain.go:335:  - SubnetEVM Timestamp: @0 (https://github.com/ava-labs/avalanchego/releases/tag/v1.10.0)
INFO [03-22|14:04:48.288] github.com/ava-labs/subnet-evm/core/blockchain.go:335:  - Durango Timestamp: @1712419200 (https://github.com/ava-labs/avalanchego/releases/tag/v1.11.0)
...
```

This means your node is locked and loaded for the new Durango activation. After the new timestamp is reached, your node will activate Durango and start processing transactions with the new Durango features.

Nodes running a non-compatible version (a pre-Durango version after the Durango activation) should be updated to the most recent version of Subnet-EVM (v0.6.3+) and must have the new `upgrade.json` to reschedule the Durango activation. Running a new version without the rescheduling `upgrade.json` might create a fork in the network. All network nodes, even ones correctly upgraded to Durango and running the correct version since the Durango activation, should be restarted with the new `upgrade.json` to reschedule the Durango activation. This is a network-wide operation and should be coordinated with all network nodes.

# Introduction (/docs/avalanche-l1s/evm-configuration/evm-l1-customization)

---
title: Introduction
description: Learn how to customize the Ethereum Virtual Machine with EVM and Precompiles.
root: true
---

Welcome to the EVM configuration guide. This documentation explores how to extend and customize your Avalanche L1 using **EVM** and **precompiles**.
Building upon the Validator Manager capabilities we discussed in the previous section, we'll now dive into other powerful customization features available in EVM.

## Overview of EVM

EVM is Avalanche's customized version of the Ethereum Virtual Machine, tailored to run on Avalanche L1s. It allows developers to deploy Solidity smart contracts with enhanced capabilities, benefiting from Avalanche's high throughput and low latency. EVM enables more flexibility and performance optimizations compared to the standard EVM.

Beyond the Validator Manager functionality we've covered, EVM provides additional configuration options through precompiles, allowing you to extend your L1's capabilities even further.

## Genesis Configuration

Each blockchain has some genesis state when it's created, and each Virtual Machine defines the format and semantics of its genesis data. The genesis configuration is crucial for setting up your Avalanche L1's initial state and behavior.

### Chain Configuration

The chain configuration section in your genesis file defines fundamental parameters of your blockchain:

```json
{
  "config": {
    "chainId": 43214,
    "homesteadBlock": 0,
    "eip150Block": 0,
    "eip150Hash": "0x2086799aeebeae135c246c65021c82b4e15a2c451340993aacfd2751886514f0",
    "eip155Block": 0,
    "eip158Block": 0,
    "byzantiumBlock": 0,
    "constantinopleBlock": 0,
    "petersburgBlock": 0,
    "istanbulBlock": 0,
    "muirGlacierBlock": 0
  }
}
```

#### Chain ID

`chainID`: Denotes the ChainID of the chain to be created. It must be picked carefully, since a conflict with another chain can cause issues. One suggestion is to check [chainlist.org](https://chainlist.org/) to avoid ID collisions, and to reserve and publish your ChainID properly. You can use the `eth_getChainConfig` RPC call to get the current chain config. See [here](/docs/rpcs/subnet-evm#eth_getchainconfig) for more info.

#### Hard Forks

The following parameters define EVM hard fork activation times.
These should be handled with care, as changes may cause compatibility issues:

- `homesteadBlock`
- `eip150Block`
- `eip150Hash`
- `eip155Block`
- `eip158Block`
- `byzantiumBlock`
- `constantinopleBlock`
- `petersburgBlock`
- `istanbulBlock`
- `muirGlacierBlock`

### Genesis Block Header

The genesis block header is defined by several parameters that set the initial state of your blockchain:

```json
{
  "nonce": "0x0",
  "timestamp": "0x0",
  "extraData": "0x00",
  "difficulty": "0x0",
  "mixHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
  "coinbase": "0x0000000000000000000000000000000000000000",
  "number": "0x0",
  "gasUsed": "0x0",
  "parentHash": "0x0000000000000000000000000000000000000000000000000000000000000000"
}
```

These parameters have specific roles:

- `nonce`, `mixHash`, `difficulty`: These are remnants from Proof of Work systems. For Avalanche, they don't play any relevant role and should be left as their default values.
- `timestamp`: The creation timestamp of the genesis block (commonly set to `0x0`).
- `extraData`: Optional extra data field (commonly set to `0x`).
- `coinbase`: The address of block producers (usually set to the zero address for genesis).
- `parentHash`: The hash of the parent block (set to the zero hash for genesis).
- `gasUsed`: Amount of gas used by the genesis block (usually `0x0`).
- `number`: The block number (must be `0x0` for genesis).

## Precompiles

Precompiles are specialized smart contracts that execute native Go code within the EVM context. They act as a bridge between Solidity and lower-level functionalities, allowing for performance optimizations and access to features not available in Solidity alone.

### Default Precompiles in EVM

EVM comes with a set of default precompiles that extend the EVM's functionality:

- **[AllowList Interface](/docs/avalanche-l1s/precompiles/allowlist-interface)**: Interface that manages access control by allowing or restricting specific addresses, inherited by all precompiles.
- **[Deployer AllowList](/docs/avalanche-l1s/precompiles/deployer-allowlist)**: Restricts which addresses can deploy smart contracts. - **[Transaction AllowList](/docs/avalanche-l1s/precompiles/transaction-allowlist)**: Controls which addresses can submit transactions. - **[Native Minter](/docs/avalanche-l1s/precompiles/native-minter)**: Manages the minting and burning of native tokens. - **[Fee Manager](/docs/avalanche-l1s/precompiles/fee-manager)**: Controls gas fee parameters and fee markets. - **[Reward Manager](/docs/avalanche-l1s/precompiles/reward-manager)**: Handles the distribution of staking rewards to validators. - **[Warp Messenger](/docs/avalanche-l1s/precompiles/warp-messenger)**: Enables cross-chain communication between Avalanche L1s. ### Precompile Addresses and Configuration If a precompile is enabled within the `genesis.json` using the respective `ConfigKey`, you can interact with the precompile using Foundry or other tools such as Remix. Below are the addresses and `ConfigKey` values of default precompiles available in EVM. The address and `ConfigKey` [are defined in the `module.go` of each precompile contract](https://github.com/ava-labs/subnet-evm/tree/master/precompile/contracts). 
| Precompile | ConfigKey | Address | | ---------------------- | --------------------------------- | -------------------------------------------- | | [Deployer AllowList](/docs/avalanche-l1s/precompiles/deployer-allowlist) | `contractDeployerAllowListConfig` | `0x0200000000000000000000000000000000000000` | | [Native Minter](/docs/avalanche-l1s/precompiles/native-minter) | `contractNativeMinterConfig` | `0x0200000000000000000000000000000000000001` | | [Transaction AllowList](/docs/avalanche-l1s/precompiles/transaction-allowlist) | `txAllowListConfig` | `0x0200000000000000000000000000000000000002` | | [Fee Manager](/docs/avalanche-l1s/precompiles/fee-manager) | `feeManagerConfig` | `0x0200000000000000000000000000000000000003` | | [Reward Manager](/docs/avalanche-l1s/precompiles/reward-manager) | `rewardManagerConfig` | `0x0200000000000000000000000000000000000004` | | [Warp Messenger](/docs/avalanche-l1s/precompiles/warp-messenger) | `warpConfig` | `0x0200000000000000000000000000000000000005` | #### Example Interaction For example, if `contractDeployerAllowListConfig` is enabled in the `genesis.json`: ```json title="genesis.json" "contractDeployerAllowListConfig": { "adminAddresses": [ // Addresses that can manage (add/remove) enabled addresses. They are also enabled themselves for contract deployment. 
"0x4f9e12d407b18ad1e96e4f139ef1c144f4058a4e",
    "0x4b9e5977a46307dd93674762f9ddbe94fb054def"
  ],
  "blockTimestamp": 0,
  "enabledAddresses": [
    "0x09c6fa19dd5d41ec6d0f4ca6f923ec3d941cc569" // Addresses that can only deploy contracts
  ]
},
```

We can then add an `Enabled` address to the Deployer AllowList by interacting with the `IAllowList` interface at `0x0200000000000000000000000000000000000000`, where `$ADDRESS_TO_ENABLE` holds the address you want to allow to deploy contracts:

```bash
cast send 0x0200000000000000000000000000000000000000 "setEnabled(address)" $ADDRESS_TO_ENABLE --rpc-url $MY_L1_RPC --private-key $ADMIN_PRIVATE_KEY
```

# WarpMessenger Precompile - Technical Details (/docs/avalanche-l1s/evm-configuration/warpmessenger)

---
title: "WarpMessenger Precompile - Technical Details"
description: "Technical documentation for the WarpMessenger precompile implementation in subnet-evm."
edit_url: https://github.com/ava-labs/avalanchego/edit/master/graft/subnet-evm/precompile/contracts/warp/README.md
---

# Integrating Avalanche Warp Messaging into the EVM

Avalanche Warp Messaging offers a basic primitive to enable Cross-L1 communication on the Avalanche Network. It is intended to allow communication between arbitrary Custom Virtual Machines (including, but not limited to, Subnet-EVM and Coreth).

## How does Avalanche Warp Messaging Work?

Avalanche Warp Messaging uses BLS Multi-Signatures with Public-Key Aggregation, where every Avalanche validator registers a public key alongside its NodeID on the Avalanche P-Chain.

Every node tracking an Avalanche L1 has read access to the Avalanche P-Chain. This provides weighted sets of BLS Public Keys that correspond to the validator sets of each L1 on the Avalanche Network.

Avalanche Warp Messaging provides a basic primitive for signing and verifying messages between L1s: the receiving network can verify whether an aggregation of signatures from a set of source L1 validators represents a threshold of stake large enough for the receiving network to process the message.
For more details on Avalanche Warp Messaging, see the AvalancheGo [Warp README](https://github.com/ava-labs/avalanchego/blob/master/vms/platformvm/warp/README.md).

### Flow of Sending / Receiving a Warp Message within the EVM

The Avalanche Warp Precompile enables this flow to send a message from blockchain A to blockchain B:

1. Call the Warp Precompile `sendWarpMessage` function with the arguments for the `UnsignedMessage`
2. The Warp Precompile emits an event / log containing the `UnsignedMessage` specified by the caller of `sendWarpMessage`
3. The network accepts the block containing the `UnsignedMessage` in the log, so that validators are willing to sign the message
4. An off-chain relayer queries the validators for their signatures of the message and aggregates the signatures to create a `SignedMessage`
5. The off-chain relayer encodes the `SignedMessage` as the [predicate](#predicate-encoding) in the AccessList of a transaction to deliver on blockchain B
6. The transaction is delivered on blockchain B, the signature is verified prior to executing the block, and the message is accessible via the Warp Precompile's `getVerifiedWarpMessage` during the execution of that transaction

### Warp Precompile

The Warp Precompile is broken down into three functions defined in the Solidity interface file [IWarpMessenger.sol](https://github.com/ava-labs/avalanchego/blob/master/graft/subnet-evm/precompile/contracts/warp/warpbindings/IWarpMessenger.sol).

#### sendWarpMessage

`sendWarpMessage` is used to send a verifiable message. Calling this function results in sending a message with the following contents:

- `SourceChainID` - the blockchainID of the source chain on the Avalanche P-Chain
- `SourceAddress` - the `msg.sender` that calls `sendWarpMessage`, encoded as a 32 byte value
- `Payload` - the `payload` argument specified in the call to `sendWarpMessage`, emitted as the unindexed data of the resulting log

Calling this function will issue a `SendWarpMessage` event from the Warp Precompile.
Since the EVM limits the number of topics to 4, including the EventID, this message includes only the topics expected to be most useful for filtering messages emitted by the Warp Precompile. Specifically, the `payload` is not emitted as a topic because each topic must be encoded as a hash. Therefore, we opt to take advantage of each possible topic to maximize the possible filtering for emitted Warp Messages. Additionally, the `SourceChainID` is excluded because anyone parsing the chain can be expected to already know the blockchainID. Therefore, the `SendWarpMessage` event includes the indexable attributes:

- `sender`
- The `messageID` of the unsigned message (sha256 of the unsigned message)

The actual `message` is the entire [Avalanche Warp Unsigned Message](https://github.com/ava-labs/avalanchego/blob/master/vms/platformvm/warp/unsigned_message.go#L14) including an [AddressedCall](https://github.com/ava-labs/avalanchego/tree/master/vms/platformvm/warp/payload#addressedcall). The unsigned message is emitted as the unindexed data in the log.

#### getVerifiedWarpMessage

`getVerifiedWarpMessage` is used to read the contents of the delivered Avalanche Warp Message into the expected format. It returns the message, along with a boolean indicating whether a message is present. To use this function, the transaction must include the signed Avalanche Warp Message encoded in the [predicate](#predicate-encoding) of the transaction. Prior to executing a block, the VM iterates through transactions and pre-verifies all predicates. If a transaction's predicate is invalid, the transaction is considered invalid to include in the block and is dropped. This leads to the following advantages:

1. The EVM execution does not need to verify the Warp Message at runtime (no signature verification or external calls to the P-Chain)
2.
The EVM can deterministically re-execute and re-verify blocks assuming the predicate was verified by the network (e.g., in bootstrapping) This pre-verification is performed using the ProposerVM Block header during [block verification](https://github.com/ava-labs/avalanchego/blob/master/graft/subnet-evm/plugin/evm/wrapped_block.go) & [block building](https://github.com/ava-labs/avalanchego/blob/master/graft/subnet-evm/miner/worker.go). #### getBlockchainID `getBlockchainID` returns the blockchainID of the blockchain that the VM is running on. This is different from the conventional Ethereum ChainID registered to [ChainList](https://chainlist.org/). The `sourceChainID` in Avalanche refers to the txID that created the blockchain on the Avalanche P-Chain ([docs](https://build.avax.network/docs/cross-chain/avalanche-warp-messaging/deep-dive#icm-serialization)). ### Predicate Encoding Avalanche Warp Messages are encoded as a signed Avalanche [Warp Message](https://github.com/ava-labs/avalanchego/blob/master/vms/platformvm/warp/message.go) where the [UnsignedMessage](https://github.com/ava-labs/avalanchego/blob/master/vms/platformvm/warp/unsigned_message.go)'s payload includes an [AddressedPayload](https://github.com/ava-labs/avalanchego/blob/master/vms/platformvm/warp/payload/payload.go). Since the predicate is encoded into the [Transaction Access List](https://eips.ethereum.org/EIPS/eip-2930), it is packed into 32 byte hashes intended to declare storage slots that should be pre-warmed into the cache prior to transaction execution. Therefore, we use the [`predicate`](https://github.com/ava-labs/avalanchego/tree/master/vms/evm/predicate) package to encode the actual byte slice of size N into the access list. ### Performance Optimization: Primary Network to Avalanche L1 The Primary Network has a large validator set compared to most Subnets and L1s, which makes Warp signature collection and verification from the entire Primary Network validator set costly. 
All Subnets and L1s track at least one blockchain of the Primary Network, so we can optimize this by using the validator set of the receiving L1, rather than the Primary Network's, for certain Warp messages.

#### Subnets

Recall that Avalanche Subnet validators must also validate the Primary Network, so they track all of the blockchains in the Primary Network (the X, C, and P-Chains). When an Avalanche Subnet receives a message from a blockchain on the Primary Network, we use the validator set of the receiving Subnet instead of the entire network when validating the message.

Sending messages from the X, C, or P-Chain remains unchanged. However, when the Subnet receives the message, it changes the semantics to the following:

1. Read the `SourceChainID` of the signed message
2. Look up the `SubnetID` that validates `SourceChainID`. In this case it will be the Primary Network's `SubnetID`
3. Look up the validator set of the Subnet (instead of the Primary Network) and the registered BLS Public Keys of the Subnet validators at the P-Chain height specified by the ProposerVM header
4. Continue Warp Message verification using the validator set of the Subnet instead of the Primary Network

This means that Primary Network to Subnet communication only requires a threshold of stake on the receiving Subnet to sign the message, instead of a threshold of stake for the entire Primary Network. Since the security of the Subnet is provided by trust in its validator set, requiring a threshold of stake from the receiving Subnet's validator set instead of the whole Primary Network does not meaningfully change the security of the receiving network.

Note: this special case is ONLY applied during Warp Message verification. The message sent by the Primary Network will still contain the blockchainID of the Primary Network chain that sent the message as the sourceChainID, and signatures will be served by querying the source chain directly.
#### L1s Avalanche L1s are only required to sync the P-Chain, but are not required to validate the Primary Network. Therefore, **for L1s, this optimization only applies to Warp messages sent by the P-Chain.** The rest of the description of this optimization in the above section applies to L1s. Note that **in order to properly verify messages from the C-Chain and X-Chain, the Warp precompile must be configured with `requirePrimaryNetworkSigners` set to `true`**. Otherwise, we will attempt to verify the message signature against the receiving L1's validator set, which is not required to track the C-Chain or X-Chain, and therefore will not in general be able to produce a valid Warp message. ## Design Considerations ### Re-Processing Historical Blocks Avalanche Warp Messaging depends on the Avalanche P-Chain state at the P-Chain height specified by the ProposerVM block header. Verifying a message requires looking up the validator set of the source L1 on the P-Chain. To support this, Avalanche Warp Messaging uses the ProposerVM header, which includes the P-Chain height it was issued at as the canonical point to lookup the source L1's validator set. This means verifying the Warp Message and therefore the state transition on a block depends on state that is external to the blockchain itself: the P-Chain. The Avalanche P-Chain tracks only its current state and reverse diff layers (reversing the changes from past blocks) in order to re-calculate the validator set at a historical height. This means calculating a very old validator set that is used to verify a Warp Message in an old block may become prohibitively expensive. Therefore, we need a heuristic to ensure that the network can correctly re-process old blocks (note: re-processing old blocks is a requirement to perform bootstrapping and is used in some VMs to serve or verify historical data). 
As a result, we require that the block itself provides a deterministic hint which determines which Avalanche Warp Messages were considered valid/invalid during the block's execution. This ensures that we can always re-process blocks and use the hint to decide whether an Avalanche Warp Message should be treated as valid/invalid even after the P-Chain state that was used at the original execution time may no longer support fast lookups. To provide that hint, we've explored two designs: 1. Include a predicate in the transaction to ensure any referenced message is valid 2. Append the results of checking whether a Warp Message is valid/invalid to the block data itself The current implementation uses option (1). The original reason for this was that the notion of predicates for precompiles was designed with Shared Memory in mind. In the case of shared memory, there is no canonical "P-Chain height" in the block which determines whether or not Avalanche Warp Messages are valid. Instead, the VM interprets a shared memory import operation as valid as soon as the UTXO is available in shared memory. This means that if it were up to the block producer to staple the valid/invalid results of whether or not an attempted atomic operation should be treated as valid, a byzantine block producer could arbitrarily report that such atomic operations were invalid and cause a griefing attack to burn the gas of users that attempted to perform an import. Therefore, a transaction specified predicate is required to implement the shared memory precompile to prevent such a griefing attack. In contrast, Avalanche Warp Messages are validated within the context of an exact P-Chain height. Therefore, if a block producer attempted to lie about the validity of such a message, the network would interpret that block as invalid. ### Guarantees Offered by Warp Precompile vs. 
Built on Top

#### Guarantees Offered by Warp Precompile

The Warp Precompile was designed with the intention of minimizing the trusted computing base for the VM as much as possible. Therefore, it makes several tradeoffs which encourage users to use protocols built ON TOP of the Warp Precompile itself as opposed to directly using the Warp Precompile.

The Warp Precompile itself provides ONLY the following ability:

- Emit a verifiable message from (Address A, Blockchain A) to (Address B, Blockchain B) that can be verified by the destination chain

#### Explicitly Not Provided / Built on Top

The Warp Precompile itself does not provide any guarantees of:

- Eventual message delivery (may require re-send on blockchain A and additional assumptions about off-chain relayers and chain progress)
- Ordering of messages (requires ordering provided a layer above)
- Replay protection (requires replay protection provided a layer above)

# Data API vs RPC (/docs/api-reference/data-api/data-vs-rpc)

---
title: Data API vs RPC
description: Comparison of the Data API and RPC methods
icon: Server
---

In the rapidly evolving world of Web3 development, efficiently retrieving token balances for a user's address is a fundamental requirement. Whether you're building DeFi platforms, wallets, analytics tools, or exchanges, displaying accurate token balances is crucial for user engagement and trust. A typical use case involves showing a user's token portfolio in a wallet application; in this case, the portfolio holds sAVAX and USDC.

Developers generally have two options to fetch this data:

1. **Using RPC methods to index blockchain data on their own**
2. **Leveraging an indexer provider like the Data API**

While both methods aim to achieve the same goal, the Data API offers a more efficient, scalable, and developer-friendly solution. This article delves into why using the Data API is better than relying on traditional RPC (Remote Procedure Call) methods.

### What are RPC methods and their challenges?
Remote Procedure Call (RPC) methods allow developers to interact directly with blockchain nodes. One of their key advantages is that they are standardized and universally understood by blockchain developers across different platforms. With RPC, you can perform tasks such as querying data, submitting transactions, and interacting with smart contracts. These methods are typically low-level and synchronous, meaning they require a deep understanding of the blockchain’s architecture and specific command structures. You can refer to the [official documentation](https://ethereum.org/en/developers/docs/apis/json-rpc/) to gain a more comprehensive understanding of the JSON-RPC API. Here’s an example using the `eth_getBalance` method to retrieve the native balance of a wallet: ```bash curl --location 'https://api.avax.network/ext/bc/C/rpc' \ --header 'Content-Type: application/json' \ --data '{"method":"eth_getBalance","params":["0x8ae323046633A07FB162043f28Cea39FFc23B50A", "latest"],"id":1,"jsonrpc":"2.0"}' ``` This call returns the following response: ```json { "jsonrpc": "2.0", "id": 1, "result": "0x284476254bc5d594" } ``` The balance in this wallet is 2.9016 AVAX. However, despite the wallet holding multiple tokens such as USDC, the `eth_getBalance` method only returns the AVAX amount and it does so in Wei and in hexadecimal format. This is not particularly human-readable, adding to the challenge for developers who need to manually convert the balance to a more understandable format. #### No direct RPC methods to retrieve token balances Despite their utility, RPC methods come with significant limitations when it comes to retrieving detailed token and transaction data. Currently, RPC methods do not provide direct solutions for the following: * **Listing all tokens held by a wallet**: There is no RPC method that provides a complete list of ERC-20 tokens owned by a wallet. 
* **Retrieving all transactions for a wallet**: There is no direct method for fetching all transactions associated with a wallet.
* **Getting ERC-20/721/1155 token balances**: The `eth_getBalance` method only returns the balance of the wallet's native token (such as AVAX on Avalanche) and cannot be used to retrieve ERC-20/721/1155 token balances.

To achieve these tasks using RPC methods alone, you would need to:

* **Query every block for transaction logs**: Scan the entire blockchain, which is resource-intensive and impractical.
* **Parse transaction logs**: Identify and extract ERC-20 token transfer events from each transaction.
* **Aggregate data**: Collect and process this data to compute balances and transaction histories.

#### Manual blockchain indexing is difficult and costly

Using RPC methods to fetch token balances involves an arduous process:

1. You must connect to a node and subscribe to new block events.
2. For each block, parse every transaction to identify ERC-20 token transfers involving the user's address.
3. Extract contract addresses and other relevant data from the parsed transactions.
4. Compute balances by processing transfer events.
5. Store the processed data in a database for quick retrieval and aggregation.

#### Why this is difficult:

* **Resource-Intensive**: Requires significant computational power and storage to process and store blockchain data.
* **Time-Consuming**: Processing millions of blocks and transactions can take an enormous amount of time.
* **Complexity**: Handling edge cases like contract upgrades, proxy contracts, and non-standard implementations adds layers of complexity.
* **Maintenance**: Keeping the indexed data up to date necessitates continuous synchronization with new blocks being added to the blockchain.
* **High Costs**: Servers, databases, and network bandwidth all carry ongoing expense.

### The Data API Advantage

The Data API provides a streamlined, efficient, and scalable solution for fetching token balances.
Here's why it's the best choice. With a single API call, you can retrieve all ERC-20 token balances for a user's address:

```javascript
avalancheSDK.data.evm.balances.listErc20Balances({
  address: "0xYourAddress",
});
```

Sample Response:

```json
{
  "erc20TokenBalances": [
    {
      "ercType": "ERC-20",
      "chainId": "43114",
      "address": "0xB97EF9Ef8734C71904D8002F8b6Bc66Dd9c48a6E",
      "name": "USD Coin",
      "symbol": "USDC",
      "decimals": 6,
      "price": { "value": 1.0, "currencyCode": "usd" },
      "balance": "15000000",
      "balanceValue": { "currencyCode": "usd", "value": 15.0 },
      "logoUri": "https://images.ctfassets.net/gcj8jwzm6086/e50058c1-2296-4e7e-91ea-83eb03db95ee/8db2a492ce64564c96de87c05a3756fd/43114-0xB97EF9Ef8734C71904D8002F8b6Bc66Dd9c48a6E.png"
    }
    // Additional tokens...
  ]
}
```

As you can see, with a single call the API returns an array of token balances for all the wallet's tokens, including:

* **Token metadata**: Contract address, name, symbol, decimals.
* **Balance information**: Token balance in both hexadecimal and decimal formats; balances of native assets like ETH or AVAX can be retrieved as well.
* **Price data**: Current value in USD or other supported currencies, saving you the effort of integrating another API.
* **Visual assets**: Token logo URI for better user interface integration.

If you're building a wallet, DeFi app, or any application that requires displaying balances, transaction history, or smart contract interactions, relying solely on RPC methods can be challenging. Just as there's no direct RPC method to retrieve token balances, there's also no simple way to fetch all transactions associated with a wallet, especially for ERC-20, ERC-721, or ERC-1155 token transfers. However, by using the Data API, you can retrieve all token transfers for a given wallet **with a single API call**, making the process much more efficient. This approach simplifies tracking and displaying wallet activity without the need to manually scan the entire blockchain.
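Returning to the balances response above: the `balance` field comes back as a raw integer string, just as `eth_getBalance` returns raw hexadecimal wei, so both still need to be scaled by the token's decimals before display. A minimal sketch of that conversion in plain JavaScript (the helper name is our own, not part of any SDK):

```javascript
// Scale a raw integer balance string by the token's decimals, e.g. the
// "15000000" USDC balance (6 decimals) from the sample response above.
function formatUnits(raw, decimals) {
  const value = BigInt(raw);
  const base = 10n ** BigInt(decimals);
  // Zero-pad the remainder so the fractional digits keep their place value.
  const frac = (value % base).toString().padStart(decimals, "0");
  return `${value / base}.${frac}`;
}

// Works for decimal strings from the Data API:
console.log(formatUnits("15000000", 6)); // "15.000000"

// And for the hex wei string returned by eth_getBalance, since BigInt
// accepts 0x-prefixed input (native AVAX uses 18 decimals):
console.log(formatUnits("0x284476254bc5d594", 18)); // "2.901573962490566036"
```

Trimming or rounding the fractional part for display (e.g. to "2.9016 AVAX") is then a simple string operation.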
Below are two examples that demonstrate the power of the Data API: in the first, it returns all ERC transfers, including ERC-20, ERC-721, and ERC-1155 tokens, and in the second, it shows all internal transactions, such as when one contract interacts with another. [Lists ERC transfers](/data-api/evm-transactions/list-erc-transfers) for an ERC-20, ERC-721, or ERC-1155 contract address. ```javascript theme={null} import { Avalanche } from "@avalanche-sdk/chainkit"; const avalancheSDK = new Avalanche({ apiKey: "", chainId: "43114", network: "mainnet", }); async function run() { const result = await avalancheSDK.data.evm.transactions.listTransfers({ startBlock: 6479329, endBlock: 6479330, pageSize: 10, address: "0x71C7656EC7ab88b098defB751B7401B5f6d8976F", }); for await (const page of result) { // Handle the page console.log(page); } } run(); ``` Example response ```json theme={null} { "nextPageToken": "", "transfers": [ { "blockNumber": "339", "blockTimestamp": 1648672486, "blockHash": "0x17533aeb5193378b9ff441d61728e7a2ebaf10f61fd5310759451627dfca2e7c", "txHash": "0x3e9303f81be00b4af28515dab7b914bf3dbff209ea10e7071fa24d4af0a112d4", "from": { "name": "Wrapped AVAX", "symbol": "WAVAX", "decimals": 18, "logoUri": "https://images.ctfassets.net/gcj8jwzm6086/5VHupNKwnDYJvqMENeV7iJ/fdd6326b7a82c8388e4ee9d4be7062d4/avalanche-avax-logo.svg", "address": "0x71C7656EC7ab88b098defB751B7401B5f6d8976F" }, "to": { "name": "Wrapped AVAX", "symbol": "WAVAX", "decimals": 18, "logoUri": "https://images.ctfassets.net/gcj8jwzm6086/5VHupNKwnDYJvqMENeV7iJ/fdd6326b7a82c8388e4ee9d4be7062d4/avalanche-avax-logo.svg", "address": "0x71C7656EC7ab88b098defB751B7401B5f6d8976F" }, "logIndex": 123, "value": "10000000000000000000", "erc20Token": { "address": "0x71C7656EC7ab88b098defB751B7401B5f6d8976F", "name": "Wrapped AVAX", "symbol": "WAVAX", "decimals": 18, "logoUri": "https://images.ctfassets.net/gcj8jwzm6086/5VHupNKwnDYJvqMENeV7iJ/fdd6326b7a82c8388e4ee9d4be7062d4/avalanche-avax-logo.svg", 
"ercType": "ERC-20", "price": { "currencyCode": "usd", "value": "42.42" } } } ] } ``` [Returns a list of internal transactions](/data-api/evm-transactions/list-internal-transactions) for an address and chain. Filterable by block range. ```javascript theme={null} import { Avalanche } from "@avalanche-sdk/chainkit"; const avalancheSDK = new Avalanche({ apiKey: "", chainId: "43114", network: "mainnet", }); async function run() { const result = await avalancheSDK.data.evm.transactions.listInternalTransactions({ startBlock: 6479329, endBlock: 6479330, pageSize: 10, address: "0x71C7656EC7ab88b098defB751B7401B5f6d8976F", }); for await (const page of result) { // Handle the page console.log(page); } } run(); ``` Example response ```json theme={null} { "nextPageToken": "", "transactions": [ { "blockNumber": "339", "blockTimestamp": 1648672486, "blockHash": "0x17533aeb5193378b9ff441d61728e7a2ebaf10f61fd5310759451627dfca2e7c", "txHash": "0x3e9303f81be00b4af28515dab7b914bf3dbff209ea10e7071fa24d4af0a112d4", "from": { "name": "Wrapped AVAX", "symbol": "WAVAX", "decimals": 18, "logoUri": "https://images.ctfassets.net/gcj8jwzm6086/5VHupNKwnDYJvqMENeV7iJ/fdd6326b7a82c8388e4ee9d4be7062d4/avalanche-avax-logo.svg", "address": "0x71C7656EC7ab88b098defB751B7401B5f6d8976F" }, "to": { "name": "Wrapped AVAX", "symbol": "WAVAX", "decimals": 18, "logoUri": "https://images.ctfassets.net/gcj8jwzm6086/5VHupNKwnDYJvqMENeV7iJ/fdd6326b7a82c8388e4ee9d4be7062d4/avalanche-avax-logo.svg", "address": "0x71C7656EC7ab88b098defB751B7401B5f6d8976F" }, "internalTxType": "UNKNOWN", "value": "10000000000000000000", "isReverted": true, "gasUsed": "", "gasLimit": "" } ] } ``` ### Conclusion Using the Data API over traditional RPC methods for fetching token balances offers significant advantages: * **Efficiency**: Retrieve all necessary information in a single API call. * **Simplicity**: Eliminates complex data processing and reduces development time. 
* **Scalability**: Handles large volumes of data efficiently, suitable for real-time applications.
* **Comprehensive Data**: Provides enriched information, including token prices and logos.
* **Reliability**: Ensures data accuracy and consistency without the need for extensive error handling.

For developers building Web3 applications, leveraging the Data API is the smarter choice. It not only simplifies your codebase but also enhances the user experience by providing accurate and timely data. If you're building cutting-edge Web3 applications, this API is the key to improving your workflow and performance. Whether you're developing DeFi solutions, wallets, or analytics platforms, take your project to the next level. [Start today with the Data API](/data-api/getting-started) and experience the difference!

# Getting Started (/docs/api-reference/data-api/getting-started)

---
title: Getting Started
description: Getting Started with the Data API
icon: Book
---

To begin, create your free account by visiting the [Builder Hub Console](https://build.avax.network/login?callbackUrl=%2Fconsole%2Futilities%2Fdata-api-keys). Once the account is created:

1. Navigate to [**Data API Keys**](https://build.avax.network/console/utilities/data-api-keys)
2. Click on **Create API Key**
3. Set an alias and click on **Create**
4. Copy the key's value

Always keep your API keys in a secure environment. Never expose them in public repositories, such as GitHub, or share them with unauthorized individuals. Compromised API keys can lead to unauthorized access and potential misuse of your account.
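One way to follow that advice in code is to load the key from an environment variable rather than hard-coding it. The variable name `DATA_API_KEY` and the helper below are our own illustration; the header name is the one used throughout these docs:

```javascript
// Build the request headers for the Data API, reading the key from the
// environment so it never ends up committed to a repository.
function buildHeaders(apiKey) {
  if (!apiKey) {
    throw new Error("Set the DATA_API_KEY environment variable first");
  }
  return {
    accept: "application/json",
    "x-glacier-api-key": apiKey,
  };
}

// Fall back to a placeholder here only so the snippet runs standalone;
// in a real app, let the missing variable fail loudly.
const headers = buildHeaders(process.env.DATA_API_KEY ?? "<your-api-key>");
console.log(Object.keys(headers));
```

Pass `headers` to whatever HTTP client you use (`fetch`, axios, etc.) when calling the API.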
With your API key you can start making queries. For example, to get the latest blocks on the C-Chain (chain ID 43114):

```bash theme={null}
curl --location 'https://data-api.avax.network/v1/chains/43114/blocks' \
  --header 'accept: application/json' \
  --header 'x-glacier-api-key: '
```

And you should see something like this:

```json theme={null}
{
  "blocks": [
    {
      "blockNumber": "49889407",
      "blockTimestamp": 1724990250,
      "blockHash": "0xd34becc82943e3e49048cdd3f75b80a87e44eb3aed6b87cc06867a7c3b9ee213",
      "txCount": 1,
      "baseFee": "25000000000",
      "gasUsed": "53608",
      "gasLimit": "15000000",
      "gasCost": "0",
      "parentHash": "0xf4917efb4628a1d8f4d101b3d15bce9826e62ef2c93c3e16ee898d27cf02f3d4",
      "feesSpent": "1435117553916960",
      "cumulativeTransactions": "500325352"
    },
    {
      "blockNumber": "49889406",
      "blockTimestamp": 1724990248,
      "blockHash": "0xf4917efb4628a1d8f4d101b3d15bce9826e62ef2c93c3e16ee898d27cf02f3d4",
      "txCount": 2,
      "baseFee": "25000000000",
      "gasUsed": "169050",
      "gasLimit": "15000000",
      "gasCost": "0",
      "parentHash": "0x2a54f142fa3acee92a839b071bb6c7cca7abc2a797cf4aac68b07f79406ac0cb",
      "feesSpent": "4226250000000000",
      "cumulativeTransactions": "500325351"
    },
    {
      "blockNumber": "49889405",
      "blockTimestamp": 1724990246,
      "blockHash": "0x2a54f142fa3acee92a839b071bb6c7cca7abc2a797cf4aac68b07f79406ac0cb",
      "txCount": 4,
      "baseFee": "25000000000",
      "gasUsed": "618638",
      "gasLimit": "15000000",
      "gasCost": "0",
      "parentHash": "0x0cda1bb5c86e790976c9330c9fc26e241a705afbad11a4caa44df1c81058451d",
      "feesSpent": "16763932426044724",
      "cumulativeTransactions": "500325349"
    },
    {
      "blockNumber": "49889404",
      "blockTimestamp": 1724990244,
      "blockHash": "0x0cda1bb5c86e790976c9330c9fc26e241a705afbad11a4caa44df1c81058451d",
      "txCount": 3,
      "baseFee": "25000000000",
      "gasUsed": "254544",
      "gasLimit": "15000000",
      "gasCost": "0",
      "parentHash": "0x60e55dd9eacc095c07f50a73e02d81341c406584f7abbf5d10d938776a4c893c",
      "feesSpent": "6984642298020000",
      "cumulativeTransactions": "500325345"
    },
    {
      "blockNumber": "49889403",
"blockTimestamp": 1724990242, "blockHash": "0x60e55dd9eacc095c07f50a73e02d81341c406584f7abbf5d10d938776a4c893c", "txCount": 2, "baseFee": "25000000000", "gasUsed": "65050", "gasLimit": "15000000", "gasCost": "0", "parentHash": "0xa3e9f91f45a85ed00b8ebe8e5e976ed1a1f52612143eddd3de9d2588d05398b8", "feesSpent": "1846500000000000", "cumulativeTransactions": "500325342" }, { "blockNumber": "49889402", "blockTimestamp": 1724990240, "blockHash": "0xa3e9f91f45a85ed00b8ebe8e5e976ed1a1f52612143eddd3de9d2588d05398b8", "txCount": 2, "baseFee": "25000000000", "gasUsed": "74608", "gasLimit": "15000000", "gasCost": "0", "parentHash": "0x670db772edfc2fdae322d55473ba0670690aed6358a067a718492c819d63356a", "feesSpent": "1997299851936960", "cumulativeTransactions": "500325340" }, { "blockNumber": "49889401", "blockTimestamp": 1724990238, "blockHash": "0x670db772edfc2fdae322d55473ba0670690aed6358a067a718492c819d63356a", "txCount": 1, "baseFee": "25000000000", "gasUsed": "273992", "gasLimit": "15000000", "gasCost": "0", "parentHash": "0x75742cf45383ce54823690b9dd2e85a743be819281468163d276f145d077902a", "feesSpent": "7334926295195040", "cumulativeTransactions": "500325338" }, { "blockNumber": "49889400", "blockTimestamp": 1724990236, "blockHash": "0x75742cf45383ce54823690b9dd2e85a743be819281468163d276f145d077902a", "txCount": 1, "baseFee": "25000000000", "gasUsed": "291509", "gasLimit": "15000000", "gasCost": "0", "parentHash": "0xe5055eae3e1fd2df24b61e9c691f756c97e5619cfc66b69cbcb6025117d1bde7", "feesSpent": "7724988500000000", "cumulativeTransactions": "500325337" }, { "blockNumber": "49889399", "blockTimestamp": 1724990234, "blockHash": "0xe5055eae3e1fd2df24b61e9c691f756c97e5619cfc66b69cbcb6025117d1bde7", "txCount": 8, "baseFee": "25000000000", "gasUsed": "824335", "gasLimit": "15000000", "gasCost": "0", "parentHash": "0xbcacff928f7dd20cc1522155e7c9b9716997914b53ab94034b813c3f207174ef", "feesSpent": "21983004380692400", "cumulativeTransactions": "500325336" }, { "blockNumber": 
"49889398", "blockTimestamp": 1724990229, "blockHash": "0xbcacff928f7dd20cc1522155e7c9b9716997914b53ab94034b813c3f207174ef", "txCount": 1, "baseFee": "25000000000", "gasUsed": "21000", "gasLimit": "15000000", "gasCost": "0", "parentHash": "0x0b686812078429d33e4224d2b48bd26b920db8dbb464e7f135d980759ca7e947", "feesSpent": "562182298020000", "cumulativeTransactions": "500325328" } ], "nextPageToken": "9f9e1d25-14a9-49f4-8742-fd4bf12f7cd8" } ``` Congratulations! You’ve successfully set up your account and made your first query to the Data API 🚀🚀🚀 # Data API (/docs/api-reference/data-api) --- title: Data API description: Access comprehensive blockchain data for Avalanche networks icon: Database --- ### What is the Data API? The Data API provides web3 application developers with multi-chain data related to Avalanche's primary network, Avalanche L1s, and Ethereum. With the Data API, you can easily build products that leverage real-time and historical transaction and transfer history, native and token balances, and various types of token metadata. Data API The [Data API](/docs/api-reference/data-api), along with the [Metrics API](/docs/api-reference/metrics-api), are the engines behind the [Avalanche Explorer](https://subnets.avax.network/stats/) and the [Core wallet](https://core.app/en/). They are used to display transactions, logs, balances, NFTs, and more. The data and visualizations presented are all powered by these APIs, offering real-time and historical insights that are essential for building sophisticated, data-driven blockchain products. ### Features * **Extensive L1 Support**: Gain access to data from over 100+ L1s across both mainnet and testnet. If an L1 is listed on the [Avalanche Explorer](https://subnets.avax.network/), you can query its data using the Data API. 
* **Transactions and UTXOs**: Easily retrieve details related to transactions, UTXOs, and token transfers from Avalanche EVMs, Ethereum, and Avalanche's Primary Network: the P-Chain, X-Chain, and C-Chain.
* **Blocks**: Retrieve the latest blocks and block details.
* **Balances**: Fetch balances of native, ERC-20, ERC-721, and ERC-1155 tokens along with relevant metadata.
* **Tokens**: Augment your user experience with asset details.
* **Staking**: Get staking-related data for active and historical validations.

### Supported Chains

Avalanche's architecture supports a diverse ecosystem of interconnected L1 blockchains, each operating independently while retaining the ability to seamlessly communicate with other L1s within the network. Central to this architecture is the Primary Network, Avalanche's foundational network layer, which all validators are required to validate prior to [ACP-77](/docs/acps/77-reinventing-subnets). The Primary Network runs three essential blockchains:

* The Contract Chain (C-Chain)
* The Platform Chain (P-Chain)
* The Exchange Chain (X-Chain)

However, with the implementation of [ACP-77](/docs/acps/77-reinventing-subnets), this requirement will change. Subnet Validators will be able to operate independently of the Primary Network, allowing for more flexible and affordable Subnet creation and management.

The **Data API** supports a wide range of L1 blockchains (**over 100**) across both **mainnet** and **testnet**, including popular ones like Beam, DFK, Lamina1, Dexalot, Shrapnel, and Pulsar. In fact, every L1 you see on the [Avalanche Explorer](https://explorer.avax.network/) can be queried through the Data API. This list is continually expanding as we keep adding more L1s. For a full list of supported chains, visit [List chains](/docs/api-reference/data-api/evm-chains/supportedChains).

#### The Contract Chain (C-Chain)

The C-Chain is an implementation of the Ethereum Virtual Machine (EVM).
The primary network endpoints only provide information related to C-Chain atomic memory balances and import/export transactions. For additional data, please reference the [EVM APIs](/docs/rpcs/c-chain/rpc). #### The Platform Chain (P-Chain) The P-Chain is responsible for all validator and L1-level operations. The P-Chain supports the creation of new blockchains and L1s, the addition of validators to L1s, staking operations, and other platform-level operations. #### The Exchange Chain (X-Chain) The X-Chain is responsible for operations on digital smart assets known as Avalanche Native Tokens. A smart asset is a representation of a real-world resource (for example, equity, or a bond) with sets of rules that govern its behavior, like "can’t be traded until tomorrow." The X-Chain supports the creation and trade of Avalanche Native Tokens. | Feature | Description | | :--------------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | **Chains** | Utilize this endpoint to retrieve the Primary Network chains that an address has transaction history associated with. | | **Blocks** | Blocks are the container for transactions executed on the Primary Network. Retrieve the latest blocks, a specific block by height or hash, or a list of blocks proposed by a specified NodeID on Primary Network chains. | | **Vertices** | Prior to Avalanche Cortina (v1.10.0), the X-Chain functioned as a DAG with vertices rather than blocks. These endpoints allow developers to retrieve historical data related to that period of chain history. Retrieve the latest vertices, a specific vertex, or a list of vertices at a specific height from the X-Chain. 
| | **Transactions** | Transactions are a user's primary form of interaction with a chain and provide details around their on-chain activity, including staking-related behavior. Retrieve a list of the latest transactions, a specific transaction, a list of active staking transactions for a specified address, or a list of transactions associated with a provided asset id from Primary Network chains. | | **UTXOs** | UTXOs are fundamental elements that denote the funds a user has available. Get a list of UTXOs for provided addresses from the Primary Network chains. | | **Balances** | User balances are an essential function of the blockchain. Retrieve balances related to the X and P-Chains, as well as atomic memory balances for the C-Chain. | | **Rewards** | Staking is the process where users lock up their tokens to support a blockchain network and, in return, receive rewards. It is an essential part of proof-of-stake (PoS) consensus mechanisms used by many blockchain networks, including Avalanche. Using the Data API, you can easily access pending and historical rewards associated with a set of addresses. | | **Assets** | Get asset details corresponding to the given asset id on the X-Chain. | #### EVM The C-Chain is an instance of the Coreth Virtual Machine, and many Avalanche L1s are instances of the *Subnet-EVM*, which is a Virtual Machine (VM) that defines the L1 Contract Chains. *Subnet-EVM* is a simplified version of *Coreth VM* (C-Chain). | Feature | Description | | :--------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | | **Chains** | There are a number of chains supported by the Data API. 
These endpoints can be used to understand which chains are included/indexed as part of the API and retrieve information related to a specific chain. | | **Blocks** | Blocks are the container for transactions executed within the EVM. Retrieve the latest blocks or a specific block by height or hash. | | **Transactions** | Transactions are a user's primary form of interaction with a chain and provide details around their on-chain activity. These endpoints can be used to retrieve information related to specific transaction details, internal transactions, contract deployments, specific token standard transfers, and more! | | **Balances** | User balances are an essential function of the blockchain. Easily retrieve native token, collectible, and fungible token balances related to an EVM chain with these endpoints. | #### Operations The Operations API allows users to easily access their on-chain history by creating transaction exports returned in a CSV format. This API supports EVMs as well as non-EVM Primary Network chains. # Rate Limits (/docs/api-reference/data-api/rate-limits) --- title: Rate Limits description: Rate Limits for the Data API icon: Clock --- Rate limiting is managed through a weighted scoring system, known as Compute Units (CUs). Each API request consumes a specified number of CUs, determined by the complexity of the request. This system is designed to accommodate basic requests while efficiently handling more computationally intensive operations. 
## Rate Limit Tiers

The maximum CUs (rate-limiting score) for a user depends on their subscription level and is delineated in the following table:

| Subscription Level | Per Minute Limit (CUs) | Per Day Limit (CUs) |
| :----------------- | :--------------------- | :------------------ |
| Unauthenticated | 6,000 | 1,200,000 |
| Free | 8,000 | 2,000,000 |
| Base | 10,000 | 3,750,000 |
| Growth | 14,000 | 11,200,000 |
| Pro | 20,000 | 25,000,000 |

To update your subscription level, use the [AvaCloud Portal](https://app.avacloud.io/).

Note: Rate limits apply collectively across both Webhooks and Data APIs, with usage from each counting toward your total CU limit.

## Rate Limit Categories

The CUs for each category are defined in the following table:

| Weight | CU Value |
| :----- | :------- |
| Free | 1 |
| Small | 10 |
| Medium | 20 |
| Large | 50 |
| XL | 100 |
| XXL | 200 |

## Rate Limits for Data API Endpoints

The CUs for each route are defined in the table below:

| Endpoint | Method | Weight | CU Value |
| :------- | :----- | :----- | :------- |
| `/v1/health-check` | GET | Medium | 20 |
| `/v1/address/{address}/chains` | GET | Medium | 20 |
| `/v1/transactions` | GET | Medium | 20 |
| `/v1/blocks` | GET | Medium | 20 |
| `/v1/chains/{chainId}/nfts/collections/{address}/tokens/{tokenId}:reindex` | POST | Small | 10 |
| `/v1/chains/{chainId}/nfts/collections/{address}/tokens` | GET | Medium | 20 |
| `/v1/chains/{chainId}/nfts/collections/{address}/tokens/{tokenId}` | GET | Medium | 20 |
| `/v1/operations/{operationId}` | GET | Small | 10 |
| `/v1/operations/transactions:export` | POST | Medium | 20 |
| `/v1/networks/{network}/blockchains/{blockchainId}/transactions/{txHash}` | GET | Medium | 20 |
| `/v1/networks/{network}/blockchains/{blockchainId}/transactions` | GET | XL | 100 |
| `/v1/networks/{network}/blockchains/{blockchainId}/transactions:listStaking` | GET | XL | 100 |
|
`/v1/networks/{network}/rewards:listPending` | GET | XL | 100 | | `/v1/networks/{network}/rewards` | GET | XL | 100 | | `/v1/networks/{network}/blockchains/{blockchainId}/utxos` | GET | XL | 100 | | `/v1/networks/{network}/blockchains/{blockchainId}/balances` | GET | XL | 100 | | `/v1/networks/{network}/blockchains/{blockchainId}/blocks/{blockId}` | GET | XL | 100 | | `/v1/networks/{network}/blockchains/{blockchainId}/nodes/{nodeId}/blocks` | GET | Medium | 20 | | `/v1/networks/{network}/blockchains/{blockchainId}/blocks` | GET | Medium | 20 | | `/v1/networks/{network}/blockchains/{blockchainId}/vertices` | GET | Medium | 20 | | `/v1/networks/{network}/blockchains/{blockchainId}/vertices/{vertexHash}` | GET | Medium | 20 | | `/v1/networks/{network}/blockchains/{blockchainId}/vertices:listByHeight` | GET | Medium | 20 | | `/v1/networks/{network}/blockchains/{blockchainId}/assets/{assetId}` | GET | XL | 100 | | `/v1/networks/{network}/blockchains/{blockchainId}/assets/{assetId}/transactions` | GET | XL | 100 | | `/v1/networks/{network}/addresses:listChainIds` | GET | XL | 100 | | `/v1/networks/{network}` | GET | XL | 100 | | `/v1/networks/{network}/blockchains` | GET | Medium | 20 | | `/v1/networks/{network}/subnets` | GET | Medium | 20 | | `/v1/networks/{network}/subnets/{subnetId}` | GET | Medium | 20 | | `/v1/networks/{network}/validators` | GET | Medium | 20 | | `/v1/networks/{network}/validators/{nodeId}` | GET | Medium | 20 | | `/v1/networks/{network}/delegators` | GET | Medium | 20 | | `/v1/networks/{network}/l1Validators` | GET | Medium | 20 | | `/v1/teleporter/messages/{messageId}` | GET | Medium | 20 | | `/v1/teleporter/messages` | GET | Medium | 20 | | `/v1/teleporter/addresses/{address}/messages` | GET | Medium | 20 | | `/v1/icm/messages/{messageId}` | GET | Medium | 20 | | `/v1/icm/messages` | GET | Medium | 20 | | `/v1/icm/addresses/{address}/messages` | GET | Medium | 20 | | `/v1/apiUsageMetrics` | GET | XXL | 200 | | `/v1/apiLogs` | GET | XXL | 200 | 
| `/v1/subnetRpcUsageMetrics` | GET | XXL | 200 | | `/v1/rpcUsageMetrics` | GET | XXL | 200 | | `/v1/primaryNetworkRpcUsageMetrics` | GET | XXL | 200 | | `/v1/signatureAggregator/{network}/aggregateSignatures` | POST | Medium | 20 | | `/v1/signatureAggregator/{network}/aggregateSignatures/{txHash}` | GET | Medium | 20 | | `/v1/chains/{chainId}/addresses/{address}/balances:getNative` | GET | Medium | 20 | | `/v1/chains/{chainId}/addresses/{address}/balances:listErc20` | GET | Medium | 20 | | `/v1/chains/{chainId}/addresses/{address}/balances:listErc721` | GET | Medium | 20 | | `/v1/chains/{chainId}/addresses/{address}/balances:listErc1155` | GET | Medium | 20 | | `/v1/chains/{chainId}/addresses/{address}/balances:listCollectibles` | GET | Medium | 20 | | `/v1/chains/{chainId}/blocks` | GET | Small | 10 | | `/v1/chains/{chainId}/blocks/{blockId}` | GET | Small | 10 | | `/v1/chains/{chainId}/contracts/{address}/transactions:getDeployment` | GET | Medium | 20 | | `/v1/chains/{chainId}/contracts/{address}/deployments` | GET | Medium | 20 | | `/v1/chains/{chainId}/addresses/{address}` | GET | Medium | 20 | | `/v1/chains` | GET | Free | 1 | | `/v1/chains/{chainId}` | GET | Free | 1 | | `/v1/chains/address/{address}` | GET | Free | 1 | | `/v1/chains/allTransactions` | GET | Free | 1 | | `/v1/chains/allBlocks` | GET | Free | 1 | | `/v1/chains/{chainId}/tokens/{address}/transfers` | GET | Medium | 20 | | `/v1/chains/{chainId}/addresses/{address}/transactions` | GET | Medium | 20 | | `/v1/chains/{chainId}/addresses/{address}/transactions:listNative` | GET | Medium | 20 | | `/v1/chains/{chainId}/addresses/{address}/transactions:listErc20` | GET | Medium | 20 | | `/v1/chains/{chainId}/addresses/{address}/transactions:listErc721` | GET | Medium | 20 | | `/v1/chains/{chainId}/addresses/{address}/transactions:listErc1155` | GET | Medium | 20 | | `/v1/chains/{chainId}/addresses/{address}/transactions:listInternals` | GET | Medium | 20 | | 
`/v1/chains/{chainId}/transactions/{txHash}` | GET | Medium | 20 |
| `/v1/chains/{chainId}/blocks/{blockId}/transactions` | GET | Medium | 20 |
| `/v1/chains/{chainId}/transactions` | GET | Medium | 20 |

## Rate Limits for RPC Endpoints

The CUs for RPC calls are calculated based on the RPC method(s) within the request. The CUs assigned to each method are defined in the table below:

| Method | Weight | CU Value |
| :---------------------------------------- | :----- | :------- |
| `eth_accounts` | Free | 1 |
| `eth_blockNumber` | Small | 10 |
| `eth_call` | Small | 10 |
| `eth_coinbase` | Small | 10 |
| `eth_chainId` | Free | 1 |
| `eth_gasPrice` | Small | 10 |
| `eth_getBalance` | Small | 10 |
| `eth_getBlockByHash` | Small | 10 |
| `eth_getBlockByNumber` | Small | 10 |
| `eth_getBlockTransactionCountByNumber` | Medium | 20 |
| `eth_getCode` | Medium | 20 |
| `eth_getLogs` | XXL | 200 |
| `eth_getStorageAt` | Medium | 20 |
| `eth_getTransactionByBlockNumberAndIndex` | Medium | 20 |
| `eth_getTransactionByHash` | Small | 10 |
| `eth_getTransactionCount` | Small | 10 |
| `eth_getTransactionReceipt` | Small | 10 |
| `eth_signTransaction` | Medium | 20 |
| `eth_sendTransaction` | Medium | 20 |
| `eth_sign` | Medium | 20 |
| `eth_sendRawTransaction` | Small | 10 |
| `eth_syncing` | Free | 1 |
| `net_listening` | Free | 1 |
| `net_peerCount` | Medium | 20 |
| `net_version` | Free | 1 |
| `web3_clientVersion` | Small | 10 |
| `web3_sha3` | Small | 10 |
| `eth_newPendingTransactionFilter` | Medium | 20 |
| `eth_maxPriorityFeePerGas` | Small | 10 |
| `eth_baseFee` | Small | 10 |
| `rpc_modules` | Free | 1 |
| `eth_getChainConfig` | Small | 10 |
| `eth_feeConfig` | Small | 10 |
| `eth_getActivePrecompilesAt` | Small | 10 |

All rate limits, weights, and CU values are subject to change.
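To see how these weights add up in practice, the sketch below estimates the CU cost of a planned batch of calls against a per-minute budget. The helper is our own illustration; the weight values and the Free tier's 8,000 CU per-minute limit come from the tables above:

```javascript
// CU cost per weight class, from the rate limit categories table above.
const CU_BY_WEIGHT = { Free: 1, Small: 10, Medium: 20, Large: 50, XL: 100, XXL: 200 };

// Sum the CU cost of a batch, where each entry is [weightClass, callCount].
function totalCu(batch) {
  return batch.reduce((sum, [weight, count]) => sum + CU_BY_WEIGHT[weight] * count, 0);
}

// Example: 30 eth_getLogs calls (XXL) plus 150 eth_call requests (Small),
// checked against the Free tier's 8,000 CU per-minute limit.
const cost = totalCu([["XXL", 30], ["Small", 150]]);
console.log(cost, cost <= 8000); // 7500 true
```

Budgeting like this before dispatching a burst of requests helps you stay under the limit instead of reacting to 429 responses after the fact.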
# Snowflake Datashare (/docs/api-reference/data-api/snowflake)

---
title: Snowflake Datashare
description: Snowflake Datashare for Avalanche blockchain data
icon: Snowflake
---

Avalanche Primary Network data (the C-chain, P-chain, and X-chain blockchains) can be accessed in a SQL-based table format via the [Snowflake Data Marketplace](https://app.snowflake.com/marketplace). Explore the blockchain state since the Genesis Block. These tables provide insights into transaction gas fees, DeFi activity, the historical stake of validators on the primary network, AVAX emissions rewarded to past validators/delegators, and fees paid by Avalanche L1 Validators to the primary network.

## Available Blockchain Data

#### Primary Network

* **C-chain:**
  * Blocks
  * Transactions
  * Logs
  * Internal Transactions
  * Receipts
  * Messages
* **P-chain:**
  * Blocks
  * Transactions
  * UTXOs
* **X-chain:**
  * Blocks
  * Transactions
  * Vertices before the [X-chain Linearization](https://www.avax.network/blog/cortina-x-chain-linearization) in the Cortina Upgrade
* **Dictionary:** A data dictionary is provided with the listing, with column and table descriptions. Example columns include:
  * `c_blocks.blockchash`
  * `c_transactions.transactionfrom`
  * `c_logs.topichex_0`
  * `p_blocks.block_hash`
  * `p_blocks.block_index`
  * `p_blocks.type`
  * `p_transactions.timestamp`
  * `p_transactions.transaction_hash`
  * `utxos.utxo_id`
  * `utxos.address`
  * `vertices.vertex_hash`
  * `vertices.parent_hash`
  * `x_blocks.timestamp`
  * `x_blocks.proposer_id`
  * `x_transactions.transaction_hash`
  * `x_transactions.type`

#### Available Avalanche L1s

* **Gunzilla**
* **Dexalot**
* **DeFi Kingdoms (DFK)**
* **Henesys (MapleStory Universe)**

#### L1 Data

* Blocks
* Transactions
* Logs
* Internal Transactions (currently unavailable for DFK)
* Receipts
* Messages

## Access

Search for "Ava Labs" on the [Snowflake Data Marketplace](https://app.snowflake.com/marketplace).
# Usage Guide (/docs/api-reference/data-api/usage)

---
title: Usage Guide
description: Usage Guide for the Data API
icon: Code
---

### Setup and Authentication

In order to utilize your account's rate limits, you will need to make API requests with an API key. You can generate API keys from the AvaCloud portal. Once you've created and retrieved one, you will be able to make authenticated queries by passing your API key in the `x-glacier-api-key` header of your HTTP request.

An example curl request can be found below:

```bash
curl -H "Content-Type: application/json" -H "x-glacier-api-key: your_api_key" \
  "https://glacier-api.avax.network/v1/chains"
```

### Rate Limits

The Data API has rate limits in place to maintain its stability and protect it from bursts of incoming traffic. The rate limits associated with various plans can be found within AvaCloud.

When you hit your rate limit, the server will respond with a 429 HTTP response code, along with response headers that help you determine when you should start to make additional requests. The response headers follow the standards set in the RateLimit header fields for HTTP draft from the Internet Engineering Task Force.

With every response to a request carrying a valid API key, the server will include the following headers:

* `ratelimit-policy` - The rate limit policy tied to your API key.
* `ratelimit-limit` - The number of requests you can send according to your policy.
* `ratelimit-remaining` - The number of requests remaining in the current period for your policy.

For any request after the rate limit has been reached, the server will also respond with these headers:

* `ratelimit-reset`
* `retry-after`

Both of these headers are set to the number of seconds until your period is over and requests will start succeeding again. If you start receiving rate limit errors with the 429 response code, we recommend you discontinue sending requests to the server. You should wait to retry requests for the duration specified in the response headers.
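The header-driven wait can be automated in a client. A rough sketch (the header names come from the list above; the one-second fallback delay is an arbitrary assumption, and a real client would wrap each HTTP call with this check):

```python
import time

def should_retry(status_code, headers):
    """Honor the rate-limit headers described above.

    On a 429, sleep for `retry-after` seconds (falling back to
    `ratelimit-reset`, then to a one-second guess) and report that
    the caller should retry. Any other status needs no waiting.
    """
    if status_code != 429:
        return False
    delay = int(headers.get("retry-after", headers.get("ratelimit-reset", 1)))
    time.sleep(delay)
    return True
```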
Alternatively, you can implement an exponential backoff algorithm to prevent continuous errors. Failure to discontinue requests may result in being temporarily blocked from accessing the API.

### Error Types

The Data API generates standard error responses along with error codes based on the provided requests and parameters. Typically, response codes within the `2XX` range signify successful requests, while those within the `4XX` range point to errors originating from the client's side. Meanwhile, response codes within the `5XX` range indicate problems on the server's side.

The error response body is formatted like this:

```json
{
  "message": ["Invalid address format"], // route specific error message
  "error": "Bad Request", // error type
  "statusCode": 400 // http response code
}
```

Let's go through every error code that we can respond with:

| Error Code | Error Type | Description |
| :--------- | :--------- | :---------- |
| **400** | Bad Request | Bad requests generally mean the client has passed invalid or malformed parameters. Error messages in the response could help in evaluating the error. |
| **401** | Unauthorized | When a client attempts to access resources that require authorization credentials but lacks proper authentication in the request, the server responds with 401. |
| **403** | Forbidden | When a client attempts to access resources with valid credentials but doesn't have the privilege to perform that action, the server responds with 403. |
| **404** | Not Found | The 404 error is mostly returned when the client requests a mistyped URL, the requested resource has been moved or deleted, or the resource doesn't exist. |
| **500** | Internal Server Error | The 500 error is a generic server-side error that is returned for any uncaught and unexpected issues on the server side. This should be very rare; you may reach out to us if the problem persists for a longer duration. |
| **502** | Bad Gateway | This is an internal error indicating an invalid response received by the client-facing proxy or gateway from the upstream server. |
| **503** | Service Unavailable | The 503 error is returned for certain routes on a particular Subnet. This indicates an internal problem with our Subnet node, and may not necessarily mean the Subnet is down or affected. |

The above list is not exhaustive of all the errors that you'll receive, but is categorized on the basis of error codes. You may see route-specific errors along with detailed error messages that help in evaluating the response. Reach out to our team when you see an error in the `5XX` range for a longer duration. These errors should be very rare, but we try to fix them as soon as possible once detected.

### Pagination

When utilizing pagination for endpoints that return lists of data such as transactions, UTXOs, or blocks, our API uses a straightforward mechanism to manage navigation through large datasets. We divide data into pages, and each page is limited to the `pageSize` number of elements passed in the request. Users can navigate to subsequent pages using the page token received in the `nextPageToken` field. This method ensures efficient retrieval.
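The loop a client runs over `pageSize` and `nextPageToken` can be sketched as follows (`fetch_page` here is a stand-in for whatever HTTP call your client makes, not part of the API itself):

```python
def fetch_all(fetch_page, page_size=100):
    """Drain a paginated route by following `nextPageToken`.

    `fetch_page(token, size)` is assumed to return a parsed response
    dict: one list field (whose name varies by route) plus an optional
    `nextPageToken`. Iteration stops when no token is returned.
    """
    items, token = [], None
    while True:
        resp = fetch_page(token, page_size)
        for key, value in resp.items():
            if key != "nextPageToken":
                items.extend(value)
        token = resp.get("nextPageToken")
        if token is None:
            return items
```

Remember that page tokens expire, so a long-running drain may need to restart from the first page.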
Routes with pagination have the following common response format:

```json
{
  "blocks": [""], // This field name will vary by route
  "nextPageToken": "3d22deea-ea64-4d30-8a1e-c2a353b67e90"
}
```

### Page Token Structure

* If there's more data in the dataset for the request, the API will include a UUID-based page token in the response. This token acts as a pointer to the next page of data.
* The UUID page token is generated randomly and uniquely for each pagination scenario, enhancing security and minimizing predictability.
* It's important to note that the page token is only returned when a next page is present. If there's no further data to retrieve, a page token will not be included in the response.
* The generated page token has an expiration window of 24 hours. Beyond this timeframe, the token will no longer be valid for accessing subsequent pages.

### Integration and Usage

To make use of the pagination system, simply examine the API response. If a UUID page token is present, it indicates the availability of additional data on the next page. You can extract this token and include it in the subsequent request to access the next page of results. Please note that the subsequent request must be made within 24 hours of the original token's generation. Beyond this duration, the token will expire, and you will need to initiate a fresh request from the initial page.

By incorporating UUID page tokens, our API offers a secure, efficient, and user-friendly approach to navigating large datasets, streamlining your data retrieval process.

### Swagger API Reference

You can explore the full API definitions and interact with the endpoints in the Swagger documentation at: [https://glacier-api.avax.network/api](https://glacier-api.avax.network/api)

# Installing Your VM (/docs/avalanche-l1s/rust-vms/installing-vm)

---
title: Installing Your VM
description: Learn how to install your VM on your node.
---

AvalancheGo searches for and registers VM plugins under the `plugins` [directory](/docs/nodes/configure/configs-flags#--plugin-dir-string).

To install the virtual machine onto your node, you need to move the built virtual machine binary into this directory. Virtual machine executable names must be either a full virtual machine ID (encoded in CB58) or a VM alias.

Copy the binary into the plugins directory:

```bash
# Replace <vm-binary> with the path to your built virtual machine binary
cp -n <vm-binary> $GOPATH/src/github.com/ava-labs/avalanchego/build/plugins/
```

## Node Is Not Running

If your node isn't running yet, you can install all virtual machines under your `plugins` directory by starting the node.

## Node Is Already Running

Load the binary with the `loadVMs` API:

```bash
curl -sX POST --data '{
    "jsonrpc":"2.0",
    "id"     :1,
    "method" :"admin.loadVMs",
    "params" :{}
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/admin
```

Confirm that the response of `loadVMs` contains the newly installed virtual machine `tGas3T58KzdjcJ32c6GpePhtqo9rrHJ1oR9wFBtCcMgaosthX`. You'll see this virtual machine, as well as any others that weren't previously installed, in the response:

```json
{
  "jsonrpc": "2.0",
  "result": {
    "newVMs": {
      "tGas3T58KzdjcJ32c6GpePhtqo9rrHJ1oR9wFBtCcMgaosthX": [
        "timestampvm-rs",
        "timestamp-rs"
      ],
      "spdxUxVJQbX85MGxMHbKw1sHxMnSqJ3QBzDyDYEP3h6TLuxqQ": []
    }
  },
  "id": 1
}
```

Now, this VM's static API can be accessed at endpoints `/ext/vm/timestampvm-rs` and `/ext/vm/timestamp-rs`. For more details about VM configs, see [here](/docs/nodes/configure/configs-flags#virtual-machine-vm-configs).

In this tutorial, we used the VM's ID as the executable name to simplify the process. However, AvalancheGo would also accept `timestampvm-rs` or `timestamp-rs` since those were registered as aliases in a previous step.

# Introduction to Avalanche-RS (/docs/avalanche-l1s/rust-vms/intro-avalanche-rs)

---
title: Introduction to Avalanche-RS
description: Learn how to write a simple virtual machine in Rust using Avalanche-RS.
---

Since Rust is a language in which we can write implementations of Proto interfaces, we can also use Rust to write VMs that can then be deployed on Avalanche. However, rather than building Rust-based VMs from the ground up, we can utilize Avalanche-RS, a developer toolkit comprised of powerful building blocks and primitive types that allow us to focus exclusively on the business logic of our VM rather than on low-level logic.

## Structure of Avalanche-RS

Although Avalanche-RS is currently primarily used to build Rust-based VMs, it actually consists of three different frameworks; as per the [GitHub](https://github.com/ava-labs/avalanche-rs) description of the Avalanche-RS repository, the three frameworks are as follows:

- Core: a framework for core networking components for a P2P Avalanche node
- Avalanche-Consensus: a Rust implementation of the novel Avalanche consensus protocol
- Avalanche-Types: implements foundational types used in Avalanche and provides an SDK for building Rust-based VMs

As the above makes obvious, the Avalanche-Types crate is the main framework one would use to build Rust-based VMs.

## Documentation

For the most up-to-date information regarding the Avalanche-Types library, please refer to the associated [crates.io](https://crates.io/crates/avalanche-types) page for the Avalanche-Types crate.

# Setting Up Your Environment (/docs/avalanche-l1s/rust-vms/setting-up-environment)

---
title: Setting Up Your Environment
description: Learn how to set up your environment to build a Rust VM.
---

In this section, we will focus on getting set up with the Rust environment necessary to build with the `avalanche-types` crate (recall that `avalanche-types` contains the SDK we want to use to build our Rust VM).

## Installing Rust

First and foremost, we will need to have Rust installed locally.
If you do not have Rust installed, you can install `rustup` (the tool that manages your Rust installation) [here](https://www.rust-lang.org/tools/install).

## Adding `avalanche-types` to Your Project

Once you have Rust installed and are ready to build, you will want to add the Avalanche-Types crate to your project. Below is a baseline example of how you can do this:

```toml title="Cargo.toml"
[dependencies]
avalanche-types = "0.1.4"
```

However, if you want to use the [TimestampVM](https://github.com/ava-labs/timestampvm-rs) as a reference for your project, a more appropriate import would be the following:

```toml title="Cargo.toml"
[dependencies]
avalanche-types = { version = "0.1.4", features = ["subnet", "codec_base64"] }
```

# Considerations (/docs/avalanche-l1s/upgrade/considerations)

---
title: Considerations
description: Learn about some of the key considerations while upgrading your Avalanche L1.
---

In the course of Avalanche L1 operation, you will inevitably need to upgrade or change some part of the software stack that is running your Avalanche L1. If nothing else, you will have to upgrade the AvalancheGo node client. The same goes for the VM plugin binary that is used to run the blockchain on your Avalanche L1, which is most likely the [Subnet-EVM](https://github.com/ava-labs/subnet-evm), the Avalanche L1 implementation of the Ethereum Virtual Machine.

Node and VM upgrades usually don't change the way your Avalanche L1 functions; instead, they keep your Avalanche L1 in sync with the rest of the network, bringing security, performance, and feature upgrades. Most upgrades are optional, but all of them are recommended, and you should make optional upgrades part of your routine Avalanche L1 maintenance. Some upgrades will be mandatory, and those will be clearly communicated as such ahead of time; you need to pay special attention to those.
Besides upgrades due to new releases, you also may want to change the configuration of the VM to alter the way the Avalanche L1 runs, for various business or operational needs. These upgrades are solely the purview of your team, and you have complete control over the timing of their roll out. Any such change represents a **network upgrade** and needs to be carefully planned and executed.

**Network upgrades permanently change the rules of your Avalanche L1.** Procedural mistakes or a botched upgrade can halt your Avalanche L1 or lead to data loss!

When performing an Avalanche L1 upgrade, every single validator on the Avalanche L1 will need to perform the identical upgrade. If you are coordinating a network upgrade, you must give advance notice to every Avalanche L1 validator so that they have time to perform the upgrade prior to activation. Make sure you have a direct line of communication to all your validators!

This tutorial will guide you through the process of doing various Avalanche L1 upgrades and changes. We will point out things to watch out for and precautions you need to be mindful of.

## General Upgrade Considerations

When operating an Avalanche L1, you should always keep in mind that Proof of Stake networks like Avalanche can only make progress if a sufficient number of validating nodes are connected and processing transactions. Each validator on an Avalanche L1 is assigned a certain `weight`, which is a numerical value representing the significance of the node in consensus decisions. On the Primary Network, weight is equal to the amount of AVAX staked on the node. On Avalanche L1s, weight is currently assigned by the Avalanche L1 owners when they issue the transaction adding a validator to the Avalanche L1.
Avalanche L1s can operate normally only if validators representing 80% or more of the cumulative validator weight are connected. If the connected stake falls close to or below 80%, Avalanche L1 performance (time to finality) will suffer, and ultimately the Avalanche L1 will halt (stop processing transactions). As an Avalanche L1 operator, you need to ensure that whatever you do, at least 80% of the validators' cumulative weight is connected and working at all times.

The cumulative weight of all validators in the Avalanche L1 must also be at least the value of [`snow-sample-size`](/docs/nodes/configure/configs-flags#--snow-sample-size-int) (default 20). For example, if there is only one validator in the Avalanche L1, its weight must be at least `snow-sample-size`. Hence, when assigning weight to the nodes, always use values greater than 20. Recall that a validator's weight can't be changed while it is validating, so take care to use an appropriate value.

## Upgrading Avalanche L1 Validator Nodes

AvalancheGo, the node client that runs the Avalanche validators, is under constant and rapid development. New versions come out often (roughly every two weeks), bringing added capabilities, performance improvements, or security fixes. Updates are usually optional, but from time to time (much less frequently than regular updates) there will be an update that includes a mandatory network upgrade. Those upgrades are **MANDATORY** for every node running the Avalanche L1. Any node that does not perform the update before the activation timestamp will immediately stop working when the upgrade activates.

That's why having a node upgrade strategy is absolutely vital, and you should always update to the latest AvalancheGo client immediately when it is made available.
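Before taking any validator offline for an upgrade, you can sanity-check the 80% connected-weight rule mechanically. A rough sketch (the flat list-of-dicts shape below is an assumption for illustration; the real `platform.getCurrentValidators` response is structured differently):

```python
def connected_weight_ratio(validators):
    """Fraction of cumulative validator weight currently connected."""
    total = sum(int(v["weight"]) for v in validators)
    if total == 0:
        return 0.0
    connected = sum(int(v["weight"]) for v in validators if v["connected"])
    return connected / total

def safe_to_restart(validators, restarting_weight, threshold=0.8):
    """Would taking `restarting_weight` offline keep the connected
    weight at or above the 80% threshold described above?"""
    total = sum(int(v["weight"]) for v in validators)
    connected = sum(int(v["weight"]) for v in validators if v["connected"])
    return total > 0 and (connected - restarting_weight) / total >= threshold
```

Running such a check before each restart is a cheap way to catch a restart that would push the connected stake below the halt threshold.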
For a general guide on upgrading AvalancheGo, check out [this tutorial](/docs/nodes/maintain/upgrade). When upgrading Avalanche L1 nodes, and keeping in mind the previous section, make sure to stagger node upgrades and start a new upgrade only once the previous node has successfully upgraded. Use the [Health API](/docs/rpcs/other/health-rpc#healthhealth) to check that the `healthy` value in the response is `true` on the upgraded node, and on other Avalanche L1 validators check that [platform.getCurrentValidators()](/docs/rpcs/p-chain#platformgetcurrentvalidators) has `true` in the `connected` attribute for the upgraded node's `nodeID`. Once those two conditions are satisfied, the node is confirmed to be online and validating the Avalanche L1, and you can start upgrading another node.

Continue the upgrade cycle until all the Avalanche L1 nodes are upgraded.

## Upgrading Avalanche L1 VM Plugin Binaries

Besides the AvalancheGo client itself, new versions get released for the VM binaries that run the blockchains on the Avalanche L1. On most Avalanche L1s, that is the [Subnet-EVM](https://github.com/ava-labs/subnet-evm), so this tutorial will go through the steps for updating the `subnet-evm` binary. The update process will be similar for any other VM plugin binary.

All the considerations for doing staggered node upgrades discussed in the previous section are valid for VM upgrades as well. In the future, VM upgrades will be handled by the [Avalanche-CLI tool](https://github.com/ava-labs/avalanche-cli), but for now we need to do it manually.

Go to the [releases page](https://github.com/ava-labs/subnet-evm/releases) of the Subnet-EVM repository.
Locate the latest version, and copy the link that corresponds to the OS and architecture of the machine the node is running on (`darwin` = Mac, `amd64` = Intel/AMD processor, `arm64` = Arm processor). Log into the machine where the node is running and download the archive, using `wget` and the link to the archive, like this:

```bash
wget https://github.com/ava-labs/subnet-evm/releases/download/v0.2.9/subnet-evm_0.2.9_linux_amd64.tar.gz
```

This will download the archive to the machine. Unpack it like this (use the correct filename, of course):

```bash
tar xvf subnet-evm_0.2.9_linux_amd64.tar.gz
```

This will unpack and place the contents of the archive in the current directory; the file `subnet-evm` is the plugin binary. You need to stop the node now (if the node is running as a service, use the `sudo systemctl stop avalanchego` command).

You need to place that file into the plugins directory where the AvalancheGo binary is located. If the node was installed using the install script, the path will be `~/avalanche-node/plugins`. Instead of the `subnet-evm` filename, the VM binary needs to be named after the VM ID of the chain on the Avalanche L1. For example, for the [WAGMI Avalanche L1](/docs/avalanche-l1s/wagmi-avalanche-l1) that VM ID is `srEXiWaHuhNyGwPUi444Tu47ZEDwxTWrbQiuD7FmgSAQ6X7Dy`. So, the command to copy the new plugin binary would look like:

```bash
cp subnet-evm ~/avalanche-node/plugins/srEXiWaHuhNyGwPUi444Tu47ZEDwxTWrbQiuD7FmgSAQ6X7Dy
```

Make sure you use the correct VM ID, otherwise your VM will not get updated and your Avalanche L1 may halt.

After you do that, you can start the node back up (if running as a service, do `sudo systemctl start avalanchego`). You can monitor the log output on the node to check that everything is OK, or you can use the [info.getNodeVersion()](/docs/rpcs/other/info-rpc#infogetnodeversion) API to check the versions.
Example output would look like:

```json
{
  "jsonrpc": "2.0",
  "result": {
    "version": "avalanche/1.7.18",
    "databaseVersion": "v1.4.5",
    "gitCommit": "b6d5827f1a87e26da649f932ad649a4ea0e429c4",
    "vmVersions": {
      "avm": "v1.7.18",
      "evm": "v0.8.15",
      "platform": "v1.7.18",
      "sqja3uK17MJxfC7AN8nGadBw9JK5BcrsNwNynsqP5Gih8M5Bm": "v0.0.7",
      "srEXiWaHuhNyGwPUi444Tu47ZEDwxTWrbQiuD7FmgSAQ6X7Dy": "v0.2.9"
    }
  },
  "id": 1
}
```

Note that the entry next to the VM ID we upgraded now correctly says `v0.2.9`. You have successfully upgraded the VM!

Refer to the previous section on how to make sure the node is healthy and connected before moving on to upgrading the next Avalanche L1 validator.

If you don't get the expected result, you can stop AvalancheGo and carefully retrace the steps above. You are free to remove files under `~/avalanche-node/plugins`; however, keep in mind that removing a file removes an existing VM binary. You must put the correct VM plugin in place before you restart AvalancheGo.

## Network Upgrades

Sometimes you need to do a network upgrade to change the rules, configured in the genesis, under which the chain operates. In regular EVM, network upgrades are a pretty involved process that includes deploying a new EVM binary, coordinating the timed upgrade, and deploying changes to the nodes. But since [Subnet-EVM v0.2.8](https://github.com/ava-labs/subnet-evm/releases/tag/v0.2.8), we introduced the long-awaited feature to perform network upgrades using just a few lines of JSON. Upgrades can consist of enabling/disabling particular precompiles, or changing their parameters.
Currently available precompiles allow you to:

- Restrict Smart Contract Deployers
- Restrict Who Can Submit Transactions
- Mint Native Coins
- Configure Dynamic Fees

Please refer to [Customize an Avalanche L1](/docs/avalanche-l1s/evm-configuration/customize-avalanche-l1#network-upgrades-enabledisable-precompiles) for a detailed discussion of possible precompile upgrade parameters.

## Summary

A vital part of Avalanche L1 maintenance is performing timely upgrades at all levels of the software stack running your Avalanche L1. We hope this tutorial gives you enough information and context to do those upgrades with confidence and ease. If you have additional questions or any issues, please reach out to us on [Discord](https://chat.avalabs.org/).

# Precompile Upgrades (/docs/avalanche-l1s/upgrade/precompile-upgrades)

---
title: Precompile Upgrades
description: Learn how to enable, disable, and configure precompiles in your Subnet-EVM.
---

Performing a network upgrade requires coordinating the upgrade network-wide. A network upgrade changes the rule set used to process and verify blocks, such that any node that upgrades incorrectly or fails to upgrade by the time the upgrade goes into effect may become out of sync with the rest of the network. Any mistakes in configuring network upgrades or coordinating them on validators may cause the network to halt, and recovering may be difficult.

Subnet-EVM precompiles can be individually enabled or disabled at a given timestamp as a network upgrade. Disabling a precompile disables calling it and destroys its storage, allowing it to be enabled later with a new configuration if desired.

## Configuration File

These upgrades must be specified in a file named `upgrade.json` placed in the same directory where `config.json` resides: `{chain-config-dir}/{blockchainID}/upgrade.json`.
For example, the WAGMI Subnet upgrade file should be placed at `~/.avalanchego/configs/chains/2ebCneCbwthjQ1rYT41nhd7M76Hc6YmosMAQrTFhBq8qeqh6tt/upgrade.json`.

The content of the `upgrade.json` should be formatted according to the following:

```json
{
  "precompileUpgrades": [
    {
      "[PRECOMPILE_NAME]": {
        "blockTimestamp": "[ACTIVATION_TIMESTAMP]", // unix timestamp precompile should activate at
        "[PARAMETER]": "[VALUE]" // precompile specific configuration options, eg. "adminAddresses"
      }
    }
  ]
}
```

An invalid `blockTimestamp` in an upgrade file results in the upgrade failing. The `blockTimestamp` value should be set to a valid Unix timestamp which is in the _future_ relative to the _head of the chain_. If the node encounters a `blockTimestamp` which is in the past, it will fail on startup.

## Disabling Precompiles

To disable a precompile, use the following format:

```json
{
  "precompileUpgrades": [
    {
      "[PRECOMPILE_NAME]": {
        "blockTimestamp": "[DEACTIVATION_TIMESTAMP]", // unix timestamp the precompile should deactivate at
        "disable": true
      }
    }
  ]
}
```

Each item in `precompileUpgrades` must specify exactly one precompile to enable or disable, and the block timestamps must be in increasing order. Once an upgrade has been activated (a block after the specified timestamp has been accepted), it must always be present in `upgrade.json` exactly as it was configured at the time of activation (otherwise the node will refuse to start). For safety, you should always treat `precompileUpgrades` as append-only.

As a last resort, it is possible to abort or reconfigure a precompile upgrade that has not yet been activated, since the chain is still processing blocks using the prior rule set. If aborting an upgrade becomes necessary, you can remove the precompile upgrade from the end of the list in `upgrade.json`. As long as the blockchain has not accepted a block with a timestamp past that upgrade's timestamp, this will abort the upgrade for that node.
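Because an out-of-order or stale `blockTimestamp` prevents the node from starting, it can pay to lint a freshly authored `upgrade.json` before restarting. A hypothetical offline check (this is not the node's own validation logic, and it assumes every listed upgrade is new, i.e. none has activated yet):

```python
import json
import time

def lint_precompile_upgrades(upgrade_json, now=None):
    """Check the two rules described above for a new upgrade file:
    each entry configures exactly one precompile, and timestamps are
    strictly increasing and in the future."""
    now = time.time() if now is None else now
    last_ts = now
    for entry in json.loads(upgrade_json).get("precompileUpgrades", []):
        if len(entry) != 1:
            raise ValueError("each upgrade must configure exactly one precompile")
        (name, params), = entry.items()
        ts = int(params["blockTimestamp"])
        if ts <= last_ts:
            raise ValueError(f"{name}: blockTimestamp {ts} not increasing/future")
        last_ts = ts
```

A check like this catches ordering mistakes before a restart, but it cannot replace coordinating the same file across all validators.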
## Example Configuration Here's a complete example that demonstrates enabling and disabling precompiles: ```json { "precompileUpgrades": [ { "feeManagerConfig": { "blockTimestamp": 1668950000, "adminAddresses": ["0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC"] } }, { "txAllowListConfig": { "blockTimestamp": 1668960000, "adminAddresses": ["0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC"] } }, { "feeManagerConfig": { "blockTimestamp": 1668970000, "disable": true } } ] } ``` This example: 1. Enables the `feeManagerConfig` at timestamp `1668950000` 2. Enables `txAllowListConfig` at timestamp `1668960000` 3. Disables `feeManagerConfig` at timestamp `1668970000` ## Initial Precompile Configurations Precompiles can be managed by privileged addresses to change their configurations and activate their effects. For example, the `feeManagerConfig` precompile can have `adminAddresses` which can change the fee structure of the network: ```json { "precompileUpgrades": [ { "feeManagerConfig": { "blockTimestamp": 1668950000, "adminAddresses": ["0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC"] } } ] } ``` In this example, only the specified address can change the network's fee structure. The admin must call the precompile to activate changes by sending a transaction with a new fee config. ### Initial Configurations Without Admin Precompiles can also activate their effect immediately at the activation timestamp without admin addresses. For example: ```json { "precompileUpgrades": [ { "feeManagerConfig": { "blockTimestamp": 1668950000, "initialFeeConfig": { "gasLimit": 20000000, "targetBlockRate": 2, "minBaseFee": 1000000000, "targetGas": 100000000, "baseFeeChangeDenominator": 48, "minBlockGasCost": 0, "maxBlockGasCost": 10000000, "blockGasCostStep": 500000 } } } ] } ``` It's still possible to add `adminAddresses` or `enabledAddresses` along with these initial configurations. 
In this case, the precompile will be activated with the initial configuration, and admin/enabled addresses can access the precompile normally. If you want to change the initial configuration of a precompile, you will need to first disable it and then activate it again with the new configuration.

## Verifying Upgrades

After creating or modifying `upgrade.json`, restart your node to load the changes. The node will print the chain configuration on startup, allowing you to verify the upgrade configuration:

```bash
INFO [08-15|15:09:36.772] <2ebCneCbwthjQ1rYT41nhd7M76Hc6YmosMAQrTFhBq8qeqh6tt Chain> github.com/ava-labs/subnet-evm/eth/backend.go:155: Initialised chain configuration config="{ChainID: 11111 Homestead: 0 EIP150: 0 EIP155: 0 EIP158: 0 Byzantium: 0 Constantinople: 0 Petersburg: 0 Istanbul: 0, Muir Glacier: 0, Subnet EVM: 0, FeeConfig: {\"gasLimit\":20000000,\"targetBlockRate\":2,\"minBaseFee\":1000000000,\"targetGas\":100000000,\"baseFeeChangeDenominator\":48,\"minBlockGasCost\":0,\"maxBlockGasCost\":10000000,\"blockGasCostStep\":500000}, AllowFeeRecipients: false, NetworkUpgrades: {\"subnetEVMTimestamp\":0}, PrecompileUpgrade: {}, UpgradeConfig: {\"precompileUpgrades\":[{\"feeManagerConfig\":{\"adminAddresses\":[\"0x8db97c7cece249c2b98bdc0226cc4c2a57bf52fc\"],\"enabledAddresses\":null,\"blockTimestamp\":1668950000}},{\"txAllowListConfig\":{\"adminAddresses\":[\"0x8db97c7cece249c2b98bdc0226cc4c2a57bf52fc\"],\"enabledAddresses\":null,\"blockTimestamp\":1668960000}},{\"feeManagerConfig\":{\"adminAddresses\":null,\"enabledAddresses\":null,\"blockTimestamp\":1668970000,\"disable\":true}}]}, Engine: Dummy Consensus Engine}"
```

You can also verify precompile configurations using:

- the [`eth_getActiveRulesAt`](/docs/rpcs/subnet-evm#eth_getactiverulesat) RPC method to check activated precompiles at a timestamp
- the [`eth_getChainConfig`](/docs/rpcs/subnet-evm#eth_getchainconfig) RPC method to view the complete configuration including upgrades

# APIs
(/docs/avalanche-l1s/timestamp-vm/apis)

---
title: APIs
description: Learn how to interact with TimestampVM.
---

Throughout this case study, we have been focusing on the functionality of TimestampVM. However, one thing we haven't discussed is how external users can interact with an instance of TimestampVM. Without a way for users to interact with TimestampVM, the blockchain itself would be stagnant. In this section, we will go over the two types of APIs used in TimestampVM:

- Static APIs
- Chain APIs

## Precursor: Static and Instance Methods

When trying to understand the static and chain APIs used in TimestampVM, a good way to think about these APIs is to compare them to static and instance methods in object-oriented programming. That is:

- **Static Methods**: functions which belong to the class itself, and not to any instance of the class
- **Instance Methods**: functions which belong to an instance of a class

## Static APIs

We can think of the static APIs in TimestampVM as functions which call the VM and are not associated with any specific instance of TimestampVM. Within TimestampVM, we have just one static API function - the ping function:

```rust title="timestampvm/src/api/static_handlers.rs"
/// Defines static handler RPCs for this VM.
#[rpc]
pub trait Rpc {
    #[rpc(name = "ping", alias("timestampvm.ping"))]
    fn ping(&self) -> BoxFuture<Result<crate::api::PingResponse>>;
}
```

## Chain APIs

In contrast to the static API, the chain API of TimestampVM is much richer, in the sense that it contains functions which read from and write to an instance of TimestampVM.
In this case, we have four functions defined in the chain API: - `ping`: when called, this function pings an instance of TimestampVM - `propose_block`: write function which passes a block to TimestampVM for consideration to be appended to the blockchain - `last_accepted`: read function which returns the last accepted block (that is, the block at the tip of the blockchain) - `get_block`: read function which fetches the requested block We can see the functions included in the chain API here:
```rust title="timestampvm/src/api/chain_handlers.rs"
/// Defines RPCs specific to the chain.
#[rpc]
pub trait Rpc {
    /// Pings the VM.
    #[rpc(name = "ping", alias("timestampvm.ping"))]
    fn ping(&self) -> BoxFuture<Result<PingResponse>>;

    /// Proposes the arbitrary data.
    #[rpc(name = "proposeBlock", alias("timestampvm.proposeBlock"))]
    fn propose_block(&self, args: ProposeBlockArgs) -> BoxFuture<Result<ProposeBlockResponse>>;

    /// Fetches the last accepted block.
    #[rpc(name = "lastAccepted", alias("timestampvm.lastAccepted"))]
    fn last_accepted(&self) -> BoxFuture<Result<LastAcceptedResponse>>;

    /// Fetches the block.
    #[rpc(name = "getBlock", alias("timestampvm.getBlock"))]
    fn get_block(&self, args: GetBlockArgs) -> BoxFuture<Result<GetBlockResponse>>;
}
```
# Blocks (/docs/avalanche-l1s/timestamp-vm/blocks) --- title: Blocks description: Learn about the Block data structure in TimestampVM. --- In this section, we will examine the Block data structure. In contrast to the design choices behind the TimestampVM state (which were mostly up to the implementers), blocks in TimestampVM must adhere to the SnowmanVM Block interface. ## SnowmanVM.Block Interface TimestampVM is designed to be used in tandem with the Snowman Consensus Engine. In particular, this relationship is defined by the usage of blocks - TimestampVM will produce blocks which the Snowman Consensus Engine will use and eventually mark as accepted or rejected.
Therefore, the Snowman Consensus Engine requires all blocks to implement the following interface:
```go
type Block interface {
	choices.Decidable

	// Parent returns the ID of this block's parent.
	Parent() ids.ID

	// Verify that the state transition this block would make if accepted is
	// valid. If the state transition is invalid, a non-nil error should be
	// returned.
	//
	// It is guaranteed that the Parent has been successfully verified.
	//
	// If nil is returned, it is guaranteed that either Accept or Reject will be
	// called on this block, unless the VM is shut down.
	Verify(context.Context) error

	// Bytes returns the binary representation of this block.
	//
	// This is used for sending blocks to peers. The bytes should be able to be
	// parsed into the same block on another node.
	Bytes() []byte

	// Height returns the height of this block in the chain.
	Height() uint64

	// Time this block was proposed at. This value should be consistent across
	// all nodes. If this block hasn't been successfully verified, any value can
	// be returned. If this block is the last accepted block, the timestamp must
	// be returned correctly. Otherwise, accepted blocks can return any value.
	Timestamp() time.Time
}
```
## Implementing the Block Data Structure With the above in mind, we now examine the block data structure:
```rust
/// Represents a block, specific to [`Vm`](crate::vm::Vm).
#[serde_as]
#[derive(Serialize, Deserialize, Clone, Derivative, Default)]
#[derivative(Debug, PartialEq, Eq)]
pub struct Block {
    /// The block Id of the parent block.
    parent_id: ids::Id,

    /// This block's height.
    /// The height of the genesis block is 0.
    height: u64,

    /// Unix second when this block was proposed.
    timestamp: u64,

    /// Arbitrary data.
    #[serde_as(as = "Hex0xBytes")]
    data: Vec<u8>,

    /// Current block status.
    #[serde(skip)]
    status: choices::status::Status,

    /// This block's encoded bytes.
    #[serde(skip)]
    bytes: Vec<u8>,

    /// Generated block Id.
    #[serde(skip)]
    id: ids::Id,

    /// Reference to the Vm state manager for blocks.
    #[derivative(Debug = "ignore", PartialEq = "ignore")]
    #[serde(skip)]
    state: state::State,
}
```
Notice above that many of the fields of the `Block` struct store the information required to implement the `Snowman.Block` interface we saw previously. Tying the concept of blocks back to the VM state, notice the last field, `state`, within the `Block` struct. This is where the `Block` struct stores a copy of the `State` struct from the previous section (and since each field of the `State` struct is wrapped in an `Arc` pointer, this implies that `Block` is really just storing a reference to both the `db` and `verified_blocks` data structures). ## `Block` Functions In this section, we examine some of the functions associated with the `Block` struct: ### `verify` This function verifies that a block is valid and stores it in memory. Note that a verified block has not necessarily been accepted - rather, a verified block is eligible to be accepted.
```rust
/// Verifies [`Block`](Block) properties (e.g., heights),
/// and once verified, records it to the [`State`](crate::state::State).
/// # Errors
/// Can fail if the parent block can't be retrieved.
pub async fn verify(&mut self) -> io::Result<()> {
    if self.height == 0 && self.parent_id == ids::Id::empty() {
        log::debug!(
            "block {} has an empty parent Id since it's a genesis block -- skipping verify",
            self.id
        );
        self.state.add_verified(&self.clone()).await;
        return Ok(());
    }

    // if it already exists in the database, it means it's already accepted,
    // thus there is no need to verify it once more
    if self.state.get_block(&self.id).await.is_ok() {
        log::debug!("block {} already verified", self.id);
        return Ok(());
    }

    let prnt_blk = self.state.get_block(&self.parent_id).await?;

    // ensure the height of the block immediately follows its parent
    if prnt_blk.height != self.height - 1 {
        return Err(Error::new(
            ErrorKind::InvalidData,
            format!(
                "parent block height {} != current block height {} - 1",
                prnt_blk.height, self.height
            ),
        ));
    }

    // ensure the block timestamp is after its parent's
    if prnt_blk.timestamp > self.timestamp {
        return Err(Error::new(
            ErrorKind::InvalidData,
            format!(
                "parent block timestamp {} > current block timestamp {}",
                prnt_blk.timestamp, self.timestamp
            ),
        ));
    }

    let one_hour_from_now = Utc::now() + Duration::hours(1);
    let one_hour_from_now = one_hour_from_now
        .timestamp()
        .try_into()
        .expect("failed to convert timestamp from i64 to u64");

    // ensure the block timestamp is no more than an hour ahead of this node's time
    if self.timestamp >= one_hour_from_now {
        return Err(Error::new(
            ErrorKind::InvalidData,
            format!(
                "block timestamp {} is more than 1 hour ahead of local time",
                self.timestamp
            ),
        ));
    }

    // add the newly verified block to memory
    self.state.add_verified(&self.clone()).await;
    Ok(())
}
```
### `reject` When called by the Snowman Consensus Engine, this tells the VM that the particular block has been rejected.
```rust
/// Marks this [`Block`](Block) rejected and updates [`State`](crate::state::State) accordingly.
/// # Errors
/// Returns an error if the state can't be updated.
pub async fn reject(&mut self) -> io::Result<()> {
    self.set_status(choices::status::Status::Rejected);

    // only decided blocks are persistent -- no reorg
    self.state.write_block(&self.clone()).await?;
    self.state.remove_verified(&self.id()).await;
    Ok(())
}
```
### `accept` When called by the Snowman Consensus Engine, this tells the VM that the particular block has been accepted.
```rust
/// Marks this [`Block`](Block) accepted and updates [`State`](crate::state::State) accordingly.
/// # Errors
/// Returns an error if the state can't be updated.
pub async fn accept(&mut self) -> io::Result<()> {
    self.set_status(choices::status::Status::Accepted);

    // only decided blocks are persistent -- no reorg
    self.state.write_block(&self.clone()).await?;
    self.state.set_last_accepted_block(&self.id()).await?;
    self.state.remove_verified(&self.id()).await;
    Ok(())
}
```
# Architecture of TimestampVM (/docs/avalanche-l1s/timestamp-vm/defining-vm-itself) --- title: Architecture of TimestampVM description: After examining several of the data structures and functionalities that TimestampVM relies on, it is time to examine the architecture of TimestampVM itself. In addition, we will look at some data structures that TimestampVM utilizes. --- ## Aside: SnowmanVM In addition to blocks having to adhere to the `Snowman.Block` interface, VMs which interact with the Snowman Consensus Engine must also implement the `SnowmanVM` interface. In the context of a Rust-based VM, this means that we must satisfy the `ChainVm` trait in `avalanche-types`:
```rust title="avalanche-types/src/subnet/rpc/snowman/block.rs"
/// ref.
#[tonic::async_trait]
pub trait ChainVm: CommonVm + BatchedChainVm + Getter + Parser {
    type Block: snowman::Block;

    /// Attempt to create a new block from ChainVm data
    /// Returns either a block or an error
    async fn build_block(&self) -> Result<<Self as ChainVm>::Block>;

    /// Issues a transaction to the chain
    async fn issue_tx(&self) -> Result<<Self as ChainVm>::Block>;

    /// Notify the Vm of the currently preferred block.
    async fn set_preference(&self, id: Id) -> Result<()>;

    /// Returns the ID of the last accepted block.
    async fn last_accepted(&self) -> Result<Id>;
}
```
## Defining TimestampVM Below is the definition of the `Vm` struct, which represents TimestampVM (generic over its app sender `A`):
```rust title="timestampvm/src/vm/mod.rs"
pub struct Vm<A> {
    /// Maintains the Vm-specific state.
    pub state: Arc<RwLock<State>>,

    /// Channel our VM uses to receive and send app-level requests.
    pub app_sender: Option<A>,

    /// A queue of data not yet proposed into a block.
    pub mempool: Arc<RwLock<VecDeque<Vec<u8>>>>,
}
```
We see the following three fields: - `state`: represents the state of the VM. Note that this is different from the `State` struct seen earlier in the State section. - `app_sender`: the channel our VM uses to receive and send requests - `mempool`: where proposed data is kept before being built into blocks We now examine the `state` data structure mentioned earlier:
```rust title="timestampvm/src/vm/mod.rs"
/// Represents VM-specific states.
/// Defined separately for interior mutability in [`Vm`](Vm).
/// Protected with `Arc` and `RwLock`.
pub struct State {
    pub ctx: Option<subnet::rpc::context::Context<ValidatorStateClient>>,
    pub version: Version,
    pub genesis: Genesis,

    /// Persistent Vm state representation
    pub state: Option<crate::state::State>,

    /// Preferred block Id
    pub preferred: ids::Id,

    /// Channel for messages to the snowman consensus engine
    pub to_engine: Option<Sender<Message>>,

    pub bootstrapped: bool,
}
```
This `vm::State` struct contains the persistent `state::State` from the previous section alongside the other fields relevant to the Snowman consensus algorithm. # Introduction (/docs/avalanche-l1s/timestamp-vm/introduction) --- title: Introduction description: Learn about the TimestampVM Virtual Machine.
--- To really get an understanding of how one can use the `avalanche-types` library to build a Rust-based VM, we will look at [TimestampVM](https://github.com/ava-labs/timestampvm-rs/tree/main), a basic VM which utilizes the `avalanche-types` library. ## Idea of TimestampVM In contrast to complex VMs like the EVM, which provide a general-purpose computing environment, TimestampVM is _much, much_ simpler. In fact, we can describe the goal of TimestampVM in two bullet points: - To store the timestamp when each block was appended to the blockchain - To store arbitrary payloads of data (within each block) Even though the above seems quite simple, this still requires us to define and build out an architecture to support such functionality. In this case study, we will look at the following pieces of the architecture that define TimestampVM: - State - Blocks - API - The VM itself # State (/docs/avalanche-l1s/timestamp-vm/state) --- title: State description: Learn about the state within the context of TimestampVM. --- Blockchains can be defined as follows: > A linked-list where each list element consists of a block Implementations of blockchains, while behaving like the linked list above from a black-box perspective, are structured internally much more like databases than like linked lists. In fact, this is exactly what TimestampVM does! By utilizing a general database, TimestampVM is able to store blocks (and thus, store its blockchain) while also being able to store additional data such as pending blocks. ## State Definition Below is the definition of the `State` struct which is used to maintain the state of TimestampVM:
```rust
/// Manages block and chain states for this Vm, both in-memory and persistent.
#[derive(Clone)]
pub struct State {
    pub db: Arc<RwLock<Box<dyn subnet::rpc::database::Database + Send + Sync>>>,

    /// Maps block Id to Block.
    /// Each element is verified but not yet accepted/rejected (e.g., preferred).
    pub verified_blocks: Arc<RwLock<HashMap<ids::Id, Block>>>,
}
```
`State` in this context acts like the database of TimestampVM.
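The idea of a linked list implemented on top of a key-value store can be sketched in a few lines of std-only Rust. This toy example is purely illustrative and unrelated to the actual TimestampVM types:

```rust
use std::collections::HashMap;

/// A toy block: just a parent pointer and a payload, keyed by its own id.
#[derive(Clone)]
struct ToyBlock {
    id: u64,
    parent_id: Option<u64>,
    data: String,
}

/// The "database" is a flat key-value map from block id to block, yet it
/// still encodes a linked list (the chain) through each block's `parent_id`.
fn chain_from_tip(db: &HashMap<u64, ToyBlock>, tip: u64) -> Vec<u64> {
    let mut ids = Vec::new();
    let mut cursor = Some(tip);
    while let Some(id) = cursor {
        let blk = &db[&id];
        ids.push(blk.id);
        cursor = blk.parent_id;
    }
    ids.reverse(); // genesis first
    ids
}

fn main() {
    let mut db = HashMap::new();
    db.insert(0, ToyBlock { id: 0, parent_id: None, data: "genesis".into() });
    db.insert(1, ToyBlock { id: 1, parent_id: Some(0), data: "first".into() });
    db.insert(2, ToyBlock { id: 2, parent_id: Some(1), data: "second".into() });

    // Walking parent pointers from the tip recovers the linked list.
    assert_eq!(chain_from_tip(&db, 2), vec![0, 1, 2]);
    println!("chain: {:?}", chain_from_tip(&db, 2));
}
```

Storing blocks in a flat map also makes it cheap to keep side data (such as pending blocks) next to the chain itself, which is exactly how `State` uses its database below.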
Within `State`, we are managing two different data structures: - `db`: a byte-based mapping which maps bytes to bytes. This is where finalized (that is, accepted) blocks are stored - `verified_blocks`: a hashmap which maps block IDs to their respective blocks. This is where all verified, but still pending, blocks are stored While one could have guessed the functionalities of `db` and `verified_blocks` from their respective types `subnet::rpc::database::Database + Send + Sync` and `HashMap<ids::Id, Block>`, it is not immediately clear why we are wrapping these fields with read/write locks and `Arc` pointers. However, as we'll see soon when we examine the Block data structure, blocks need access to the VM state so they can add themselves to state. This is due to the `SetPreference` function of the SnowmanVM interface, which states: > `SetPreference` > > The VM implements the function SetPreference(blkID ids.ID) to allow the consensus engine to notify the VM which block is currently preferred to be accepted. The VM should use this information to set the head of its blockchain. Most importantly, when the consensus engine calls BuildBlock, the VM should be sure to build on top of the block that is the most recently set preference. > > Note: SetPreference will always be called with a block that has no verified children. Therefore, when building a Rust-based VM (or a VM in any supported language), the VM itself is only responsible for tracking the ID of the most recent finalized block; blocks bear the responsibility of storing themselves in VM state. As a result, we will need to wrap the `db` and `verified_blocks` fields with the following: - An `Arc` pointer so that whenever we clone the `State` structure, the cloned versions of `db` and `verified_blocks` are still pointing to the same data structures in memory.
This allows for multiple Blocks to share the same `db` and `verified_blocks` - A read/write lock (that is, `RwLock`) so that we can safely use concurrency in our VM ## `State` Functions Below are the functions associated with the `State` struct:
```rust title="timestampvm/src/state/mod.rs"
impl State {
    /// Persists the last accepted block Id to state.
    /// # Errors
    /// Fails if the db can't be updated
    pub async fn set_last_accepted_block(&self, blk_id: &ids::Id) -> io::Result<()> {
        let mut db = self.db.write().await;
        db.put(LAST_ACCEPTED_BLOCK_KEY, &blk_id.to_vec())
            .await
            .map_err(|e| {
                Error::new(
                    ErrorKind::Other,
                    format!("failed to put last accepted block: {e:?}"),
                )
            })
    }

    /// Returns "true" if there's a last accepted block found.
    /// # Errors
    /// Fails if the db can't be read
    pub async fn has_last_accepted_block(&self) -> io::Result<bool> {
        let db = self.db.read().await;
        match db.has(LAST_ACCEPTED_BLOCK_KEY).await {
            Ok(found) => Ok(found),
            Err(e) => Err(Error::new(
                ErrorKind::Other,
                format!("failed to load last accepted block: {e}"),
            )),
        }
    }

    /// Returns the last accepted block Id from state.
    /// # Errors
    /// Can fail if the db can't be read
    pub async fn get_last_accepted_block_id(&self) -> io::Result<ids::Id> {
        let db = self.db.read().await;
        match db.get(LAST_ACCEPTED_BLOCK_KEY).await {
            Ok(d) => Ok(ids::Id::from_slice(&d)),
            Err(e) => {
                if subnet::rpc::errors::is_not_found(&e) {
                    return Ok(ids::Id::empty());
                }
                Err(e)
            }
        }
    }

    /// Adds a block to "`verified_blocks`".
    pub async fn add_verified(&mut self, block: &Block) {
        let blk_id = block.id();
        log::info!("verified added {blk_id}");

        let mut verified_blocks = self.verified_blocks.write().await;
        verified_blocks.insert(blk_id, block.clone());
    }

    /// Removes a block from "`verified_blocks`".
    pub async fn remove_verified(&mut self, blk_id: &ids::Id) {
        let mut verified_blocks = self.verified_blocks.write().await;
        verified_blocks.remove(blk_id);
    }

    /// Returns "true" if the block Id has been already verified.
    pub async fn has_verified(&self, blk_id: &ids::Id) -> bool {
        let verified_blocks = self.verified_blocks.read().await;
        verified_blocks.contains_key(blk_id)
    }

    /// Writes a block to the state storage.
    /// # Errors
    /// Can fail if the block fails to serialize or if the db can't be updated
    pub async fn write_block(&mut self, block: &Block) -> io::Result<()> {
        let blk_id = block.id();
        let blk_bytes = block.to_vec()?;

        let mut db = self.db.write().await;
        let blk_status = BlockWithStatus {
            block_bytes: blk_bytes,
            status: block.status(),
        };
        let blk_status_bytes = blk_status.encode()?;
        db.put(&block_with_status_key(&blk_id), &blk_status_bytes)
            .await
            .map_err(|e| Error::new(ErrorKind::Other, format!("failed to put block: {e:?}")))
    }

    /// Reads a block from the state storage using the `block_with_status_key`.
    /// # Errors
    /// Can fail if the block is not found in the state storage, or if the block fails to deserialize
    pub async fn get_block(&self, blk_id: &ids::Id) -> io::Result<Block> {
        // check if the block exists in memory as previously verified.
        let verified_blocks = self.verified_blocks.read().await;
        if let Some(b) = verified_blocks.get(blk_id) {
            return Ok(b.clone());
        }

        let db = self.db.read().await;
        let blk_status_bytes = db.get(&block_with_status_key(blk_id)).await?;
        let blk_status = BlockWithStatus::from_slice(&blk_status_bytes)?;

        let mut blk = Block::from_slice(&blk_status.block_bytes)?;
        blk.set_status(blk_status.status);
        Ok(blk)
    }
}
```
The functions above will be called by both blocks and the VM itself. # Validator Manager Contracts (/docs/avalanche-l1s/validator-manager/contract) --- title: "Validator Manager Contracts" description: "This page lists all available contracts for the Validator Manager."
edit_url: https://github.com/ava-labs/icm-contracts/edit/main/contracts/validator-manager/README.md --- # Validator Manager Contracts The contracts in this directory define the Validator Manager used to manage Avalanche L1 validators, as defined in [ACP-77](https://github.com/avalanche-foundation/ACPs/tree/main/ACPs/77-reinventing-subnets). They comply with [ACP-99](https://github.com/avalanche-foundation/ACPs/tree/main/ACPs/99-validatorsetmanager-contract), which specifies the standard minimal functionality that Validator Managers should implement. The contracts in this directory are related as follows:
```mermaid
classDiagram
class ACP99Manager {
    <<abstract>>
}
class ValidatorManager {
    +initializeValidatorSet()
    +completeValidatorRegistration() onlyOwner
    +completeValidatorRemoval() onlyOwner
    +completeValidatorWeightUpdate() onlyOwner
    +initiateValidatorRegistration() onlyOwner
    +initiateValidatorRemoval() onlyOwner
    +initiateValidatorWeightUpdate() onlyOwner
}
class PoAManager {
    +completeValidatorRegistration()
    +completeValidatorRemoval()
    +completeValidatorWeightUpdate()
    +initiateValidatorRegistration() onlyOwner
    +initiateValidatorRemoval() onlyOwner
    +initiateValidatorWeightUpdate() onlyOwner
    +transferValidatorManagerOwnership() onlyOwner
}
class StakingManager {
    <<abstract>>
    +completeValidatorRegistration()
    +initiateValidatorRemoval()
    +completeValidatorRemoval()
    +completeDelegatorRegistration()
    +initiateDelegatorRemoval()
    +completeDelegatorRemoval()
    -_initiateValidatorRegistration()
    -_initiateDelegatorRegistration()
}
class ERC20TokenStakingManager {
    +initiateValidatorRegistration()
    +initiateDelegatorRegistration()
}
class NativeTokenStakingManager {
    +initiateValidatorRegistration() payable
    +initiateDelegatorRegistration() payable
}
ACP99Manager <|-- ValidatorManager
ValidatorManager --o PoAManager : owner
ValidatorManager --o StakingManager : owner
StakingManager <|-- ERC20TokenStakingManager
StakingManager <|-- NativeTokenStakingManager
```
## A Note on Nomenclature The contracts in this directory
are only useful to L1s that have been converted from Subnets as described in ACP-77. As such, `l1`/`L1` is generally preferred over `subnet`/`Subnet` in the source code. The one major exception is that `subnetID` should be used to refer to both Subnets that have not been converted, and L1s that have. This is because an L1 must first be initialized as a Subnet by issuing a `CreateSubnetTx` on the P-Chain, the transaction hash of which becomes the `subnetID`. Rather than change the name and/or value of this identifier, it is simpler for both to remain static in perpetuity. ## Deploying The validator manager system consists of a `ValidatorManager`, and one of `NativeTokenStakingManager`, `ERC20TokenStakingManager`, or `PoAManager`. `ValidatorManager` is `Ownable`, and its owner should be set to the address of the other contract. All of these are implemented as [upgradeable](https://github.com/OpenZeppelin/openzeppelin-contracts-upgradeable/blob/3d6a15108b50491ec3c51c03b32802c33e092a0f/contracts/proxy/utils/Initializable.sol#L56) contracts. There are numerous [guides](https://blog.chain.link/upgradable-smart-contracts/) for deploying upgradeable smart contracts, but the general steps are as follows: 1. Deploy the implementation contract 2. Deploy the proxy contract 3. Call the implementation contract's `initialize` function - Each deployed contract requires different settings. For example, `ValidatorManagerSettings` specifies the churn parameters, while `StakingManagerSettings` specifies the staking and rewards parameters. 4. Initialize the validator set by calling `initializeValidatorSet` on `ValidatorManager` - When an L1 is first created on the P-Chain, it must be explicitly converted to an L1 via [`ConvertSubnetToL1Tx`](https://github.com/avalanche-foundation/ACPs/tree/main/ACPs/77-reinventing-subnets#convertsubnettol1tx). 
The resulting `SubnetToL1ConversionMessage` ICM [message](https://github.com/avalanche-foundation/ACPs/tree/main/ACPs/77-reinventing-subnets#subnettol1conversionmessage) is provided in the call to `initializeValidatorSet` to specify the starting validator set in the `ValidatorManager`. Regardless of the setup of the overall validator manager system, these initial validators are treated as PoA and are not eligible for staking rewards. ### Proof-of-Authority PoA validator management is provided by `PoAManager`, which is configured with an `owner` in the call to `initialize`. Only the `owner` may initiate validator set changes, but anybody can complete a validator set change by providing the corresponding ICM message signed by the P-Chain. > [!NOTE] > PoA validator management can also be implemented by `ValidatorManager` on its own, by setting the `owner` to the desired admin address. Unlike with `PoAManager`, only the admin is able to initiate or complete validator set changes. ### Proof-of-Stake PoS validator management is provided by the abstract contract `StakingManager`, which has two concrete implementations: `NativeTokenStakingManager` and `ERC20TokenStakingManager`. `StakingManager` supports uptime-based validation rewards, as well as delegation to a chosen validator. The `uptimeBlockchainID` used to initialize the `StakingManager` **must** be validated by the L1 validator set that the contract manages. **There is no way to verify this from within the contract, so take care when setting this value.** This [state transition diagram](https://github.com/ava-labs/icm-contracts/blob/main/contracts/validator-manager/StateTransition.md) illustrates the relationship between validators and delegators. After deploying `StakingManager` and a proxy, call the `initialize` function, which takes a `StakingManagerSettings` as well as any implementation-specific arguments.
> [!NOTE] > The `weightToValueFactor` field of `StakingManagerSettings` sets the factor used to convert between the weight that the validator is registered with on the P-Chain, and the value transferred to the contract as stake. This involves integer division, which may result in loss of precision. When selecting `weightToValueFactor`, keep the following considerations in mind: > > 1. If `weightToValueFactor` is near the denomination of the asset, then staking amounts on the order of 1 unit of the asset may cause the converted weight to round down to 0. This may impose a larger-than-expected minimum stake amount. > - Ex: If USDC (denomination of 6) is used as the staking token and `weightToValueFactor` is 1e9, then any amount less than 1,000 USDC will round down to 0 and therefore be invalid. > 2. Staked amounts up to `weightToValueFactor - 1` may be lost in the contract as dust, as the validator's registered weight is used to calculate the original staked amount. > - Ex: `value=1001` and `weightToValueFactor=1e3`. The resulting weight will be `1`. Converting the weight back to a value results in `value=1000`. > 3. The validator's weight is represented on the P-Chain as a `uint64`. `StakingManager` restricts values such that the calculated weight does not exceed the maximum value for that type. ### Migrating from Proof-of-Authority to Proof-of-Stake See the [migration guide](https://github.com/ava-labs/icm-contracts/blob/main/contracts/validator-manager/PoAMigration.md) for details. #### NativeTokenStakingManager `NativeTokenStakingManager` allows permissionless addition and removal of validators that post the L1's native token as stake. Staking rewards are minted via the Native Minter Precompile, which is configured with a set of addresses with minting privileges. As such, the address that `NativeTokenStakingManager` is deployed to must be added as an admin to the precompile.
This can be done by either calling the precompile's `setAdmin` method from an admin address, or setting the address in the Native Minter precompile settings in the chain's genesis (`config.contractNativeMinterConfig.adminAddresses`). There are a couple of methods to get this address: one is to calculate the resulting deployed address based on the deployer's address and account nonce: `keccak256(rlp.encode(address, nonce))`. The second method involves manually placing the `NativeTokenStakingManager` bytecode at a particular address in the genesis, then setting that address as an admin.
```json
{
  "config": {
    ...
    "contractNativeMinterConfig": {
      "blockTimestamp": 0,
      "adminAddresses": ["0xffffffffffffffffffffffffffffffffffffffff"]
    }
  },
  "alloc": {
    "0xffffffffffffffffffffffffffffffffffffffff": {
      "balance": "0x0",
      "code": "",
      "nonce": 1
    }
  }
}
```
#### ERC20TokenStakingManager `ERC20TokenStakingManager` allows permissionless addition and removal of validators that post an ERC20 token as stake. The ERC20 is specified in the call to `initialize`, and must implement [`IERC20Mintable`](https://github.com/ava-labs/icm-contracts/blob/main/contracts/validator-manager/interfaces/IERC20Mintable.sol). Care should be taken to enforce that only authorized users are able to `mint` the ERC20 staking token. ## Usage ### Register a Validator #### PoA Validator registration is initiated with a call to `PoAManager.initiateValidatorRegistration`. Churn limitations are checked - only a certain (configurable) percentage of the total weight is allowed to be added or removed in a (configurable) period of time. The `ValidatorManager` then constructs a [`RegisterL1ValidatorMessage`](https://github.com/avalanche-foundation/ACPs/tree/main/ACPs/77-reinventing-subnets#registerl1validatormessage) ICM message to be sent to the P-Chain.
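The churn limitation mentioned above can be illustrated with a small arithmetic sketch. The percentage, weights, and function below are made-up for illustration and do not mirror the contracts' actual accounting:

```rust
/// Maximum percentage of total weight that may churn (be added or removed)
/// within one churn period. The value is illustrative, not a real default.
const MAX_CHURN_PERCENTAGE: u64 = 20;

/// Returns true if adding or removing `delta` weight keeps the total churn
/// in the current period within the configured percentage of total weight.
fn churn_allowed(total_weight: u64, churned_this_period: u64, delta: u64) -> bool {
    // Integer arithmetic, avoiding division as Solidity-style math often does.
    (churned_this_period + delta) * 100 <= total_weight * MAX_CHURN_PERCENTAGE
}

fn main() {
    // Total validator weight 1_000 with a 20% cap => 200 weight may churn per period.
    assert!(churn_allowed(1_000, 0, 150)); // 150 <= 200: allowed
    assert!(!churn_allowed(1_000, 150, 100)); // 250 > 200: rejected until the period resets
    println!("churn checks passed");
}
```

Registrations and removals that would exceed the cap revert, so large validator-set changes must be spread over multiple churn periods.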
Each validator registration request includes all of the information needed to identify the validator and its stake weight, as well as an `expiry` timestamp before which the `RegisterL1ValidatorMessage` must be delivered to the P-Chain. If the validator is not registered on the P-Chain before the `expiry`, then the validator may be removed from the contract state by calling `completeValidatorRemoval`. The `RegisterL1ValidatorMessage` is delivered to the P-Chain as the ICM message payload of a `RegisterL1ValidatorTx`. Please see the transaction [specification](https://github.com/avalanche-foundation/ACPs/tree/main/ACPs/77-reinventing-subnets#registerl1validatortx) for validity requirements. The P-Chain then signs a [`L1ValidatorRegistrationMessage`](https://github.com/avalanche-foundation/ACPs/tree/main/ACPs/77-reinventing-subnets#l1validatorregistrationmessage) ICM message indicating that the specified validator was successfully registered on the P-Chain. The `L1ValidatorRegistrationMessage` is delivered by calling `ValidatorManager.completeValidatorRegistration`. #### PoS When registering a PoS validator, the same steps as the PoA case apply, with the only difference being that `StakingManager.initiateValidatorRegistration` and `StakingManager.completeValidatorRegistration` must be called instead. The sender of the transaction that called `StakingManager.initiateValidatorRegistration` is registered as the validator owner. Only this owner can remove the validator. Staking rewards begin accruing once `StakingManager.completeValidatorRegistration` is called. ### Remove a Validator #### PoA Validator exit is initiated with a call to `PoAManager.initiateValidatorRemoval`. The `ValidatorManager` constructs an [`L1ValidatorWeightMessage`](https://github.com/avalanche-foundation/ACPs/tree/main/ACPs/77-reinventing-subnets#l1validatorweightmessage) ICM message with the weight set to `0`.
This is delivered to the P-Chain as the payload of a [`SetL1ValidatorWeightTx`](https://github.com/avalanche-foundation/ACPs/tree/main/ACPs/77-reinventing-subnets#setl1validatorweighttx). The P-Chain acknowledges the validator exit by signing an `L1ValidatorRegistrationMessage` with `valid=0`, which is delivered by calling `ValidatorManager.completeValidatorRemoval`. The validation is removed from the contract's state. #### PoS PoS validator removal follows the same flow as the PoA case, except that `StakingManager.initiateValidatorRemoval` and `StakingManager.completeValidatorRemoval` must be called instead. There are two additional considerations: - A [`ValidationUptimeMessage`](https://github.com/ava-labs/icm-contracts/blob/main/contracts/validator-manager/UptimeMessageSpec.md) ICM message may optionally be provided in the call to `StakingManager.initiateValidatorRemoval` in order to calculate the staking rewards; otherwise the latest received uptime will be used (see [(PoS only) Submit an Uptime Proof](#pos-only-submit-an-uptime-proof)). This proof may be requested directly from the L1 validators, which will provide it in a `ValidationUptimeMessage` ICM message. If the uptime is not sufficient to earn validation rewards, the call to `initiateValidatorRemoval` will fail. `forceInitiateValidatorRemoval` acts the same as `initiateValidatorRemoval`, but bypasses the uptime-based rewards check. Once `initiateValidatorRemoval` or `forceInitiateValidatorRemoval` is called, staking rewards cease to accrue for `StakingManagers`. - Unlike with PoA, PoS validators are not able to decrease their weight. This can lead to a scenario in which a PoS validator manager with a high proportion of the L1's weight is not able to exit the validator set due to churn restrictions. Additional validators or delegators will need to first be registered to more evenly distribute weight across the L1's validator set.
Once acknowledgement from the P-Chain has been received via a call to `StakingManager.completeValidatorRemoval`, staking rewards are disbursed and stake is returned. #### Disable a Validator Directly on the P-Chain ACP-77 also provides a method to disable a validator without interacting with the L1 directly. The P-Chain transaction [`DisableL1ValidatorTx`](https://github.com/avalanche-foundation/ACPs/tree/main/ACPs/77-reinventing-subnets#disablel1validatortx) disables the validator on the P-Chain. The disabled validator's weight will still count towards the L1's total weight. Disabled L1 validators can re-activate at any time by increasing their balance with an `IncreaseBalanceTx`. Anyone can issue an `IncreaseBalanceTx` for any validator on the P-Chain. A disabled validator can only be completely and permanently removed from the validator set by a call to `initiateValidatorRemoval`. ### (PoS only) Register a Delegator `StakingManager` supports delegation to an actively staked validator as a way for users to earn staking rewards without having to validate the chain. Delegators pay a configurable percentage fee on any earned staking rewards to the host validator. A delegator may be registered by calling `initiateDelegatorRegistration` and providing an amount to stake. The delegator will be registered as long as churn restrictions are not violated. The delegator is reflected on the P-Chain by adjusting the validator's registered weight via a [`SetL1ValidatorWeightTx`](https://github.com/avalanche-foundation/ACPs/tree/main/ACPs/77-reinventing-subnets#setl1validatorweighttx). The weight change acknowledgement is delivered to the `StakingManager` via an [`L1ValidatorWeightMessage`](https://github.com/avalanche-foundation/ACPs/tree/main/ACPs/77-reinventing-subnets#l1validatorweightmessage), which is provided by calling `completeDelegatorRegistration`. > [!NOTE] > The P-Chain is only willing to sign an `L1ValidatorWeightMessage` for a validator in the L1's validator set.
Once a validator exit has been initiated (via a call to `initiateValidatorRemoval`), the `StakingManager` must assume that the validator has been removed from the L1's validator set on the P-Chain, and therefore that the P-Chain will not sign any further weight updates. The contracts prohibit _initiating_ adding or removing a delegator in between calls to `initiateValidatorRemoval` and `completeValidatorRemoval`. However, if the `L1ValidatorWeightMessage` pertaining to an already initiated delegator action is constructed _before_ the validator is removed on the P-Chain, then the delegator action may be completed. Otherwise, `completeValidatorRemoval` must be called before completing the delegator action. ### (PoS only) Remove a Delegator Delegator removal may be initiated by calling `initiateDelegatorRemoval`, as long as churn restrictions are not violated. Similar to `initiateValidatorRemoval`, an uptime proof may be provided to be used to determine delegator rewards eligibility. If no proof is provided, the latest known uptime will be used (see [(PoS only) Submit an Uptime Proof](#pos-only-submit-an-uptime-proof)). The validator's weight is updated on the P-Chain by the same mechanism used to register a delegator. The `L1ValidatorWeightMessage` from the P-Chain is delivered to the `StakingManager` in the call to `completeDelegatorRemoval`. Either the delegator owner or the validator owner may initiate removing a delegator. This is to prevent the validator from being unable to remove itself due to churn limitations if it has too high a proportion of the L1's total weight due to delegator additions. The validator owner may only remove delegators after the minimum stake duration has elapsed. ### (PoS only) Submit an Uptime Proof The [rewards calculator](https://github.com/ava-labs/icm-contracts/blob/main/contracts/validator-manager/interfaces/IRewardCalculator.sol) is a function of uptime seconds since the validator's start time.
In addition to doing so in the calls to `initiateValidatorRemoval` and `initiateDelegatorRemoval` as described above, uptime proofs may also be supplied by calling `submitUptimeProof`. Unlike `initiateValidatorRemoval` and `initiateDelegatorRemoval`, `submitUptimeProof` may be called by anyone, decreasing the likelihood of a validation or delegation not being able to claim rewards that it deserves based on its actual uptime. ### (PoS only) Collect Staking Rewards #### Validation Rewards Validation rewards are distributed in the call to `completeValidatorRemoval` on the `StakingManager`. #### Delegation Rewards Delegation rewards are distributed in the call to `completeDelegatorRemoval` on the `StakingManager`. #### Delegation Fees Delegation fees owed to validators are _not_ distributed when the validation ends, so as to bound the amount of gas consumed in the call to `completeValidatorRemoval`. Instead, `claimDelegationFees` on the `StakingManager` may be called after the validation is completed. # How to Stake (/docs/primary-network/validate/how-to-stake) --- title: How to Stake description: Learn how to stake on Avalanche. --- Staking Parameters on Avalanche[​](#staking-parameters-on-avalanche "Direct link to heading") --------------------------------------------------------------------------------------------- When a validator is done validating the [Primary Network](http://support.avalabs.org/en/articles/4135650-what-is-the-primary-network), it receives back the AVAX tokens it staked. It may also receive a reward for helping to secure the network. A validator only receives a [validation reward](http://support.avalabs.org/en/articles/4587396-what-are-validator-staking-rewards) if it is sufficiently responsive and correct during the time it validates. Read the [Avalanche token white paper](https://www.avalabs.org/whitepapers) to learn more about AVAX and the mechanics of staking.
Staking rewards are sent to your wallet address at the end of the staking term **as long as all of these parameters are met**. ### Mainnet[​](#mainnet "Direct link to heading") - The minimum amount that a validator must stake is 2,000 AVAX - The minimum amount that a delegator must delegate is 25 AVAX - The minimum amount of time one can stake funds for validation is 2 weeks - The maximum amount of time one can stake funds for validation is 1 year - The minimum amount of time one can stake funds for delegation is 2 weeks - The maximum amount of time one can stake funds for delegation is 1 year - The minimum delegation fee rate is 2% - The maximum weight of a validator (their own stake + stake delegated to them) is the minimum of 3 million AVAX and 5 times the amount the validator staked. For example, if you staked 2,000 AVAX to become a validator, only 8,000 AVAX in total can be delegated to your node (not per delegator). A validator will receive a staking reward if it is online and responsive for more than 80% of its validation period, as measured by a majority of validators, weighted by stake. **You should aim for your validator to be online and responsive 100% of the time.** You can call API method `info.uptime` on your node to learn its weighted uptime and what percentage of the network currently thinks your node has an uptime high enough to receive a staking reward. See [here](/docs/rpcs/other/info-rpc#infouptime). You can get another opinion on your node's uptime from Avalanche's [Validator Health dashboard](https://stats.avax.network/dashboard/validator-health-check/). If your reported uptime is not close to 100%, there may be something wrong with your node setup, which may jeopardize your staking reward. If this is the case, please see [here](#why-is-my-uptime-low) or contact us on [Discord](https://discord.gg/avax/) so we can help you find the issue.
Note that checking your validator's uptime only as measured by non-staking nodes, validators with little stake, or validators that have not been online for the full duration of your validation period can give an inaccurate view of your node's true uptime. ### Fuji Testnet[​](#fuji-testnet "Direct link to heading") On Fuji Testnet, all staking parameters are the same as those on Mainnet except the following: - The minimum amount that a validator must stake is 1 AVAX - The minimum amount that a delegator must delegate is 1 AVAX - The minimum amount of time one can stake funds for validation is 24 hours - The minimum amount of time one can stake funds for delegation is 24 hours Validators[​](#validators "Direct link to heading") --------------------------------------------------- **Validators** secure Avalanche, create new blocks, and process transactions. To achieve consensus, validators repeatedly sample each other. The probability that a given validator is sampled is proportional to its stake. When you add a node to the validator set, you specify: - Your node's ID - Your node's BLS key and BLS signature - When you want to start and stop validating - How many AVAX you are staking - The address to send any rewards to - Your delegation fee rate (see below) The minimum amount that a validator must stake is 2,000 AVAX. Note that once you issue the transaction to add a node as a validator, there is no way to change the parameters. **You can't remove your stake early or change the stake amount, node ID, or reward address.** Please make sure you're using the correct values in the API calls below. If you're not sure, ask for help on [Discord](https://discord.gg/avax/). If you want to add more tokens to your own validator, you can delegate the tokens to this node - but you cannot increase the base validation amount (so delegating to yourself counts against your delegation cap).
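The delegation cap described above can be sketched as follows. This is an illustrative snippet, not part of any Avalanche API; the function names are made up for the example:

```typescript
// Mainnet staking parameters referenced in this section.
const MAX_VALIDATOR_STAKE = 3_000_000; // AVAX
const MAX_VALIDATOR_WEIGHT_FACTOR = 5;

// Maximum total weight (own stake plus delegations) a validator may carry:
// the minimum of 3 million AVAX and 5x the validator's own stake.
function maxWeight(ownStake: number): number {
  return Math.min(ownStake * MAX_VALIDATOR_WEIGHT_FACTOR, MAX_VALIDATOR_STAKE);
}

// AVAX that can still be delegated to this validator in total,
// including any AVAX you delegate to your own node.
function maxDelegatable(ownStake: number): number {
  return maxWeight(ownStake) - ownStake;
}

// A 2,000 AVAX validator can accept at most 8,000 AVAX of delegations;
// a 1,000,000 AVAX validator is capped by MaxValidatorStake instead,
// leaving 2,000,000 AVAX of delegation room.
```

Note the two regimes: small validators are bound by the 5x factor, large ones by the 3 million AVAX ceiling.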
### Running a Validator[​](#running-a-validator "Direct link to heading") If you're running a validator, it's important that your node is well connected to ensure that you receive a reward. When you issue the transaction to add a validator, the staked tokens and transaction fee (which is 0) are deducted from the addresses you control. When you are done validating, the staked funds are returned to the addresses they came from. If you earned a reward, it is sent to the address you specified when you added yourself as a validator. #### Allow API Calls[​](#allow-api-calls "Direct link to heading") To make API calls to your node from remote machines, allow traffic on the API port (`9650` by default), and run your node with argument `--http-host=` You should disable all APIs you will not use via command-line arguments. You should configure your network to only allow access to the API port from trusted machines (for example, your personal computer). #### Why Is My Uptime Low?[​](#why-is-my-uptime-low "Direct link to heading") Every validator on Avalanche keeps track of the uptime of other validators. Every validator has a weight (that is, the amount staked on it). The more weight a validator has, the more influence it has when validators vote on whether your node should receive a staking reward. You can call API method `info.uptime` on your node to learn its weighted uptime and what percentage of the network stake currently thinks your node has an uptime high enough to receive a staking reward. You can also see the connections a node has by calling `info.peers`, as well as the uptime of each connection. **This is only one node's point of view**. Other nodes may perceive the uptime of your node differently. Just because one node perceives your uptime as being low does not mean that you will not receive staking rewards.
If your node's uptime is low, make sure you're setting config option `--public-ip=[NODE'S PUBLIC IP]` and that your node can receive incoming TCP traffic on port 9651. #### Secret Management[​](#secret-management "Direct link to heading") The only secret that you need on your validating node is its Staking Key, the TLS key that determines your node's ID. The first time you start a node, the Staking Key is created and put in `$HOME/.avalanchego/staking/staker.key`. You should back up this file (and `staker.crt`) somewhere secure. Losing your Staking Key could jeopardize your validation reward, as your node will have a new ID. You do not need to have AVAX funds on your validating node. In fact, it's best practice to **not** have a lot of funds on your node. Almost all of your funds should be in "cold" addresses whose private key is not on any computer. #### Monitoring[​](#monitoring "Direct link to heading") Follow this [tutorial](/docs/nodes/maintain/monitoring) to learn how to monitor your node's uptime, general health, etc. ### Reward Formula[​](#reward-formula "Direct link to heading") Consider a validator which stakes a $Stake$ amount of Avax for $StakingPeriod$ seconds. Assume that at the start of the staking period there is a $Supply$ amount of Avax in the Primary Network. The maximum amount of Avax is $MaximumSupply$ . 
Then at the end of its staking period, a responsive validator receives a reward calculated as follows: $$ Reward = \left(MaximumSupply - Supply \right) \times \frac{Stake}{Supply} \times \frac{Staking Period}{Minting Period} \times EffectiveConsumptionRate $$ where, $$ EffectiveConsumptionRate = $$ $$ \frac{MinConsumptionRate}{PercentDenominator} \times \left(1- \frac{Staking Period}{Minting Period}\right) + \frac{MaxConsumptionRate}{PercentDenominator} \times \frac{Staking Period}{Minting Period} $$ Note that $StakingPeriod$ is the staker's entire staking period, not just the staker's uptime, that is, the aggregated time during which the staker has been responsive. The uptime comes into play only to decide whether a staker should be rewarded; to calculate the actual reward, only the staking period duration is taken into account. $EffectiveConsumptionRate$ is a linear combination of $MinConsumptionRate$ and $MaxConsumptionRate$. $MinConsumptionRate$ and $MaxConsumptionRate$ bound $EffectiveConsumptionRate$ because $$ MinConsumptionRate \leq EffectiveConsumptionRate \leq MaxConsumptionRate $$ The larger $StakingPeriod$ is, the closer $EffectiveConsumptionRate$ is to $MaxConsumptionRate$. A staker achieves the maximum reward for its stake if $StakingPeriod$ = $Minting Period$. The reward is: $$ Max Reward = \left(MaximumSupply - Supply \right) \times \frac{Stake}{Supply} \times \frac{MaxConsumptionRate}{PercentDenominator} $$ Delegators[​](#delegators "Direct link to heading") --------------------------------------------------- A delegator is a token holder who wants to participate in staking but chooses to trust an existing validating node through delegation.
When you delegate stake to a validator, you specify: - The ID of the node you're delegating to - When you want to start/stop delegating stake (must be while the validator is validating) - How many AVAX you are staking - The address to send any rewards to The minimum amount that a delegator must delegate is 25 AVAX. Note that once you issue the transaction to delegate your stake, there is no way to change the parameters. **You can't remove your stake early or change the stake amount, node ID, or reward address.** If you're not sure, ask for help on [Discord](https://discord.gg/avax/). ### Delegator Rewards[​](#delegator-rewards "Direct link to heading") If the validator that you delegate tokens to is sufficiently correct and responsive, you will receive a reward when you are done delegating. Delegators are rewarded according to the same function as validators. However, the validator that you delegate to keeps a portion of your reward, specified by the validator's delegation fee rate. When you issue the transaction to delegate tokens, the staked tokens and transaction fee are deducted from the addresses you control. When you are done delegating, the staked tokens are returned to your address. If you earned a reward, it is sent to the address you specified when you delegated tokens. Rewards are sent to delegators right after the delegation ends with the return of staked tokens, and before the validation period of the node they're delegating to is complete. FAQ[​](#faq "Direct link to heading") ------------------------------------- ### Is There a Tool to Check the Health of a Validator?[​](#is-there-a-tool-to-check-the-health-of-a-validator "Direct link to heading") Yes, just enter your node's ID in the Avalanche Stats [Validator Health Dashboard](https://stats.avax.network/dashboard/validator-health-check/?nodeid=NodeID-Jp4dLMTHd6huttS1jZhqNnBN9ZMNmTmWC).
### How Is It Determined Whether a Validator Receives a Staking Reward?[​](#how-is-it-determined-whether-a-validator-receives-a-staking-reward "Direct link to heading") When a node leaves the validator set, the validators vote on whether the leaving node should receive a staking reward or not. If a validator calculates that the leaving node was responsive for more than the required uptime (currently 80%), the validator will vote for the leaving node to receive a staking reward. Otherwise, the validator will vote that the leaving node should not receive a staking reward. The result of this vote, which is weighted by stake, determines whether the leaving node receives a reward or not. Each validator only votes "yes" or "no." It does not share its data such as the leaving node's uptime. Each validation period is considered separately. That is, suppose a node joins the validator set, and then leaves. Then it joins and leaves again. The node's uptime during its first period in the validator set does not affect the uptime calculation in the second period, hence, has no impact on whether the node receives a staking reward for its second period in the validator set. ### How Are Delegation Fees Distributed To Validators?[​](#how-are-delegation-fees-distributed-to-validators "Direct link to heading") If a validator is online for 80% of a delegation period, they receive a % of the reward (the fee) earned by the delegator. The P-Chain used to distribute this fee as a separate UTXO per delegation period. After the [Cortina Activation](https://medium.com/avalancheavax/cortina-x-chain-linearization-a1d9305553f6), instead of sending a fee UTXO for each successful delegation period, fees are now batched during a node's entire validation period and are distributed when it is unstaked. 
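The stake-weighted reward vote described above can be sketched as follows. This is a simplification assuming a simple-majority threshold; in the real protocol each validator's yes/no vote is exchanged over the network rather than computed in one place:

```typescript
const UPTIME_REQUIREMENT = 0.8; // 80%

interface Voter {
  stake: number; // AVAX staked by this validator
  observedUptime: number; // fraction of the period the leaving node appeared responsive
}

// Each validator votes "yes" only if its own measurement of the leaving
// node's uptime meets the requirement; votes are then weighted by stake.
function receivesReward(voters: Voter[]): boolean {
  const totalStake = voters.reduce((sum, v) => sum + v.stake, 0);
  const yesStake = voters
    .filter((v) => v.observedUptime >= UPTIME_REQUIREMENT)
    .reduce((sum, v) => sum + v.stake, 0);
  return yesStake > totalStake / 2;
}
```

Only the yes/no votes are aggregated; each validator's individual uptime measurement never leaves that validator.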
### Error: Couldn't Issue TX: Validator Would Be Over Delegated[​](#error-couldnt-issue-tx-validator-would-be-over-delegated "Direct link to heading") This error occurs whenever the delegator cannot delegate to the named validator. This can be caused by any of the following: - The delegator `startTime` is before the validator `startTime` - The delegator `endTime` is after the validator `endTime` - The delegator weight would result in the validator total weight exceeding its maximum weight # Turn Node Into Validator (/docs/primary-network/validate/node-validator) --- title: Turn Node Into Validator description: This tutorial will show you how to add a node to the validator set of the Primary Network on Avalanche. --- ## Introduction The [Primary Network](/docs/primary-network) is inherent to the Avalanche platform and validates Avalanche's built-in blockchains. In this tutorial, we'll add a node to the Primary Network on Avalanche using one of three methods: the [Core web](https://core.app) interface, the [platform-cli](/docs/tooling/platform-cli) command-line tool, or the [Avalanche SDK](/docs/tooling/avalanche-sdk) programmatically. You can also use the interactive [Builder Console staking tool](https://build.avax.network/console/primary-network/stake), which provides a guided web interface for staking operations. The P-Chain manages metadata on Avalanche. This includes tracking which nodes are in which Avalanche L1s, which blockchains exist, and which Avalanche L1s are validating which blockchains. To add a validator, we'll issue [transactions](http://support.avalabs.org/en/articles/4587384-what-is-a-transaction) to the P-Chain. Note that once you issue the transaction to add a node as a validator, there is no way to change the parameters.
**You can't remove your stake early or change the stake amount, node ID, or reward address.** Please make sure you're using the correct values in the API calls below. If you're not sure, feel free to join our [Discord](https://chat.avalabs.org/) to ask questions. ## Requirements You've completed [Run an Avalanche Node](/docs/nodes/run-a-node/from-source) and are familiar with [Avalanche's architecture](/docs/primary-network). In this tutorial, we use [AvalancheJS](/docs/tooling/avalanche-sdk) and [Avalanche's Postman collection](/docs/tooling/avalanche-postman) to help us make API calls. To ensure your node is well-connected, make sure that it can receive and send TCP traffic on the staking port (`9651` by default) and that it has a public IP address. (Setting `--public-ip=[YOUR NODE'S PUBLIC IP HERE]` when executing the AvalancheGo binary is optional; by default, the node will attempt NAT traversal to get its IP from its router.) Failing to do either of these may jeopardize your staking reward. ## Add a Validator with Core extension First, we show you how to add your node as a validator by using [Core web](https://core.app).
### Retrieve the Node ID, the BLS signature and the BLS key Get this info by calling [`info.getNodeID`](/docs/rpcs/other/info-rpc#infogetnodeid): ```bash curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"info.getNodeID" }' -H 'content-type:application/json' 127.0.0.1:9650/ext/info ``` The response has your node's ID, the BLS key (public key) and the BLS signature (proof of possession): ```json { "jsonrpc": "2.0", "result": { "nodeID": "NodeID-5mb46qkSBj81k9g9e4VFjGGSbaaSLFRzD", "nodePOP": { "publicKey": "0x8f95423f7142d00a48e1014a3de8d28907d420dc33b3052a6dee03a3f2941a393c2351e354704ca66a3fc29870282e15", "proofOfPossession": "0x86a3ab4c45cfe31cae34c1d06f212434ac71b1be6cfe046c80c162e057614a94a5bc9f1ded1a7029deb0ba4ca7c9b71411e293438691be79c2dbf19d1ca7c3eadb9c756246fc5de5b7b89511c7d7302ae051d9e03d7991138299b5ed6a570a98" } }, "id": 1 } ``` ### Add as a Validator Connect [Core extension](https://core.app) to [Core web](https://core.app), and go to the 'Staking' tab. Here, choose 'Validate' from the menu. Fill out the staking parameters. They are explained in more detail in [this doc](/docs/primary-network/validate/how-to-stake). When you've filled in all the staking parameters and double-checked them, click `Submit Validation`. Make sure the staking period is at least 2 weeks, the delegation fee rate is at least 2%, and you're staking at least 2,000 AVAX on Mainnet (1 AVAX on Fuji Testnet). A full guide about this can be found [here](https://support.avax.network/en/articles/8117267-core-web-how-do-i-validate-in-core-stake). You should see a success message, and your balance should be updated. Go back to the `Stake` tab, and you'll see an overview of your validation, with information like the amount staked, staking time, and more. ![Staking Overview](/images/node-validator.png) Calling [`platform.getPendingValidators`](/docs/rpcs/p-chain#platformgetpendingvalidators) verifies that your transaction was accepted.
Note that this API call should be made before your node's validation start time; otherwise, the response will not include your node's ID, as it is no longer pending. You can also call [`platform.getCurrentValidators`](/docs/rpcs/p-chain#platformgetcurrentvalidators) to check that your node's ID is included in the response. That's it! ## Add a Validator with platform-cli [platform-cli](/docs/tooling/platform-cli) is a lightweight CLI tool for P-Chain operations that provides the simplest command-line workflow for staking. ### Install platform-cli ```bash curl -sSfL https://build.avax.network/install/platform-cli | sh ``` For full installation details, see the [platform-cli documentation](/docs/tooling/platform-cli). ### Retrieve Node ID and BLS Credentials You can use the same `info.getNodeID` API call shown in the Core extension section above. Alternatively, platform-cli can fetch BLS credentials directly from a running node: ```bash platform node info --ip <node-ip>:9650 ``` See the [platform-cli command reference](/docs/tooling/platform-cli/command-reference) for all available node commands. You can also look up the full response format in the [Info RPC docs](/docs/rpcs/other/info-rpc#infogetnodeid). ### Generate or Import a Key Generate a new key: ```bash platform keys generate --name mykey ``` Or import an existing private key: ```bash platform keys import --name mykey --private-key <private-key> ``` You can also use a **Ledger hardware wallet** instead of a stored key. Pass `--ledger` in place of `--key-name` when running `platform validator add`.
### Add Validator **Fuji Testnet — manual BLS:** ```bash platform validator add \ --node-id <node-id> \ --stake 1 \ --duration 336h \ --delegation-fee 0.02 \ --bls-public-key <bls-public-key> \ --bls-pop <bls-pop> \ --key-name mykey \ --network fuji ``` **Fuji Testnet — auto-fetch BLS from running node:** ```bash platform validator add \ --node-id <node-id> \ --stake 1 \ --duration 336h \ --delegation-fee 0.02 \ --node-endpoint http://<node-ip>:9650 \ --key-name mykey \ --network fuji ``` **Mainnet:** ```bash platform validator add \ --node-id <node-id> \ --stake 2000 \ --duration 336h \ --delegation-fee 0.02 \ --node-endpoint http://<node-ip>:9650 \ --key-name mykey \ --network mainnet ``` **Mainnet with Ledger:** ```bash platform validator add \ --node-id <node-id> \ --stake 2000 \ --duration 336h \ --delegation-fee 0.02 \ --node-endpoint http://<node-ip>:9650 \ --ledger \ --network mainnet ``` #### Key Flags | Flag | Description | |------|-------------| | `--node-id` | Your node's NodeID (from `info.getNodeID`) | | `--stake` | Amount in AVAX (min 1 on Fuji, 2000 on Mainnet) | | `--duration` | Validation duration (min `336h` = 14 days) | | `--delegation-fee` | Percentage retained by validator (min `0.02` = 2%) | | `--bls-public-key` | BLS public key from node | | `--bls-pop` | BLS proof of possession from node | | `--node-endpoint` | Alternative to manual BLS — fetches credentials from running node | | `--reward-address` | Optional: specify a different reward address | | `--network` | `fuji`, `mainnet`, or `local` | | `--key-name` | Name of the stored key to sign the transaction with | For the full flag reference, see the [platform-cli staking docs](/docs/tooling/platform-cli/staking). ### Verify Validator Status Use the same verification steps as the Core extension section: call [`platform.getPendingValidators`](/docs/rpcs/p-chain#platformgetpendingvalidators) before the validation start time, or [`platform.getCurrentValidators`](/docs/rpcs/p-chain#platformgetcurrentvalidators) once the validation period has started.
See the [Info RPC docs](/docs/rpcs/other/info-rpc) for additional node status methods. ## Add a Validator with Avalanche SDK We can also add a node to the validator set using the [Avalanche SDK](/docs/tooling/avalanche-sdk). ### Install AvalancheJS To use AvalancheJS, you can clone the repo: ```bash git clone https://github.com/ava-labs/avalanchejs.git ``` The repository cloning method used here is HTTPS, but SSH can be used too: `git clone git@github.com:ava-labs/avalanchejs.git` You can find more about SSH and how to use it [here](https://docs.github.com/en/authentication/connecting-to-github-with-ssh/about-ssh). Alternatively, add it to an existing project: ```bash yarn add @avalabs/avalanchejs ``` For this tutorial, we will use [`ts-node`](https://www.npmjs.com/package/ts-node) to run the example scripts directly from the AvalancheJS directory. ### Fuji Workflow In this section, we will use Fuji Testnet to show how to add a node to the validator set. Open your AvalancheJS directory and select the [**`examples/p-chain`**](https://github.com/ava-labs/avalanchejs/tree/master/examples/p-chain) folder to view the source code for the example scripts. We will use the [**`validate.ts`**](https://github.com/ava-labs/avalanchejs/blob/master/examples/p-chain/validate.ts) script to add a validator. #### Add Necessary Environment Variables Locate the `.env.example` file at the root of AvalancheJS, and remove `.example` from the filename. This will now be the `.env` file for global variables. Add the private key and the P-Chain address associated with it. The API URL is already set to Fuji (`https://api.avax-test.network/`).
![env Variables](/images/validator-avalanchejs-1.png) #### Retrieve the Node ID, the BLS signature and the BLS key Get this info by calling [`info.getNodeID`](/docs/rpcs/other/info-rpc#infogetnodeid): ```bash curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"info.getNodeID" }' -H 'content-type:application/json' 127.0.0.1:9650/ext/info ``` The response has your node's ID, the BLS key (public key) and the BLS signature (proof of possession): ```json { "jsonrpc": "2.0", "result": { "nodeID": "NodeID-JXJNyJXhgXzvVGisLkrDiZvF938zJxnT5", "nodePOP": { "publicKey": "0xb982b485916c1d74e3b749e7ce49730ac0e52d28279ce4c5c989d75a43256d3012e04b1de0561276631ea6c2c8dc4429", "proofOfPossession": "0xb6cdf3927783dba3245565bd9451b0c2a39af2087fdf401956489b42461452ec7639b9082195b7181907177b1ea09a6200a0d32ebbc668d9c1e9156872633cfb7e161fbd0e75943034d28b25ec9d9cdf2edad4aaf010adf804af8f6d0d5440c5" } }, "id": 1 } ``` #### Fill in the Node ID, the BLS signature and the BLS key After retrieving this data, go to `examples/p-chain/validate.ts`. Replace the `nodeID`, `blsPublicKey` and `blsSignature` with your own node's values. ![Replaced values](/images/validator-avalanchejs-2.png) #### Settings for Validation Next we need to specify the node's validation period and delegation fee. #### Validation Period The validation period is set by default to 21 days, the start date being the date and time the transaction is issued. The start date cannot be modified. The end date can be adjusted in the code. Let's say we want the validation period to end after 50 days. You can achieve this by adding the number of desired days to `endTime.getDate()`, in this case `50`. ```ts // move ending date 50 days into the future endTime.setDate(endTime.getDate() + 50); ``` Now let's say you want the staking period to end on a specific date and time, for example May 15, 2024, at 11:20 AM. It can be achieved as shown in the code below. 
```ts const startTime = await new PVMApi().getTimestamp(); const startDate = new Date(startTime.timestamp); const start = BigInt(startDate.getTime() / 1000); // Set the end time to a specific date and time const endTime = new Date('2024-05-15T11:20:00'); // May 15, 2024, at 11:20 AM const end = BigInt(endTime.getTime() / 1000); ``` #### Delegation Fee Rate Avalanche allows for delegation of stake. This parameter is the percent fee this validator charges when others delegate stake to it. For example, if `delegationFeeRate` is `10` and someone delegates to this validator, then when the delegation period is over, 10% of the reward goes to the validator and the rest goes to the delegator, if this node meets the validation reward requirements. The delegation fee in the AvalancheJS example is set to `20` (20%). To change it, provide the desired fee percent in the `1e4 * 20` parameter passed to `newAddPermissionlessValidatorTx`. For example, to set it to `10` (10%), the updated code would look like this: ```ts const tx = newAddPermissionlessValidatorTx( context, utxos, [bech32ToBytes(P_CHAIN_ADDRESS)], nodeID, PrimaryNetworkID.toString(), start, end, BigInt(1e9), [bech32ToBytes(P_CHAIN_ADDRESS)], [bech32ToBytes(P_CHAIN_ADDRESS)], 1e4 * 10, // delegation fee, replaced 20 with 10 undefined, 1, 0n, blsPublicKey, blsSignature, ); ``` #### Stake Amount Set the amount being locked for validation when calling `newAddPermissionlessValidatorTx` by replacing `weight` with a number in the unit of nAVAX. For example, `2 AVAX` would be `2e9 nAVAX`.
```ts const tx = newAddPermissionlessValidatorTx( context, utxos, [bech32ToBytes(P_CHAIN_ADDRESS)], nodeID, PrimaryNetworkID.toString(), start, end, BigInt(2e9), // the amount to stake [bech32ToBytes(P_CHAIN_ADDRESS)], [bech32ToBytes(P_CHAIN_ADDRESS)], 1e4 * 10, undefined, 1, 0n, blsPublicKey, blsSignature, ); ``` #### Execute the Code Now that we have made all of the necessary changes to the example script, it's time to add a validator to the Fuji Network. Run the command: ```bash node --loader ts-node/esm examples/p-chain/validate.ts ``` The response: ```bash laviniatalpas@Lavinias-MacBook-Pro avalanchejs % node --loader ts-node/esm examples/p-chain/validate.ts (node:87616) ExperimentalWarning: `--experimental-loader` may be removed in the future; instead use `register()`: --import 'data:text/javascript,import { register } from "node:module"; import { pathToFileURL } from "node:url"; register("ts-node/esm", pathToFileURL("./"));' (Use `node --trace-warnings ...` to show where the warning was created) { txID: 'RVe3CFRieRbBvKXKPu24Zbt1QehdyGVT6X4tPWVBeACPX3Ab8' } ``` We can check the transaction's status by running the example script with [`platform.getTxStatus`](/docs/rpcs/p-chain#platformgettxstatus) or looking up the validator directly on the [explorer](https://subnets-test.avax.network/validators/NodeID-JXJNyJXhgXzvVGisLkrDiZvF938zJxnT5). ![Added Validator](/images/validator-avalanchejs-3.png) ### Mainnet Workflow The Fuji workflow above can be adapted to Mainnet with the following modifications: - `AVAX_PUBLIC_URL` should be `https://api.avax.network/`. - `P_CHAIN_ADDRESS` should be the Mainnet P-Chain address. - Set the correct amount to stake. - The `blsPublicKey`, `blsSignature` and `nodeID` need to be the ones for your Mainnet Node. 
# Rewards Formula (/docs/primary-network/validate/rewards-formula) --- title: Rewards Formula description: Learn about the rewards formula for the Avalanche Primary Network validator --- ## Primary Network Validator Rewards Consider a Primary Network validator which stakes a $Stake$ amount of `AVAX` for $StakingPeriod$ seconds. The potential reward is calculated **at the beginning of the staking period**. At the beginning of the staking period there is a $Supply$ amount of `AVAX` in the network. The maximum amount of `AVAX` is $MaximumSupply$. At the end of its staking period, a responsive Primary Network validator receives a reward. $$ Potential Reward = \left(MaximumSupply - Supply \right) \times \frac{Stake}{Supply} \times \frac{Staking Period}{Minting Period} \times EffectiveConsumptionRate $$ where, $$ MaximumSupply - Supply = \text{the number of AVAX tokens left to emit in the network} $$ $$ \frac{Stake}{Supply} = \text{the individual's stake as a percentage of all available AVAX tokens in the network} $$ $$ \frac{StakingPeriod}{MintingPeriod} = \text{time tokens are locked up divided by the $MintingPeriod$} $$ $$ \text{($MintingPeriod$ is one year, as configured by the network.)} $$ $$ EffectiveConsumptionRate = $$ $$ \frac{MinConsumptionRate}{PercentDenominator} \times \left(1- \frac{Staking Period}{Minting Period}\right) + \frac{MaxConsumptionRate}{PercentDenominator} \times \frac{Staking Period}{Minting Period} $$ Note that $StakingPeriod$ is the staker's entire staking period, not just the staker's uptime, that is, the aggregated time during which the staker has been responsive. The uptime comes into play only to decide whether a staker should be rewarded; to calculate the actual reward, only the staking period duration is taken into account. $EffectiveConsumptionRate$ is the rate at which the Primary Network validator is rewarded based on the chosen $StakingPeriod$.
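As a rough numeric sketch of the potential reward formula above (plain floating-point arithmetic for readability; the on-chain calculation uses integer math over nAVAX, so exact values can differ slightly), using the Mainnet parameters listed on this page:

```typescript
const PERCENT_DENOMINATOR = 1_000_000;

// Mainnet parameters from this page.
const MIN_CONSUMPTION_RATE = 0.10 * PERCENT_DENOMINATOR;
const MAX_CONSUMPTION_RATE = 0.12 * PERCENT_DENOMINATOR;
const MINTING_PERIOD = 365 * 24 * 60 * 60; // one year, in seconds

// Linear interpolation between the min and max consumption rates,
// weighted by the fraction of the minting period that is staked.
function effectiveConsumptionRate(stakingPeriod: number): number {
  const t = stakingPeriod / MINTING_PERIOD;
  return (
    (MIN_CONSUMPTION_RATE / PERCENT_DENOMINATOR) * (1 - t) +
    (MAX_CONSUMPTION_RATE / PERCENT_DENOMINATOR) * t
  );
}

// Potential reward in whole AVAX for a responsive validator.
function potentialReward(
  stake: number, // AVAX staked
  supply: number, // AVAX supply at the start of the staking period
  maximumSupply: number,
  stakingPeriod: number, // seconds
): number {
  return (
    (maximumSupply - supply) *
    (stake / supply) *
    (stakingPeriod / MINTING_PERIOD) *
    effectiveConsumptionRate(stakingPeriod)
  );
}

// Example (hypothetical supply figure): 2,000 AVAX staked for a full
// year with 450M AVAX circulating out of the 720M maximum. With
// StakingPeriod = MintingPeriod the effective rate equals
// MaxConsumptionRate (12%), so the reward is
// (720M - 450M) * (2,000 / 450M) * 1 * 0.12 = 144 AVAX.
const reward = potentialReward(2_000, 450_000_000, 720_000_000, MINTING_PERIOD);
```

Staking for half the minting period instead yields an effective rate of 11%, the midpoint between the minimum and maximum rates.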
$MinConsumptionRate$ and $MaxConsumptionRate$ bound $EffectiveConsumptionRate$:

$$
MinConsumptionRate \leq EffectiveConsumptionRate \leq MaxConsumptionRate
$$

The larger $StakingPeriod$ is, the closer $EffectiveConsumptionRate$ is to $MaxConsumptionRate$. The smaller $StakingPeriod$ is, the closer $EffectiveConsumptionRate$ is to $MinConsumptionRate$.

A staker achieves the maximum reward for its stake if $StakingPeriod$ = $MintingPeriod$. The reward is:

$$
MaxReward = \left(MaximumSupply - Supply \right) \times \frac{Stake}{Supply} \times \frac{MaxConsumptionRate}{PercentDenominator}
$$

Note that this formula is the same as the reward formula at the top of this section, because in this case $EffectiveConsumptionRate$ = $MaxConsumptionRate$.

For reference, you can find all the Primary Network parameters in [the section below](#primary-network-parameters-on-mainnet).

## Delegators Weight Checks

There are bounds on the maximum amount of delegators' stake that a validator can receive. The maximum weight $MaxWeight$ a validator $Validator$ can have is:

$$
MaxWeight = \min(Validator.Weight \times MaxValidatorWeightFactor, MaxValidatorStake)
$$

where $MaxValidatorWeightFactor$ and $MaxValidatorStake$ are the Primary Network parameters listed [below](#primary-network-parameters-on-mainnet). A delegator won't be added to a validator if the validator's total weight (its own stake plus the weight of all its delegators, including the new one) would exceed $MaxWeight$. This must hold at every point in time.

Note that setting $MaxValidatorWeightFactor$ to 1 disables delegation, since then $MaxWeight = Validator.Weight$.

## Notes on Percentages

`PercentDenominator = 1_000_000` is the denominator used to calculate percentages. It allows you to specify percentages with up to 4 decimal places. To denominate your percentage in `PercentDenominator`, just multiply it by `10_000`.
For example:

- `100%` corresponds to `100 * 10_000 = 1_000_000`
- `1%` corresponds to `1 * 10_000 = 10_000`
- `0.02%` corresponds to `0.02 * 10_000 = 200`
- `0.0007%` corresponds to `0.0007 * 10_000 = 7`

## Primary Network Parameters on Mainnet

For reference, we list below the Primary Network parameters on Mainnet:

- `AssetID = Avax`
- `InitialSupply = 240_000_000 Avax`
- `MaximumSupply = 720_000_000 Avax`
- `MinConsumptionRate = 0.10 * reward.PercentDenominator`
- `MaxConsumptionRate = 0.12 * reward.PercentDenominator`
- `MintingPeriod = 365 * 24 * time.Hour`
- `MinValidatorStake = 2_000 Avax`
- `MaxValidatorStake = 3_000_000 Avax`
- `MinStakeDuration = 2 * 7 * 24 * time.Hour`
- `MaxStakeDuration = 365 * 24 * time.Hour`
- `MinDelegationFee = 20_000`, that is, `2%`
- `MinDelegatorStake = 25 Avax`
- `MaxValidatorWeightFactor = 5`. This is a platformVM parameter rather than a genesis one, so it's shared across networks.
- `UptimeRequirement = 0.8`, that is, `80%`

### Interactive Graph

The graph below demonstrates the reward as a function of the length of time staked. The x-axis depicts $\frac{StakingPeriod}{MintingPeriod}$ as a percentage, while the y-axis depicts $Reward$ as a percentage of $MaximumSupply - Supply$, the amount of tokens left to be emitted.

Graph variables correspond to those defined above:

- `h` (high) = $MaxConsumptionRate$
- `l` (low) = $MinConsumptionRate$
- `s` = $\frac{Stake}{Supply}$

# AVAX Staking for Professionals (/docs/primary-network/validate/staking-for-finance-professionals)

---
title: AVAX Staking for Professionals
description: A comprehensive guide to Avalanche staking for financial professionals who need to understand the mechanics, risks, and operational considerations without deep technical knowledge.
---

# AVAX Staking for Professionals

A comprehensive guide to Avalanche staking designed for financial professionals who need to understand the mechanics, risks, and operational considerations of staking without deep technical knowledge.

## Executive Summary

Staking on Avalanche allows token holders to earn rewards by participating in network security. Unlike traditional investments, staked tokens remain under your custody but are time-locked for a predetermined period (2 weeks to 1 year). At the end of the staking period, your original principal is returned along with earned rewards.

**Before staking, understand these key risks:**

- **No Slashing** - Avalanche does NOT have slashing. Your staked principal is never at risk of being taken by the protocol or validators, regardless of validator performance. The worst-case scenario is earning zero rewards, not losing your principal.
- **Assets Are Locked** - Once assets are staked, they must remain staked until the end of the staking period. You set the maturity date, but your assets are completely illiquid until that date.
- **Irreversible Transaction** - Once a staking transaction is confirmed on the P-Chain, it cannot be changed. There is NO mechanism for early withdrawal or changing transaction settings. Plan your liquidity accordingly and double-check all inputs.

---

## Understanding Staking

Staking is the process of locking up cryptocurrency holdings to support a blockchain network's security and validate transactions. Stakers earn rewards (similar to interest) for helping secure the network.

On Avalanche, there are two types of staking:

- **Validation Staking** - Running node infrastructure and staking to become a validator
- **Delegation Staking** - Staking assets to someone else's existing validator

---

## Validator vs. Delegator Comparison

| Aspect | Validator | Delegator |
|--------|-----------|-----------|
| **Role** | Operates node infrastructure and stakes to make the node a validator | Stakes assets to an existing validator without running infrastructure |
| **Category** | Advanced | User-Friendly |
| **Minimum Capital** | 2,000 AVAX | 25 AVAX |
| **Maximum Stake** | 3,000,000 AVAX | Dependent on validator's available capacity |
| **Hardware Required** | Yes (8-core CPU, 16GB RAM, 1TB SSD) | No |
| **Technical Knowledge** | High (Linux, networking, DevOps) | None required |
| **Operational Burden** | Monitoring and maintenance of infrastructure | Zero operational burden |
| **Providers** | Can contract node providers to manage infrastructure externally | Select from public marketplace through most staking apps |
| **Risk Profile** | Operational and reputational risk | Minimal (validator selection risk only) |
| **Revenue Stream** | Own validator rewards + delegation fees from delegators | Own delegation rewards minus delegation fee |
| **Operational Costs** | Infrastructure cost of maintaining nodes | N/A (validator's responsibility) |
| **Fees** | Trivial transaction fee at onset | Transaction fee + delegation fee to validator |
| **Uptime Requirements** | Must maintain >80% uptime for rewards | N/A (validator's responsibility) |
| **Liquidity** | Locked during staking period | Locked during staking period |

---

## Preparation: Set Up Your Wallet and Assets

Before staking (for either validating or delegating), you need to prepare the following. At every stage, you retain ownership of your private keys. The validator never has access to your funds for either form of staking. The time-lock is enforced by the Avalanche protocol, not by any third party.

**Note:** If you are using a third-party intermediary, you're subject to their restrictions and functions.
### Asset Requirements

| Aspect | Applies To | Details |
|--------|------------|---------|
| **Asset Location** | Validators and Delegators | AVAX tokens should reside in your P-Chain wallet address |
| **Ownership** | Validators and Delegators | You hold the private keys; you have full control of your assets |
| **Liquidity** | Validators and Delegators | Fully liquid. Time-vested assets are eligible to be staked |

### Required Addresses

All addresses below are P-Chain addresses:

| Address | Staking Type | Purpose | Details |
|---------|--------------|---------|---------|
| **Validator Principal Address** | Validators Only | Source of funds to be staked | Chosen by validator. In most setups, this is where principal returns. |
| **Validator Rewards Address** | Validators Only | Where validator rewards are sent after staking ends | Chosen by validator |
| **Delegation Fee Address** | Validators Only | Where fees from delegators are sent | Chosen by validator |
| **Delegator Principal Address** | Delegators Only | Source of funds to be staked | Chosen by delegator. In most setups, this is where principal returns. |
| **Delegator Rewards Address** | Delegators Only | Where delegator rewards are sent after staking ends | Chosen by delegator |

### P-Chain vs. C-Chain

Staking occurs on the **P-Chain** (Platform Chain), not the C-Chain. If your AVAX is on the C-Chain (used for DeFi/smart contracts), you must first transfer it to the P-Chain.

### Cross-Chain Transfer: C-Chain → P-Chain

If your AVAX is on the C-Chain (common for exchange withdrawals and DeFi), transfer it to the P-Chain before staking.

**Fees:** Each transaction costs approximately 0.001 AVAX. Total transfer cost: ~0.002 AVAX.
| Step | Transaction Type | Chain | What It Does |
|------|------------------|-------|--------------|
| Export | `ExportTx` | C-Chain | Assets move out of C-Chain |
| Import | `ImportTx` | P-Chain | Assets move into P-Chain |

**Wallets Supporting Cross-Chain:** [Core Wallet](https://core.app/) and most Avalanche-compatible wallets provide a single "Cross-Chain Transfer" button that executes both transactions automatically.

---

## Steps for Validating

You should have a node selected and prepared before creating a staking transaction—either one you're operating yourself or one provided by a contracted node provider.

### Validation Transaction Parameters

Create a staking transaction by providing the following:

| Parameter | Description | Example |
|-----------|-------------|---------|
| **Node ID** | Unique identifier of the validator node | `NodeID-A1B2C3D4E5F6...` |
| **Node BLS Public Key** | 48-byte hex representation used to create verifiable signatures | `0x87772ac7668d78d...` |
| **Node BLS Proof of Possession** | 96-byte hex signature proving control of the private key | `0x969d9ffbe8d00ed83b...` |
| **Stake Amount** | Amount of AVAX to stake (minimum 2,000 AVAX) | `2,500 AVAX` |
| **Delegation Fee %** | Fee charged to delegators (must be 2%–100%) | `2.0%` |
| **Start Time** | When staking begins (usually immediately upon transaction) | `2026-02-01 00:00:00 UTC` |
| **End Time** | When staking ends (2 weeks to 1 year from start) | `2026-08-01 00:00:00 UTC` |
| **Stake Return Address** | Where original principal returns | `P-avax1abc123...` |
| **Validator Rewards Address** | Where validator rewards are sent | `P-avax1abc123...` |
| **Delegation Fee Address** | Where delegation fees are sent | `P-avax1abc123...` |

### After Submitting the Transaction

- Once the transaction completes, the node is considered **active** until the maturity date
- The node must maintain **greater than 80% uptime** to receive rewards
- Once active, others may delegate to the node
- Delegators can stake to your validator up until the last two weeks of your validation period

### Validator Delegation Capacity

Validators can accept delegations up to a calculated capacity:

```
Validator Delegation Capacity =
    min(5 × Validator Stake Amount, 3,000,000 AVAX)
    - Validator Stake Amount
    - Active Delegation Stake
```

---

## Steps for Delegating

You can only delegate to a node that already has an active validation stake. Delegation is limited by the validator you select:

- **Maximum delegation amount** depends on validator capacity
- **Delegation period** must end before the validator's staking period ends

When you delegate, your rewards depend on the validator's uptime. If it falls below 80% during the staking period, you risk receiving zero rewards. Choose a validator you're familiar with or one that has trustworthy indicators such as healthy uptime and a history of successful validation periods.

### Delegation Transaction Parameters

| Parameter | Description | Example |
|-----------|-------------|---------|
| **Node ID** | Unique identifier of the validator you're delegating to | `NodeID-A1B2C3D4E5F6...` |
| **Stake Amount** | Amount to delegate (min 25 AVAX, max set by validator capacity) | `30 AVAX` |
| **Start Time** | When staking begins (usually immediately) | `2026-02-01 00:00:00 UTC` |
| **End Time** | When staking ends (must be before validator's end time) | `2026-08-01 00:00:00 UTC` |
| **Stake Return Address** | Where original principal returns | `P-avax1abc123...` |
| **Delegator Rewards Address** | Where rewards are sent (minus delegation fee) | `P-avax1abc123...` |

---

## Monitoring Your Stake

You can monitor active or historical stakes through your wallet app (such as [Core Wallet](https://core.app/)).
Public explorer tools also provide monitoring:

- [Example Staking Transaction](https://subnets.avax.network/p-chain/tx/to2MDTr7HBkmex6q1QiCHHZa4oZaJjmp36vJjdLjQLpPQXRF2)
- [Example Delegation Transaction](https://subnets.avax.network/p-chain/tx/2DF6kRMCtpeLmx7TGaoj9PbQTWwU775c7ULU9UUuzbeizGm5h6)
- [Example Active Validator Node](https://subnets.avax.network/validators/NodeID-38NGnT4q4MXLv9vBGw72vwBgyX2Wf35iu)

---

## Rewards Distribution

Rewards are **NOT** distributed incrementally during the staking period. Instead:

- Protocol calculates potential rewards based on stake amount and duration
- Actual reward is determined **ONLY** at the end of the period
- Reward depends on validator meeting 80% uptime threshold
- No "accrued but not yet received" rewards exist during the period

For the detailed reward calculation formula, see [Rewards Formula](/docs/primary-network/validate/rewards-formula).

Rewards are automatically distributed at the end of the staking period, and principal is sent to the predesignated address.

### Validator Rewards

| Component | Amount | Destination | When |
|-----------|--------|-------------|------|
| **Gross Validator Rewards** | Calculated by protocol | Validator Rewards Address | End of validator's staking period |
| **Delegation Fee Rewards** | Gross Delegation Rewards × Delegation Fee Rate | Delegation Fee Address | Batched at end of validator's staking period |
| **Net Validator Revenue** | Gross Validator Rewards + Delegation Fee Rewards | — | — |

Validator principal returns to the original principal address immediately at the end of the staking period.
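The revenue composition in the table above is simple arithmetic. A minimal sketch with illustrative names, amounts in AVAX:

```typescript
// Sketch: a validator's net revenue is its own rewards plus the delegation
// fee taken from its delegators' gross rewards. Names are illustrative.
function validatorNetRevenue(
  grossValidatorRewards: number,
  grossDelegationRewards: number,
  delegationFeeRate: number, // e.g. 0.02 for a 2% fee
): number {
  const delegationFeeRewards = grossDelegationRewards * delegationFeeRate;
  return grossValidatorRewards + delegationFeeRewards;
}
```

For example, a validator earning 100 AVAX of its own rewards whose delegators collectively earned 50 AVAX at a 2% fee nets 101 AVAX.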
### Delegator Rewards

| Component | Amount | Destination | When |
|-----------|--------|-------------|------|
| **Gross Delegator Rewards** | Calculated by protocol | — | — |
| **Delegation Fee** | Gross Rewards × Delegation Fee Rate | Validator's Fee Address | End of validator's staking period |
| **Net Delegator Revenue** | Gross Rewards − Delegation Fee | Delegator Rewards Address | End of **delegator's** staking period |

Delegator principal returns to the original principal address immediately at the end of the **delegator's** staking period (not the validator's).

---

## Technical Implementation

For detailed technical implementation support, refer to the [How to Stake](/docs/primary-network/validate/how-to-stake) guide.

---

## Coming Soon: Continuous Staking (ACP-236)

[ACP-236: Continuous Staking](/docs/acps/236-continuous-staking) is currently in **Proposed** status. The features below will require a network upgrade to implement. This section is provided for planning purposes.

### What Is Continuous Staking?

ACP-236 proposes a mechanism allowing validators to remain staked indefinitely without manually resubmitting staking transactions at each period's end. Instead of committing to a fixed end time, validators would specify a **cycle duration** and an **auto-renew policy**.
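Under the proposal, the restaked share of cycle rewards is expressed in millionths, the same `PercentDenominator` convention used elsewhere on the P-Chain. A sketch of the split with illustrative names; since the ACP is still in Proposed status, this reflects the proposal, not a shipped API:

```typescript
const MILLION = 1_000_000;

// Split a cycle's rewards per an auto-renew shares value expressed in
// millionths, e.g. 300_000 restakes 30% and withdraws the remaining 70%.
// The function and its handling of the field are illustrative only.
function splitCycleRewards(rewards: number, autoRenewRewardsShares: number) {
  const restaked = (rewards * autoRenewRewardsShares) / MILLION;
  return { restaked, withdrawn: rewards - restaked };
}
```

So a 1,000 AVAX cycle reward with a shares value of `300,000` restakes 300 AVAX and withdraws 700 AVAX.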
### Key Changes from Current System

| Aspect | Current System | With ACP-236 |
|--------|----------------|--------------|
| **Staking Duration** | Fixed end time; must re-stake manually | Automatic renewal at cycle end |
| **Reward Handling** | All rewards returned at period end | Configurable: auto-compound or withdraw |
| **Exit Process** | Delegation simply ends | Submit `SetAutoRenewPolicyTx` to signal exit |
| **Transaction Burden** | New transaction every staking period | One transaction, runs indefinitely |
| **Uptime Tracking** | Measured over entire period | Reset each cycle |

### New Transaction Types

ACP-236 introduces three new P-Chain transactions:

| Transaction | Purpose |
|-------------|---------|
| `AddContinuousValidatorTx` | Create a continuous validator with cycle duration and auto-renew policy |
| `SetAutoRenewPolicyTx` | Modify the auto-renew policy or signal exit at cycle end |
| `RewardContinuousValidatorTx` | Issued by block builders to process cycle rewards |

### Auto-Renew Rewards Policy

The `AutoRenewRewardsShares` field (expressed in millionths) controls what happens to rewards at each cycle end:

| Value | Behavior |
|-------|----------|
| `0` | Restake principal only; withdraw 100% of rewards |
| `300,000` | Restake 30% of rewards; withdraw 70% |
| `1,000,000` | Restake 100% of rewards (full compounding) |
| `MaxUint64` | Signal to exit at current cycle end |

### Benefits for Financial Professionals

- **Reduced Operational Burden** — Fewer transactions to sign and manage
- **Automatic Compounding** — Rewards can auto-restake each cycle
- **Enhanced Security** — Less frequent key signing reduces exposure risk
- **Flexible Exit** — Can signal exit mid-cycle; takes effect at cycle end
- **Simpler Treasury Management** — Set-and-forget staking operations

### Impact on Delegators

ACP-236 applies to **validators only**. Delegators cannot delegate continuously because there is no guarantee a validator will continue beyond their current cycle.
Delegation constraints remain unchanged: your delegation period must fit within the validator's current cycle.

# Validate vs. Delegate (/docs/primary-network/validate/validate-vs-delegate)

---
title: Validate vs. Delegate
description: Understand the difference between validation and delegation.
---

## Validation

Validation in the context of staking refers to the act of running a node on the blockchain network to validate transactions and secure the network.

- **Stake Requirement**: To become a validator on the Avalanche network, one must stake a minimum of 2,000 AVAX tokens on the Mainnet (1 AVAX on the Fuji Testnet).
- **Process**: Validators participate in achieving consensus by repeatedly sampling other validators. The probability of being sampled is proportional to the validator's stake, meaning the more tokens a validator stakes, the more influential they are in the consensus process.
- **Rewards**: Validators are eligible to receive rewards for their efforts in securing the network. To receive rewards, a validator must be online and responsive for more than 80% of their validation period.

## Delegation

Delegation allows token holders who do not wish to run their own validator node to still participate in staking by "delegating" their tokens to an existing validator node.

- **Stake Requirement**: To delegate on the Avalanche network, a minimum of 25 AVAX tokens is required on the Mainnet (1 AVAX on the Fuji Testnet).
- **Process**: Delegators choose a specific validator node to delegate their tokens to, trusting that the validator will behave correctly and help secure the network on their behalf.
- **Rewards**: Delegators are also eligible to receive rewards for their stake.
The validator they delegate to shares a portion of the reward with them, according to the validator's delegation fee rate.

## Key Differences

- **Responsibilities**: Validators run a node, validate transactions, and actively participate in securing the network. Delegators, on the other hand, do not run a node themselves but entrust their tokens to a validator to participate on their behalf.
- **Stake Requirement**: Validators have a higher minimum stake requirement compared to delegators, as they take on more responsibility in the network.
- **Rewards Distribution**: Validators receive rewards directly for their validation efforts. Delegators receive rewards indirectly through the validator they delegate to, sharing a portion of the validator's reward.

In summary, validation involves actively participating in securing the network by running a node, while delegation allows token holders to participate passively by entrusting their stake to a chosen validator. Both validators and delegators can earn rewards, but validators have higher stakes and more direct involvement in the Avalanche network.

# What Is Staking? (/docs/primary-network/validate/what-is-staking)

---
title: What Is Staking?
description: Learn about staking and how it works in Avalanche.
---

Staking is the process where users lock up their tokens to support a blockchain network and, in return, receive rewards. It is an essential part of proof-of-stake (PoS) consensus mechanisms used by many blockchain networks, including Avalanche. PoS systems require participants to stake a certain amount of tokens as collateral to participate in the network and validate transactions.
## How Does Proof-of-Stake Work?

To resist [sybil attacks](https://support.avalabs.org/en/articles/4064853-what-is-a-sybil-attack), a decentralized network must require that network influence is paid for with a scarce resource. This makes it prohibitively expensive for an attacker to gain enough influence over the network to compromise its security. On Avalanche, the scarce resource is the native token, [AVAX](/docs/primary-network/avax-token). For a node to validate a blockchain on Avalanche, it must stake AVAX.

# Deep Dive into ICM (/docs/cross-chain/avalanche-warp-messaging/deep-dive)

---
title: "Deep Dive into ICM"
description: "Learn about Avalanche Warp Messaging, a cross-Avalanche L1 communication protocol on Avalanche."
edit_url: https://github.com/ava-labs/avalanchego/edit/master/vms/platformvm/warp/README.md
---

# Avalanche Interchain Messaging

Avalanche Interchain Messaging (ICM) provides a primitive for cross-chain communication on the Avalanche Network. The Avalanche P-Chain provides an index of every network's validator set on the Avalanche Network, including the BLS public key of each validator (as of the [Banff Upgrade](https://github.com/ava-labs/avalanchego/releases/v1.9.0)). ICM utilizes the weighted validator sets stored on the P-Chain to build a cross-chain communication protocol between any two networks on Avalanche.

Any Virtual Machine (VM) on Avalanche can integrate Avalanche Interchain Messaging to send and receive messages cross-chain.
## Background

This README assumes familiarity with:

- Avalanche P-Chain / [PlatformVM](https://github.com/ava-labs/avalanchego/tree/master/vms/platformvm/)
- [ProposerVM](https://github.com/ava-labs/avalanchego/tree/master/vms/proposervm/README.md)
- [BLS Multi-Signatures](https://crypto.stanford.edu/~dabo/pubs/papers/BLSmultisig.html) (basic familiarity)

## BLS Multi-Signatures with Public-Key Aggregation

Avalanche Interchain Messaging utilizes BLS multi-signatures with public-key aggregation in order to verify messages signed by another network. When a validator joins a network, the P-Chain records the validator's BLS public key and NodeID, as well as a proof of possession of the validator's BLS private key to defend against [rogue public-key attacks](https://crypto.stanford.edu/~dabo/pubs/papers/BLSmultisig.html#mjx-eqn-eqaggsame).

ICM utilizes the validator set's weights and public keys to verify that an aggregate signature has sufficient weight signing the message from the source network. BLS provides a way to aggregate signatures off chain into a single signature that can be efficiently verified on chain.
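The weight check described above boils down to comparing the signers' combined stake against a quorum threshold. A minimal sketch of that arithmetic; the names and the 67/100 threshold are assumptions for illustration, not the avalanchego implementation:

```typescript
interface Validator {
  publicKey: string; // BLS public key, hex-encoded (illustrative)
  weight: bigint;    // stake weight
}

// Returns true when the signers' combined weight meets the quorum
// threshold, expressed as a fraction (e.g. 67/100).
function hasSufficientWeight(
  signers: Validator[],
  totalWeight: bigint,
  quorumNum: bigint,
  quorumDen: bigint,
): boolean {
  const signedWeight = signers.reduce((acc, v) => acc + v.weight, 0n);
  // Cross-multiply to avoid losing precision to integer division.
  return signedWeight * quorumDen >= totalWeight * quorumNum;
}
```

With total weight 60 and a 67% threshold, signers carrying weight 50 pass the check while a single signer with weight 20 does not.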
## ICM Serialization

Unsigned Message:

```
+---------------+----------+--------------------------+
|       codecID :   uint16 |                  2 bytes |
+---------------+----------+--------------------------+
|     networkID :   uint32 |                  4 bytes |
+---------------+----------+--------------------------+
| sourceChainID : [32]byte |                 32 bytes |
+---------------+----------+--------------------------+
|       payload :   []byte |        4 + size(payload) |
+---------------+----------+--------------------------+
                           | 42 + size(payload) bytes |
                           +--------------------------+
```

- `codecID` is the codec version used to serialize the payload and is hardcoded to `0x0000`
- `networkID` is the unique ID of an Avalanche Network (Mainnet/Testnet) and provides replay protection for BLS Signers across different Avalanche Networks
- `sourceChainID` is the hash of the transaction that created the blockchain on the Avalanche P-Chain. It serves as the unique identifier for the blockchain across the Avalanche Network so that each blockchain can only sign a message with its own id.
- `payload` provides an arbitrary byte array containing the contents of the message. VMs define their own message types to include in the `payload`

BitSetSignature:

```
+-----------+----------+---------------------------+
|   type_id :   uint32 |                   4 bytes |
+-----------+----------+---------------------------+
|   signers :   []byte |          4 + len(signers) |
+-----------+----------+---------------------------+
| signature : [96]byte |                  96 bytes |
+-----------+----------+---------------------------+
                       | 104 + size(signers) bytes |
                       +---------------------------+
```

- `typeID` is the ID of this signature type, which is `0x00000000`
- `signers` encodes a bitset of which validators' signatures are included (a bitset is a byte array where each bit indicates membership of the element at that index in the set)
- `signature` is an aggregated BLS Multi-Signature of the Unsigned Message

BitSetSignatures are verified within the context of a specific P-Chain height.
At any given P-Chain height, the PlatformVM serves a canonically ordered validator set for the source network (the validator set is ordered lexicographically by the BLS public key's byte representation). The `signers` bitset encodes which validator signatures were included: a value of `1` at index `i` in the `signers` bitset indicates that a signature from the validator at index `i` in the canonical validator set was included in the aggregate signature. The bitset tells the verifier which BLS public keys should be aggregated to verify the interchain message.

Signed Message:

```
+------------------+------------------+------------------------------------------------+
| unsigned_message :  UnsignedMessage |                         size(unsigned_message) |
+------------------+------------------+------------------------------------------------+
|        signature :        Signature |                                size(signature) |
+------------------+------------------+------------------------------------------------+
                                      | size(unsigned_message) + size(signature) bytes |
                                      +------------------------------------------------+
```

## Sending an Avalanche Interchain Message

A blockchain on Avalanche sends an Avalanche Interchain Message by coming to agreement on the message that every validator should be willing to sign. As an example, the VM of a blockchain may define that once a block is accepted, the VM should be willing to sign a message including the block hash in the payload to attest to any other network that the block was accepted. The contents of the payload and how the signature is aggregated (VM-to-VM communication, off-chain relayer, etc.) are left to the VM.

Once the validator set of a blockchain is willing to sign an arbitrary message `M`, an aggregator performs the following process:

1. Gather signatures of the message `M` from `N` validators (where the `N` validators meet the required threshold of stake on the destination chain)
2. Aggregate the `N` signatures into a multi-signature
3. Look up the canonical validator set at the P-Chain height where the message will be verified
4. Encode the selection of the `N` validators included in the signature in a bitset
5. Construct the signed message from the aggregate signature, bitset, and original unsigned message

## Verifying / Receiving an Avalanche Interchain Message

Avalanche Interchain Messages are verified within the context of a specific P-Chain height included in the [ProposerVM](https://github.com/ava-labs/avalanchego/tree/master/vms/proposervm/README.md)'s header. The P-Chain height is provided as context to the underlying VM when verifying the underlying VM's blocks (implemented by the optional interface [WithVerifyContext](https://github.com/ava-labs/avalanchego/tree/master/snow/engine/snowman/block/block_context_vm.go)). To verify the message, the underlying VM utilizes this `warp` package to perform the following steps:

1. Look up the canonical validator set of the network sending the message at the P-Chain height
2. Filter the canonical validator set to only the validators claimed by the signature
3. Verify that the weight of the included validators meets the required threshold defined by the receiving VM
4. Aggregate the public keys of the claimed validators into a single aggregate public key
5. Verify the aggregate signature of the unsigned message against the aggregate public key

Once a message is verified, it is left to the VM to define the semantics of delivering a verified message.

## Design Considerations

### Processing Historical Avalanche Interchain Messages

Verifying an Avalanche Interchain Message requires a lookup of validator sets at a specific P-Chain height. The P-Chain supports these lookups by maintaining validator set diffs that can be applied in order to reconstruct the validator set of any network at any height.
As the P-Chain grows, the number of validator set diffs that need to be applied to reconstruct the validator set used to verify an Avalanche Interchain Message increases over time. Therefore, in order to support verifying historical Avalanche Interchain Messages, VMs should provide a mechanism to determine whether an Avalanche Interchain Message was treated as valid or invalid within a historical block.

When nodes bootstrap in the future, they bootstrap blocks that have already been marked as accepted by the network, so they can assume the block was verified by the validators of the network when it was first accepted. Therefore, the new bootstrapping node can assume the block was valid to determine whether an Avalanche Interchain Message should be treated as valid/invalid within the execution of that block.

Two strategies to provide that mechanism are:

- Require interchain message validity for transaction inclusion. If the transaction is included, the interchain message must have passed verification.
- Include the results of interchain message verification in the block itself. Use the results to determine which messages passed verification.

# Integration with EVM (/docs/cross-chain/avalanche-warp-messaging/evm-integration)

---
title: "Integration with EVM"
description: "Avalanche Warp Messaging provides a basic primitive for signing and verifying messages between Avalanche L1s."
edit_url: https://github.com/ava-labs/avalanchego/edit/master/graft/coreth/precompile/contracts/warp/README.md
---

# Integrating Avalanche Warp Messaging into the EVM

Avalanche Warp Messaging offers a basic primitive to enable Cross-L1 communication on the Avalanche Network. It is intended to allow communication between arbitrary Custom Virtual Machines (including, but not limited to, Subnet-EVM and Coreth).

## How does Avalanche Warp Messaging Work?
Avalanche Warp Messaging uses BLS Multi-Signatures with Public-Key Aggregation, where every Avalanche validator registers a public key alongside its NodeID on the Avalanche P-Chain.

Every node tracking an Avalanche L1 has read access to the Avalanche P-Chain. This provides weighted sets of BLS Public Keys that correspond to the validator sets of each L1 on the Avalanche Network.

Avalanche Warp Messaging provides a basic primitive for signing and verifying messages between L1s: the receiving network can verify whether an aggregation of signatures from a set of source L1 validators represents a threshold of stake large enough for the receiving network to process the message.

For more details on Avalanche Warp Messaging, see the AvalancheGo [Warp README](https://github.com/ava-labs/avalanchego/blob/master/vms/platformvm/warp/README.md).

### Flow of Sending / Receiving a Warp Message within the EVM

The Avalanche Warp Precompile enables this flow to send a message from blockchain A to blockchain B:

1. Call the Warp Precompile `sendWarpMessage` function with the arguments for the `UnsignedMessage`
2. The Warp Precompile emits an event / log containing the `UnsignedMessage` specified by the caller of `sendWarpMessage`
3. The network accepts the block containing the `UnsignedMessage` in the log, so that validators are willing to sign the message
4. An off-chain relayer queries the validators for their signatures of the message and aggregates the signatures to create a `SignedMessage`
5. The off-chain relayer encodes the `SignedMessage` as the [predicate](#predicate-encoding) in the AccessList of a transaction to deliver on blockchain B
6. The transaction is delivered on blockchain B, the signature is verified prior to executing the block, and the message is accessible via the Warp Precompile's `getVerifiedWarpMessage` during the execution of that transaction

### Warp Precompile

The Warp Precompile is broken down into three functions defined in the Solidity interface file [IWarpMessenger.sol](https://github.com/ava-labs/avalanchego/blob/master/graft/coreth/contracts/contracts/interfaces/IWarpMessenger.sol).

#### sendWarpMessage

`sendWarpMessage` is used to send a verifiable message. Calling this function results in sending a message with the following contents:

- `SourceChainID` - blockchainID of the source chain on the Avalanche P-Chain
- `SourceAddress` - the `msg.sender` that calls `sendWarpMessage`, encoded as a 32-byte value
- `Payload` - the `payload` argument specified in the call to `sendWarpMessage`, emitted as the unindexed data of the resulting log

Calling this function will issue a `SendWarpMessage` event from the Warp Precompile. Since the EVM limits the number of topics to 4, including the EventID, this message includes only the topics expected to be most useful for filtering messages emitted from the Warp Precompile. Specifically, the `payload` is not emitted as a topic because each topic must be encoded as a hash. Therefore, we opt to take advantage of each possible topic to maximize the possible filtering for emitted Warp Messages. Additionally, the `SourceChainID` is excluded because anyone parsing the chain can be expected to already know the blockchainID.
Therefore, the `SendWarpMessage` event includes the indexable attributes: - `sender` - The `messageID` of the unsigned message (sha256 of the unsigned message) The actual `message` is the entire [Avalanche Warp Unsigned Message](https://github.com/ava-labs/avalanchego/blob/master/vms/platformvm/warp/unsigned_message.go#L14) including an [AddressedCall](https://github.com/ava-labs/avalanchego/tree/master/vms/platformvm/warp/payload#addressedcall). The unsigned message is emitted as the unindexed data in the log. #### getVerifiedWarpMessage `getVerifiedWarpMessage` is used to read the contents of the delivered Avalanche Warp Message into the expected format. It returns the message along with a boolean indicating whether a message is present. To use this function, the transaction must include the signed Avalanche Warp Message encoded in the [predicate](#predicate-encoding) of the transaction. Prior to executing a block, the VM iterates through transactions and pre-verifies all predicates. If a transaction's predicate is invalid, the transaction is considered invalid to include in the block and dropped. This leads to the following advantages: 1. The EVM execution does not need to verify the Warp Message at runtime (no signature verification or external calls to the P-Chain) 2. The EVM can deterministically re-execute and re-verify blocks assuming the predicate was verified by the network (e.g., in bootstrapping) This pre-verification is performed using the ProposerVM Block header during [block verification](https://github.com/ava-labs/avalanchego/blob/master/graft/coreth/plugin/evm/wrapped_block.go) & [block building](https://github.com/ava-labs/avalanchego/blob/master/graft/coreth/miner/worker.go). #### getBlockchainID `getBlockchainID` returns the blockchainID of the blockchain that the VM is running on. This is different from the conventional Ethereum ChainID registered to [ChainList](https://chainlist.org/).
The `sourceChainID` in Avalanche refers to the txID that created the blockchain on the Avalanche P-Chain ([docs](https://build.avax.network/docs/cross-chain/avalanche-warp-messaging/deep-dive#icm-serialization)). ### Predicate Encoding Avalanche Warp Messages are encoded as a signed Avalanche [Warp Message](https://github.com/ava-labs/avalanchego/blob/master/vms/platformvm/warp/message.go) where the [UnsignedMessage](https://github.com/ava-labs/avalanchego/blob/master/vms/platformvm/warp/unsigned_message.go)'s payload includes an [AddressedPayload](https://github.com/ava-labs/avalanchego/blob/master/vms/platformvm/warp/payload/payload.go). Since the predicate is encoded into the [Transaction Access List](https://eips.ethereum.org/EIPS/eip-2930), it is packed into 32 byte hashes intended to declare storage slots that should be pre-warmed into the cache prior to transaction execution. Therefore, we use the [`predicate`](https://github.com/ava-labs/avalanchego/tree/master/vms/evm/predicate) package to encode the actual byte slice of size N into the access list. ### Performance Optimization: C-Chain to Avalanche L1 For communication between the C-Chain and an L1, as well as broader interactions between the Primary Network and Avalanche L1s, we have implemented special handling for the C-Chain. The Primary Network has a large validator set, which creates a unique challenge for Avalanche Warp messages. To reach the required stake threshold, numerous signatures would need to be collected and verifying messages from the Primary Network would be computationally costly. However, we have developed a more efficient solution. When an Avalanche L1 receives a message from a blockchain on the Primary Network, we use the validator set of the receiving L1 instead of the entire network when validating the message. Note this is NOT possible if an L1 does not validate the Primary Network, in which case the Warp precompile must be configured with `requirePrimaryNetworkSigners`. 
Sending messages from the C-Chain remains unchanged. However, when L1 XYZ receives a message from the C-Chain, it changes the semantics to the following: 1. Read the `SourceChainID` of the signed message (C-Chain) 2. Look up the `SubnetID` that validates C-Chain: Primary Network 3. Look up the validator set of L1 XYZ (instead of the Primary Network) and the registered BLS Public Keys of L1 XYZ at the P-Chain height specified by the ProposerVM header 4. Continue Warp Message verification using the validator set of L1 XYZ instead of the Primary Network This means that C-Chain to L1 communication only requires a threshold of stake on the receiving L1 to sign the message instead of a threshold of stake for the entire Primary Network. This assumes that the security of L1 XYZ already depends on the validators of L1 XYZ behaving virtuously. Therefore, requiring a threshold of stake from the receiving L1's validator set instead of the whole Primary Network does not meaningfully change the security of the receiving L1. Note: this special case is ONLY applied during Warp Message verification. The message sent by the Primary Network will still contain the Avalanche C-Chain's blockchainID as the sourceChainID and signatures will be served by querying the C-Chain directly. ## Design Considerations ### Re-Processing Historical Blocks Avalanche Warp Messaging depends on the Avalanche P-Chain state at the P-Chain height specified by the ProposerVM block header. Verifying a message requires looking up the validator set of the source L1 on the P-Chain. To support this, Avalanche Warp Messaging uses the ProposerVM header, which includes the P-Chain height it was issued at, as the canonical point to look up the source L1's validator set. This means verifying the Warp Message and therefore the state transition on a block depends on state that is external to the blockchain itself: the P-Chain.
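The height-pinned lookup just described is what keeps verification deterministic. Below is a minimal sketch of the stake-threshold check, using hypothetical names (this is not the AvalancheGo API; BLS signature verification itself is elided):

```python
from dataclasses import dataclass

@dataclass
class Validator:
    node_id: str
    weight: int

def meets_quorum(signer_ids: set, validators_at_height: list,
                 quorum_num: int = 67, quorum_den: int = 100) -> bool:
    """Check whether the signers of a Warp Message hold enough stake.

    `validators_at_height` stands for the source L1's validator set as
    looked up on the P-Chain at the height named in the ProposerVM block
    header. The quorum fraction here is an illustrative default.
    """
    total = sum(v.weight for v in validators_at_height)
    signed = sum(v.weight for v in validators_at_height
                 if v.node_id in signer_ids)
    # Valid only if signed stake meets the configured quorum fraction.
    return signed * quorum_den >= total * quorum_num
```

Because every node re-executing the block pins the lookup to the same P-Chain height, all nodes compute the same `total` and `signed` and reach the same verdict.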
The Avalanche P-Chain tracks only its current state and reverse diff layers (reversing the changes from past blocks) in order to re-calculate the validator set at a historical height. This means calculating a very old validator set that is used to verify a Warp Message in an old block may become prohibitively expensive. Therefore, we need a heuristic to ensure that the network can correctly re-process old blocks (note: re-processing old blocks is a requirement to perform bootstrapping and is used in some VMs to serve or verify historical data). As a result, we require that the block itself provides a deterministic hint which determines which Avalanche Warp Messages were considered valid/invalid during the block's execution. This ensures that we can always re-process blocks and use the hint to decide whether an Avalanche Warp Message should be treated as valid/invalid even after the P-Chain state that was used at the original execution time may no longer support fast lookups. To provide that hint, we've explored two designs: 1. Include a predicate in the transaction to ensure any referenced message is valid 2. Append the results of checking whether a Warp Message is valid/invalid to the block data itself The current implementation uses option (1). The original reason for this was that the notion of predicates for precompiles was designed with Shared Memory in mind. In the case of shared memory, there is no canonical "P-Chain height" in the block which determines whether or not Avalanche Warp Messages are valid. Instead, the VM interprets a shared memory import operation as valid as soon as the UTXO is available in shared memory. 
This means that if it were up to the block producer to staple the valid/invalid results of whether or not an attempted atomic operation should be treated as valid, a byzantine block producer could arbitrarily report that such atomic operations were invalid and cause a griefing attack to burn the gas of users that attempted to perform an import. Therefore, a transaction-specified predicate is required to implement the shared memory precompile to prevent such a griefing attack. In contrast, Avalanche Warp Messages are validated within the context of an exact P-Chain height. Therefore, if a block producer attempted to lie about the validity of such a message, the network would interpret that block as invalid. ### Guarantees Offered by Warp Precompile vs. Built on Top #### Guarantees Offered by Warp Precompile The Warp Precompile was designed with the intention of minimizing the trusted computing base for the VM as much as possible. Therefore, it makes several tradeoffs which encourage users to use protocols built ON TOP of the Warp Precompile itself as opposed to directly using the Warp Precompile. The Warp Precompile itself provides ONLY the following ability: - Emit a verifiable message from (Address A, Blockchain A) to (Address B, Blockchain B) that can be verified by the destination chain #### Explicitly Not Provided / Built on Top The Warp Precompile itself does not provide any guarantees of: - Eventual message delivery (may require re-sending on blockchain A and additional assumptions about off-chain relayers and chain progress) - Ordering of messages (requires ordering to be provided a layer above) - Replay protection (requires replay protection to be provided a layer above) # What is ICM? (/docs/cross-chain/avalanche-warp-messaging/overview) --- title: What is ICM? description: Learn about Avalanche Interchain Messaging, a protocol for cross-chain communication.
--- Avalanche Interchain Messaging (ICM) enables native cross-Avalanche L1 communication and allows [Virtual Machine (VM)](/docs/primary-network/virtual-machines) developers to implement arbitrary communication protocols between any two Avalanche L1s. ## Use Cases Use cases for ICM include but are not limited to: - Oracle Networks: Connecting an Avalanche L1 to an oracle network is a costly process. ICM makes it easy for oracle networks to broadcast their data from their origin chain to other Avalanche L1s. - Token transfers between Avalanche L1s - State Sharding between multiple Avalanche L1s ## Elements of Cross-Avalanche L1 Communication The communication consists of the following four steps: ![image showing four steps of cross-Avalanche L1 communication: Signing, Aggregation, Delivery and Verification](/images/warp1.png) ### Signing Messages on the Origin Avalanche L1 ICM is a low-level messaging protocol. Any type of data encoded in an array of bytes can be included in the message sent to another Avalanche L1. ICM uses the [BLS signature scheme](https://crypto.stanford.edu/~dabo/pubs/papers/BLSmultisig.html), which allows message recipients to verify the authenticity of these messages. Therefore, every validator on the Avalanche network holds a BLS key pair, consisting of a private key for signing messages and a public key that others can use to verify the signature. ### Signature Aggregation on the Origin Avalanche L1 If the validator set of an Avalanche L1 is very large, this would result in the Avalanche L1's validators sending many signatures between them.
One of the powerful features of BLS is the ability to aggregate many signatures of different signers into a single multi-signature. Therefore, validators of one Avalanche L1 can individually sign a message, and these signatures are then aggregated into a short multi-signature that can be quickly verified. ### Delivery of Messages to the Destination Avalanche L1 The messages do not pass through a central protocol or trusted entity, and there is no record of messages sent between Avalanche L1s on the Primary Network. This avoids a bottleneck in Avalanche L1-to-Avalanche L1 communication, and non-public Avalanche L1s can communicate privately. It is up to the Avalanche L1s and their users to determine how they want to transport data from the validators of the origin Avalanche L1 to the validators of the destination Avalanche L1 and what guarantees they want to provide for the transport. ### Verification of Messages in the Destination Avalanche L1 When an Avalanche L1 wants to process another Avalanche L1's message, it will look up both the BLS public keys and the stake of the origin Avalanche L1. The authenticity of the message can be verified using these public keys and the signature. The combined weight of the validators that must be part of the BLS multi-signature for it to be considered valid can be set according to the individual requirements of each Avalanche L1-to-Avalanche L1 communication. Avalanche L1 A may accept messages from Avalanche L1 B that are signed by at least 70% of stake. Messages from Avalanche L1 C are only accepted if they have been signed by validators that account for 90% of the stake. Since both the public keys and stake weights of all validators are recorded on the Primary Network's P-Chain, they are readily accessible to any virtual machine run by the validators.
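The aggregation and weighted verification described above can be illustrated with a toy model. Integers stand in for BLS signatures here purely to show the homomorphism that makes aggregation work; real BLS operates on elliptic-curve points and pairings, and the toy "public keys" below are simply the secret keys, which is of course insecure:

```python
import hashlib

def h(message: bytes) -> int:
    # Toy hash-to-integer; real BLS hashes the message to a curve point.
    return int.from_bytes(hashlib.sha256(message).digest(), "big")

def sign(secret_key: int, message: bytes) -> int:
    # Toy signature sk * H(m); real BLS computes sk * hash_to_curve(m).
    return secret_key * h(message)

def aggregate(signatures) -> int:
    # Aggregation collapses many signatures into one by addition.
    return sum(signatures)

def verify_aggregate(public_keys, message: bytes, agg_sig: int) -> bool:
    # Works because sum_i(sk_i * H(m)) == (sum_i sk_i) * H(m).
    return agg_sig == sum(public_keys) * h(message)

keys = [5, 7, 11]                      # toy validator secret keys
msg = b"example warp payload"
agg = aggregate(sign(k, msg) for k in keys)
```

The destination chain then weighs the set of keys behind the aggregate signature against its configured stake thresholds, using the weights recorded on the P-Chain.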
Therefore, the Avalanche L1s do not need to communicate with each other about changes in their respective sets of validators, but can simply rely on the latest information on the P-Chain. As a result, ICM introduces no additional trust assumption other than that the validators of the origin Avalanche L1 are participating honestly. ## Reference Implementation A Proof-of-Concept VM called [XSVM](https://github.com/ava-labs/xsvm) was created to demonstrate the power of ICM. XSVM enables simple ICM transfers between any two Avalanche L1s out of the box. # ICM Services Releases (/docs/cross-chain/avalanche-warp-messaging/releases) --- title: ICM Services Releases description: Track ICM Relayer and Signature Aggregator releases, version compatibility, and download binaries. --- This page is automatically generated from the [ICM Services GitHub releases](https://github.com/ava-labs/icm-services/releases). ## Current Recommended Versions | Component | Version | Released | Type | |-----------|---------|----------|------| | **ICM Relayer** | v1.7.5 | January 27, 2026 | Stable | | **Signature Aggregator** | v0.5.4 | January 27, 2026 | Stable | **Important:** ICM Services must be compatible with your AvalancheGo version. Always check the release notes for network upgrade compatibility requirements.
## Quick Installation ### ICM Relayer ```bash # Download the latest release (Linux AMD64) curl -sL -o icm-relayer.tar.gz https://github.com/ava-labs/icm-services/releases/download/icm-relayer-v1.7.5/icm-relayer_1.7.5_linux_amd64.tar.gz # Extract and install tar -xzf icm-relayer.tar.gz sudo install icm-relayer /usr/local/bin ``` ### Signature Aggregator ```bash # Download the latest release (Linux AMD64) curl -sL -o signature-aggregator.tar.gz https://github.com/ava-labs/icm-services/releases/download/signature-aggregator-v0.5.4/signature-aggregator_0.5.4_linux_amd64.tar.gz # Extract and install tar -xzf signature-aggregator.tar.gz sudo install signature-aggregator /usr/local/bin ``` ### Docker Both components are available as Docker images: ```bash # ICM Relayer docker pull avaplatform/icm-relayer:latest # Signature Aggregator docker pull avaplatform/signature-aggregator:latest ``` ## ICM Relayer Releases The ICM Relayer listens for Warp message events on source blockchains and constructs transactions to relay messages to destination blockchains. ### v1.7.5 **Released:** January 27, 2026 | [View on GitHub](https://github.com/ava-labs/icm-services/releases/tag/icm-relayer-v1.7.5) ## Overview > [!IMPORTANT] > This version is incompatible with previously used configs. All references to `vm` fields in `source-blockchain` and `destination-blockchain` config blocks must be removed This release includes a bugfix for an issue that could cause messages sent around relayer startup to be missed by both the catch-up as well as the live chain processing. It also adds new chec... 
| Platform | File | Size | |----------|------|------| | Linux (AMD64) | [icm-relayer_1.7.5_linux_amd64.tar.gz](https://github.com/ava-labs/icm-services/releases/download/icm-relayer-v1.7.5/icm-relayer_1.7.5_linux_amd64.tar.gz) | 18.7 MB | | Linux (ARM64) | [icm-relayer_1.7.5_linux_arm64.tar.gz](https://github.com/ava-labs/icm-services/releases/download/icm-relayer-v1.7.5/icm-relayer_1.7.5_linux_arm64.tar.gz) | 17.4 MB | | macOS (AMD64) | [icm-relayer_1.7.5_darwin_amd64.tar.gz](https://github.com/ava-labs/icm-services/releases/download/icm-relayer-v1.7.5/icm-relayer_1.7.5_darwin_amd64.tar.gz) | 18.7 MB | | macOS (ARM64) | [icm-relayer_1.7.5_darwin_arm64.tar.gz](https://github.com/ava-labs/icm-services/releases/download/icm-relayer-v1.7.5/icm-relayer_1.7.5_darwin_arm64.tar.gz) | 17.7 MB | ### v1.7.4 **Released:** November 13, 2025 | [View on GitHub](https://github.com/ava-labs/icm-services/releases/tag/icm-relayer-v1.7.4) ## Overview This version is compatible with Granite on both Fuji and Mainnet. **All Mainnet nodes must upgrade before 11 AM ET, November 19th 2025.** This version reduces the frequency of validator set fetching from [v1.7.3](https://github.com/ava-labs/icm-services/releases/tag/icm-relayer-v1.7.3) which can help in resource constrained deployments. It also includes a work-around for ... 
| Platform | File | Size | |----------|------|------| | Linux (AMD64) | [icm-relayer_1.7.4_linux_amd64.tar.gz](https://github.com/ava-labs/icm-services/releases/download/icm-relayer-v1.7.4/icm-relayer_1.7.4_linux_amd64.tar.gz) | 18.2 MB | | Linux (ARM64) | [icm-relayer_1.7.4_linux_arm64.tar.gz](https://github.com/ava-labs/icm-services/releases/download/icm-relayer-v1.7.4/icm-relayer_1.7.4_linux_arm64.tar.gz) | 16.8 MB | | macOS (AMD64) | [icm-relayer_1.7.4_darwin_amd64.tar.gz](https://github.com/ava-labs/icm-services/releases/download/icm-relayer-v1.7.4/icm-relayer_1.7.4_darwin_amd64.tar.gz) | 18.2 MB | | macOS (ARM64) | [icm-relayer_1.7.4_darwin_arm64.tar.gz](https://github.com/ava-labs/icm-services/releases/download/icm-relayer-v1.7.4/icm-relayer_1.7.4_darwin_arm64.tar.gz) | 17.1 MB | ### v1.7.3 **Released:** November 5, 2025 | [View on GitHub](https://github.com/ava-labs/icm-services/releases/tag/icm-relayer-v1.7.3) This version is compatible with the Avalanche Granite upgrade. **All mainnet ICM relayer instances must be updated before November 19th at 11:00 AM ET.** | Platform | File | Size | |----------|------|------| | Linux (AMD64) | [icm-relayer_1.7.3_linux_amd64.tar.gz](https://github.com/ava-labs/icm-services/releases/download/icm-relayer-v1.7.3/icm-relayer_1.7.3_linux_amd64.tar.gz) | 18.3 MB | | Linux (ARM64) | [icm-relayer_1.7.3_linux_arm64.tar.gz](https://github.com/ava-labs/icm-services/releases/download/icm-relayer-v1.7.3/icm-relayer_1.7.3_linux_arm64.tar.gz) | 16.9 MB | | macOS (AMD64) | [icm-relayer_1.7.3_darwin_amd64.tar.gz](https://github.com/ava-labs/icm-services/releases/download/icm-relayer-v1.7.3/icm-relayer_1.7.3_darwin_amd64.tar.gz) | 18.3 MB | | macOS (ARM64) | [icm-relayer_1.7.3_darwin_arm64.tar.gz](https://github.com/ava-labs/icm-services/releases/download/icm-relayer-v1.7.3/icm-relayer_1.7.3_darwin_arm64.tar.gz) | 17.2 MB | ### v1.6.7 **Released:** October 14, 2025 | [View on
GitHub](https://github.com/ava-labs/icm-services/releases/tag/icm-relayer-v1.6.7) ## Overview The most significant change from the previous release is a new configuration option to set a `suggested-priority-fee-buffer` which gets added to the suggested tip returned by the RPC | Platform | File | Size | |----------|------|------| | Linux (AMD64) | [icm-relayer_1.6.7_linux_amd64.tar.gz](https://github.com/ava-labs/icm-services/releases/download/icm-relayer-v1.6.7/icm-relayer_1.6.7_linux_amd64.tar.gz) | 13.1 MB | | Linux (ARM64) | [icm-relayer_1.6.7_linux_arm64.tar.gz](https://github.com/ava-labs/icm-services/releases/download/icm-relayer-v1.6.7/icm-relayer_1.6.7_linux_arm64.tar.gz) | 12.1 MB | | macOS (AMD64) | [icm-relayer_1.6.7_darwin_amd64.tar.gz](https://github.com/ava-labs/icm-services/releases/download/icm-relayer-v1.6.7/icm-relayer_1.6.7_darwin_amd64.tar.gz) | 13.2 MB | | macOS (ARM64) | [icm-relayer_1.6.7_darwin_arm64.tar.gz](https://github.com/ava-labs/icm-services/releases/download/icm-relayer-v1.6.7/icm-relayer_1.6.7_darwin_arm64.tar.gz) | 12.5 MB | ### v1.6.6 **Released:** July 30, 2025 | [View on GitHub](https://github.com/ava-labs/icm-services/releases/tag/icm-relayer-v1.6.6) | Platform | File | Size | |----------|------|------| | Linux (AMD64) | [icm-relayer_1.6.6_linux_amd64.tar.gz](https://github.com/ava-labs/icm-services/releases/download/icm-relayer-v1.6.6/icm-relayer_1.6.6_linux_amd64.tar.gz) | 11.4 MB | | Linux (ARM64) | [icm-relayer_1.6.6_linux_arm64.tar.gz](https://github.com/ava-labs/icm-services/releases/download/icm-relayer-v1.6.6/icm-relayer_1.6.6_linux_arm64.tar.gz) | 10.6 MB | | macOS (AMD64) | [icm-relayer_1.6.6_darwin_amd64.tar.gz](https://github.com/ava-labs/icm-services/releases/download/icm-relayer-v1.6.6/icm-relayer_1.6.6_darwin_amd64.tar.gz) | 11.4 MB | | macOS (ARM64) | [icm-relayer_1.6.6_darwin_arm64.tar.gz](https://github.com/ava-labs/icm-services/releases/download/icm-relayer-v1.6.6/icm-relayer_1.6.6_darwin_arm64.tar.gz) | 10.8 
MB | ## Signature Aggregator Releases The Signature Aggregator collects and aggregates BLS signatures from validators to create valid Warp message proofs. ### v0.5.4 **Released:** January 27, 2026 | [View on GitHub](https://github.com/ava-labs/icm-services/releases/tag/signature-aggregator-v0.5.4) | Platform | File | Size | |----------|------|------| | Linux (AMD64) | [signature-aggregator_0.5.4_linux_amd64.tar.gz](https://github.com/ava-labs/icm-services/releases/download/signature-aggregator-v0.5.4/signature-aggregator_0.5.4_linux_amd64.tar.gz) | 16.0 MB | | Linux (ARM64) | [signature-aggregator_0.5.4_linux_arm64.tar.gz](https://github.com/ava-labs/icm-services/releases/download/signature-aggregator-v0.5.4/signature-aggregator_0.5.4_linux_arm64.tar.gz) | 15.0 MB | | macOS (AMD64) | [signature-aggregator_0.5.4_darwin_amd64.tar.gz](https://github.com/ava-labs/icm-services/releases/download/signature-aggregator-v0.5.4/signature-aggregator_0.5.4_darwin_amd64.tar.gz) | 16.0 MB | | macOS (ARM64) | [signature-aggregator_0.5.4_darwin_arm64.tar.gz](https://github.com/ava-labs/icm-services/releases/download/signature-aggregator-v0.5.4/signature-aggregator_0.5.4_darwin_arm64.tar.gz) | 15.2 MB | ### v0.5.3 **Released:** November 13, 2025 | [View on GitHub](https://github.com/ava-labs/icm-services/releases/tag/signature-aggregator-v0.5.3) ## Overview This version is compatible with Granite on both Fuji and Mainnet. **All Mainnet nodes must upgrade before 11 AM ET, November 19th 2025.** This version reduces the frequency of validator set fetching from [v0.5.2](https://github.com/ava-labs/icm-services/releases/tag/signature-aggregator-v0.5.2) which can help in resource-constrained deployments.
| Platform | File | Size | |----------|------|------| | Linux (AMD64) | [signature-aggregator_0.5.3_linux_amd64.tar.gz](https://github.com/ava-labs/icm-services/releases/download/signature-aggregator-v0.5.3/signature-aggregator_0.5.3_linux_amd64.tar.gz) | 15.6 MB | | Linux (ARM64) | [signature-aggregator_0.5.3_linux_arm64.tar.gz](https://github.com/ava-labs/icm-services/releases/download/signature-aggregator-v0.5.3/signature-aggregator_0.5.3_linux_arm64.tar.gz) | 14.6 MB | | macOS (AMD64) | [signature-aggregator_0.5.3_darwin_amd64.tar.gz](https://github.com/ava-labs/icm-services/releases/download/signature-aggregator-v0.5.3/signature-aggregator_0.5.3_darwin_amd64.tar.gz) | 15.7 MB | | macOS (ARM64) | [signature-aggregator_0.5.3_darwin_arm64.tar.gz](https://github.com/ava-labs/icm-services/releases/download/signature-aggregator-v0.5.3/signature-aggregator_0.5.3_darwin_arm64.tar.gz) | 14.9 MB | ### v0.5.2 **Released:** November 5, 2025 | [View on GitHub](https://github.com/ava-labs/icm-services/releases/tag/signature-aggregator-v0.5.2) This version is compatible with the Avalanche Granite upgrade.
**All mainnet signature aggregator instances must be updated before November 19th at 11:00 AM ET.** | Platform | File | Size | |----------|------|------| | Linux (AMD64) | [signature-aggregator_0.5.2_linux_amd64.tar.gz](https://github.com/ava-labs/icm-services/releases/download/signature-aggregator-v0.5.2/signature-aggregator_0.5.2_linux_amd64.tar.gz) | 15.6 MB | | Linux (ARM64) | [signature-aggregator_0.5.2_linux_arm64.tar.gz](https://github.com/ava-labs/icm-services/releases/download/signature-aggregator-v0.5.2/signature-aggregator_0.5.2_linux_arm64.tar.gz) | 14.6 MB | | macOS (AMD64) | [signature-aggregator_0.5.2_darwin_amd64.tar.gz](https://github.com/ava-labs/icm-services/releases/download/signature-aggregator-v0.5.2/signature-aggregator_0.5.2_darwin_amd64.tar.gz) | 15.7 MB | | macOS (ARM64) | [signature-aggregator_0.5.2_darwin_arm64.tar.gz](https://github.com/ava-labs/icm-services/releases/download/signature-aggregator-v0.5.2/signature-aggregator_0.5.2_darwin_arm64.tar.gz) | 14.9 MB | ### v0.4.5 **Released:** July 30, 2025 | [View on GitHub](https://github.com/ava-labs/icm-services/releases/tag/signature-aggregator-v0.4.5) | Platform | File | Size | |----------|------|------| | Linux (AMD64) | [signature-aggregator_0.4.5_linux_amd64.tar.gz](https://github.com/ava-labs/icm-services/releases/download/signature-aggregator-v0.4.5/signature-aggregator_0.4.5_linux_amd64.tar.gz) | 9.2 MB | | Linux (ARM64) | [signature-aggregator_0.4.5_linux_arm64.tar.gz](https://github.com/ava-labs/icm-services/releases/download/signature-aggregator-v0.4.5/signature-aggregator_0.4.5_linux_arm64.tar.gz) | 8.6 MB | | macOS (AMD64) | [signature-aggregator_0.4.5_darwin_amd64.tar.gz](https://github.com/ava-labs/icm-services/releases/download/signature-aggregator-v0.4.5/signature-aggregator_0.4.5_darwin_amd64.tar.gz) | 9.4 MB | | macOS (ARM64) |
[signature-aggregator_0.4.5_darwin_arm64.tar.gz](https://github.com/ava-labs/icm-services/releases/download/signature-aggregator-v0.4.5/signature-aggregator_0.4.5_darwin_arm64.tar.gz) | 8.9 MB | ### v0.4.4 **Released:** July 11, 2025 | [View on GitHub](https://github.com/ava-labs/icm-services/releases/tag/signature-aggregator-v0.4.4) | Platform | File | Size | |----------|------|------| | Linux (AMD64) | [signature-aggregator_0.4.4_linux_amd64.tar.gz](https://github.com/ava-labs/icm-services/releases/download/signature-aggregator-v0.4.4/signature-aggregator_0.4.4_linux_amd64.tar.gz) | 9.2 MB | | Linux (ARM64) | [signature-aggregator_0.4.4_linux_arm64.tar.gz](https://github.com/ava-labs/icm-services/releases/download/signature-aggregator-v0.4.4/signature-aggregator_0.4.4_linux_arm64.tar.gz) | 8.6 MB | | macOS (AMD64) | [signature-aggregator_0.4.4_darwin_amd64.tar.gz](https://github.com/ava-labs/icm-services/releases/download/signature-aggregator-v0.4.4/signature-aggregator_0.4.4_darwin_amd64.tar.gz) | 9.4 MB | | macOS (ARM64) | [signature-aggregator_0.4.4_darwin_arm64.tar.gz](https://github.com/ava-labs/icm-services/releases/download/signature-aggregator-v0.4.4/signature-aggregator_0.4.4_darwin_arm64.tar.gz) | 8.9 MB | ## All Releases For a complete list of all ICM Services releases including pre-releases, visit the official [GitHub Releases page](https://github.com/ava-labs/icm-services/releases). ## Related Resources - [Run a Relayer](/docs/cross-chain/avalanche-warp-messaging/run-relayer) - Detailed relayer setup guide - [Avalanche Warp Messaging Overview](/docs/cross-chain/avalanche-warp-messaging/overview) - Understanding AWM - [ICM Contracts](/docs/cross-chain/icm-contracts/overview) - Smart contract integration # Run a Relayer (/docs/cross-chain/avalanche-warp-messaging/run-relayer) --- title: "Run a Relayer" description: "Reference relayer implementation for cross-chain Avalanche Interchain Message delivery." 
edit_url: https://github.com/ava-labs/icm-services/edit/main/relayer/README.md --- # ICM Relayer Reference relayer implementation for cross-chain Avalanche Warp Message delivery. ICM Relayer listens for Warp message events on a set of source blockchains, and constructs transactions to relay the Warp message to the intended destination blockchain. The relayer does so by querying the source blockchain validator nodes for their BLS signatures on the Warp message, combining the individual BLS signatures into a single aggregate BLS signature, and packaging the aggregate BLS signature into a transaction according to the destination blockchain VM Warp message verification rules. ## Installation ### Dev Container & Codespace To get started easily, we provide a Dev Container specification that can be used with GitHub Codespaces or locally with Docker and VS Code. [Dev Containers](https://code.visualstudio.com/docs/devcontainers/containers) use containerization to create consistent and isolated development environments. You can run them directly on GitHub by clicking **Code**, switching to the **Codespaces** tab, and clicking **Create codespace on main**. Alternatively, you can run them locally with the extensions for [VS Code](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers) or other code editors. ### Download Prebuilt Binaries Prebuilt binaries are available for download from the [releases page](https://github.com/ava-labs/icm-services/releases). The following commands demonstrate how to download and install the v0.2.13 release of the relayer on macOS. The exact commands will vary by platform.
```bash # Download the release tarball and checksums curl -w '%{http_code}' -sL -o ~/Downloads/icm-relayer_0.2.13_darwin_arm64.tar.gz https://github.com/ava-labs/icm-services/releases/download/v0.2.13/icm-relayer_0.2.13_darwin_arm64.tar.gz curl -w '%{http_code}' -sL -o ~/Downloads/icm-relayer_0.2.13_checksums.txt https://github.com/ava-labs/icm-services/releases/download/v0.2.13/icm-relayer_0.2.13_checksums.txt # (Optional) Verify the checksums cd ~/Downloads # Confirm that the following two commands output the same checksum grep "icm-relayer_0.2.13_darwin_arm64.tar.gz" "icm-relayer_0.2.13_checksums.txt" 2>/dev/null shasum -a 256 "icm-relayer_0.2.13_darwin_arm64.tar.gz" 2>/dev/null # Extract the tarball and install the relayer binary tar -xzf icm-relayer_0.2.13_darwin_arm64.tar.gz sudo install icm-relayer /usr/local/bin ``` _Note:_ If downloading the binaries through a browser on macOS, the browser may mark the binary as quarantined since it has not been verified through the App Store. To remove the quarantine, run the following command: ```bash xattr -d com.apple.quarantine /usr/local/bin/icm-relayer ``` ### Download Docker Image The published Docker image can be pulled from `avaplatform/icm-relayer:latest` on Docker Hub. ### Build from Source See the [Building](#building) section for instructions on how to build the relayer from source. ## Requirements [buf](https://github.com/bufbuild/buf/) is required to rebuild protobuf definitions if changes are made to any `.proto` files. See [Generate Protobuf Files](#generate-protobuf-files) for more information. ### System Requirements - Ubuntu 22.04 or later - Tested on x86_64/amd64 architecture. - macOS 14.3 or later - Tested on arm64 architecture (Apple silicon). ### API Requirements - ICM Relayer requires access to Avalanche API nodes for the P-Chain as well as any connected Subnets.
The API nodes must have the following methods enabled: - Each Subnet API node must have enabled: - eth API (RPC and WS) - The P-Chain API node must have enabled: - platform.getHeight - platform.validatedBy - platform.getValidatorsAt OR platform.getCurrentValidators - The Info API node must have enabled: - info.peers - info.getNetworkID - If the Info API node is also a Subnet validator, it must have enabled: - info.getNodeID - info.getNodeIP The Fuji and Mainnet [public API nodes](https://docs.avax.network/tooling/rpc-providers) provided by Avalanche have these methods enabled, and are suitable for use with the relayer. ### Peer-to-Peer Connections - By default, the ICM relayer implementation gathers BLS signatures from the validators of the source Subnet via peer-to-peer `AppRequest` messages. Validator nodes need to be configured to accept incoming peer connections. Otherwise, the relayer will fail to gather Warp message signatures. For example, networking rules may need to be adjusted to allow traffic on the default AvalancheGo P2P port (9651), or the public IP may need to be manually set in the [node configuration](https://docs.avax.network/nodes/configure/avalanchego-config-flags#public-ip). - **DEPRECATED** If configured to use the Warp API (see `warp-api-endpoint` in [Configuration](#configuration)) then aggregate signatures are fetched via a single RPC request, rather than `AppRequests` to individual validators. Note that the Warp API is disabled on the public API. ### Private Key Management - Each configured destination blockchain requires a private key to sign transactions. This key can be provided as a hex-encoded string in the configuration (see `account-private-key` in [Configuration](#configuration)) or environment variable, or stored in KMS and used to sign transactions remotely (see `kms-key-id` and `kms-aws-region` in [Configuration](#configuration)). 
- **Each private key used by the relayer should not be used to sign transactions outside of the relayer**, as this may cause the relayer to fail to sign transactions due to nonce mismatches. ## Usage ### Options The relayer binary accepts the following command line options. Other configuration options are not supported via the command line and must be provided via the configuration JSON file or environment variables. ```bash icm-relayer --config-file path-to-config Specifies the relayer config file and begins relaying messages. icm-relayer --version Display icm-relayer version and exit. icm-relayer --help Display icm-relayer usage and exit. ``` ### Initialize the repository - Get all submodules: `git submodule update --init --recursive` ### Building Before building, be sure to install Go, which is required even if you're just building the Docker image. Build the relayer by running the script: ```bash ./scripts/build.sh ``` Build a Docker image by running the script: ```bash ./scripts/build_local_image.sh ``` ### Configuration The relayer is configured via a JSON file, the path to which is passed in via the `--config-file` command line argument. Top-level configuration options can also be set via environment variables. To get the environment variable corresponding to a key, uppercase the key and change the delimiter from "-" to "_". For example, `LOG_LEVEL` sets the `"log-level"` JSON key. The following configuration options are available: `"log-level": "verbo" | "debug" | "info" | "warn" | "error" | "fatal" | "panic"` - The log level for the relayer. Defaults to `info`. `"p-chain-api": APIConfig` - The configuration for the Avalanche P-Chain API node. The `PChainAPI` object has the following configuration: `"base-url": string` - The URL of the Avalanche P-Chain API node to which the relayer will connect. 
This API node needs to have the following methods enabled: - platform.getHeight - platform.validatedBy - platform.getValidatorsAt OR platform.getCurrentValidators `"query-parameters": map[string]string` - Additional query parameters to include in the API requests. `"http-headers": map[string]string` - Additional HTTP headers to include in the API requests. `"info-api": APIConfig` - The configuration for the Avalanche Info API node. The `InfoAPI` object has the following configuration: `"base-url": string` - The URL of the Avalanche Info API node to which the relayer will connect. This API node needs to have the following methods enabled: - info.peers - info.getNetworkID - Additionally, if the Info API node is also a validator, it must have enabled: - info.getNodeID - info.getNodeIP `"query-parameters": map[string]string` - Additional query parameters to include in the API requests. `"http-headers": map[string]string` - Additional HTTP headers to include in the API requests. `"storage-location": string` - The path to the directory in which the relayer will store its state. Defaults to `./icm-relayer-storage`. `"redis-url": string` - The URL of the Redis server to use to manage state. This URL should specify the user, password, host, port, DB index, and protocol version. For example, `"redis://user:password@localhost:6379/0?protocol=3"`. Overrides `storage-location` if provided. `"process-missed-blocks": boolean` - Whether or not to process missed blocks after restarting. Defaults to `true`. If set to `false`, the relayer will start processing blocks from the chain head. `"api-port": unsigned integer` - The port on which the relayer will listen for API requests. Defaults to `8080`. `"metrics-port": unsigned integer` - The port on which the relayer will expose Prometheus metrics. Defaults to `9090`. `"db-write-interval-seconds": unsigned integer` - The interval at which the relayer will write to the database. Defaults to `10`. 
`"tls-cert-path": string` - The path to a TLS cert file. Should only be set if a static NodeID is required for connecting to private networks. `"tls-key-path": string` - The path to a TLS key file. Should only be set if a static NodeID is required for connecting to private networks. `"initial-connection-timeout-seconds": unsigned integer` - The maximum number of seconds to wait during startup to connect to a sufficient stake weight on each supported chain. `"max-concurrent-messages": unsigned integer` - The maximum number of messages the application will attempt to process concurrently. Processing messages involves making potentially multiple RPC requests, and issuing too many requests at once may cause failures. `"manual-warp-messages": []ManualWarpMessage` - The list of Warp messages to relay on startup, independent of the catch-up mechanism or normal operation. Each `ManualWarpMessage` has the following configuration: `"unsigned-message-bytes": string` - The hex-encoded bytes of the unsigned Warp message to relay. `"source-blockchain-id": string` - cb58-encoded or "0x" prefixed hex-encoded blockchain ID of the source blockchain. `"destination-blockchain-id": string` - cb58-encoded or "0x" prefixed hex-encoded blockchain ID of the destination blockchain. `"source-address": string` - The address of the source account that sent the Warp message. `"destination-address": string` - The address of the destination account that will receive the Warp message. `"source-blockchains": []SourceBlockchains` - The list of source blockchains to support. Each `SourceBlockchain` has the following configuration: `"subnet-id": string` - cb58-encoded or "0x" prefixed hex-encoded Subnet ID. `"blockchain-id": string` - cb58-encoded or "0x" prefixed hex-encoded blockchain ID. `"rpc-endpoint": APIConfig` - The RPC endpoint configuration of the source blockchain's API node. An `APIConfig` has the following fields: `"base-url": string` - The URL that will be queried. 
The API node is expected to have all standard ETH endpoints enabled. `"query-params": map[string]string` - A map of query parameters to values that will be added to the base URL `"http-headers": map[string]string` - A map of HTTP headers to include in the requests to this API `"ws-endpoint": APIConfig` - The WebSocket endpoint configuration of the source blockchain's API node. An `APIConfig` has the following fields: `"base-url": string` - The URL that will be queried. The API node is expected to accept eth_subscribe connections. `"query-params": map[string]string` - A map of query parameters to values that will be added to the base URL `"http-headers": map[string]string` - A map of HTTP headers to include in the requests to this API `"message-contracts": map[string]MessageProtocolConfig` - Map of contract addresses to the config options of the protocol at that address. Each `MessageProtocolConfig` consists of a unique `message-format` name, and the raw JSON `settings`. `"supported-destinations": []SupportedDestination` - List of destinations that the source blockchain supports. Each `SupportedDestination` consists of a cb58-encoded destination blockchain ID (`"blockchain-id"`), and a list of hex-encoded addresses (`"addresses"`) on that destination blockchain that the relayer supports delivering Warp messages to. The destination address is defined by the message protocol. For example, it could be the address called from the message protocol contract. If no supported addresses are provided, all addresses are allowed on that blockchain. If `supported-destinations` is empty, then all destination blockchains (and therefore all addresses on those destination blockchains) are supported. `"process-historical-blocks-from-height": unsigned integer` - The block height at which to back-process transactions from the source blockchain. If the database already contains a later block height for the source blockchain, then that will be used instead. Must be non-zero. 
Will only be used if `process-missed-blocks` is set to `true`. `"allowed-origin-sender-addresses": []string` - List of addresses on this source blockchain to relay Warp messages from. The sending address is defined by the message protocol. For example, it could be defined as the EOA that initiates the transaction, or the address that calls the message protocol contract. If empty, then all addresses are allowed. `"warp-api-endpoint": APIConfig` - **DEPRECATED** The RPC endpoint configuration for the Warp API, which is used to fetch Warp aggregate signatures. If omitted, then signatures are fetched via AppRequest instead. An `APIConfig` has the following fields: `"base-url": string` - The URL that will be queried. The API node is expected to have `warp.getMessageAggregateSignature` enabled. `"query-params": map[string]string` - A map of query parameters to values that will be added to the base URL `"http-headers": map[string]string` - A map of HTTP headers to include in the requests to this API `"destination-blockchains": []DestinationBlockchains` - The list of destination blockchains to support. Each `DestinationBlockchain` has the following configuration: `"subnet-id": string` - cb58-encoded or "0x" prefixed hex-encoded Subnet ID. `"blockchain-id": string` - cb58-encoded or "0x" prefixed hex-encoded blockchain ID. `"rpc-endpoint": APIConfig` - The RPC endpoint configuration of the destination blockchain's API node. An `APIConfig` has the following fields: `"base-url": string` - The URL that will be queried `"query-params": map[string]string` - A map of query parameters to values that will be added to the base URL `"http-headers": map[string]string` - A map of HTTP headers to include in the requests to this API `"account-private-key": string` - The hex-encoded private key to use for signing transactions on the destination blockchain. May be provided by the environment variable `ACCOUNT_PRIVATE_KEY`. 
Each `destination-subnet` may use a separate private key by appending the cb58 encoded blockchain ID to the private key environment variable name, for example `ACCOUNT_PRIVATE_KEY_11111111111111111111111111111111LpoYY` - Please note that the private key should be exclusive to the relayer, see [Private Key Management](#private-key-management). `"kms-key-id": string` - The ID of the KMS key to use for signing transactions on the destination blockchain. If `kms-key-id` is provided, then `kms-aws-region` is required. - Please note that the private key in KMS should be exclusive to the relayer, see [Private Key Management](#private-key-management). `"kms-aws-region": string` - The AWS region in which the KMS key is located. Required if `kms-key-id` is provided. `"account-private-keys-list": []string` - A list of hex-encoded private keys for signing transactions on the destination blockchain. May also be provided as a space-delimited list by the environment variable `ACCOUNT_PRIVATE_KEYS_LIST`. Each `destination-subnet` may use a separate list of private keys by appending the cb58 encoded blockchain ID to the private keys environment variable name, for example `ACCOUNT_PRIVATE_KEYS_LIST_11111111111111111111111111111111LpoYY` - Please note that all private keys should be exclusive to the relayer, see [Private Key Management](#private-key-management). `"kms-keys": []KMSKey` - A list of KMS Keys that may be used for signing transactions on the destination blockchain. A `KMSKey` has the following fields: `"key-id": string` - The ID of the KMS key to use for signing transactions on the destination blockchain. `"aws-region": string` - The AWS region in which the KMS key is located. `"block-gas-limit": unsigned integer` - The maximum amount of gas that can be used in a single block on this blockchain. The relayer will not attempt to deliver messages that require more gas than this limit to the given chain. Defaults to 12,000,000 if not set for a given chain. 
`"max-base-fee": unsigned integer` - The maximum base fee gas price (in WEI) the relayer is willing to pay on this blockchain. If zero or left unset, the relayer will use a multiple of the current base fee estimation, and not have an explicit maximum. `"suggested-priority-fee-buffer": unsigned integer` - The fee per gas (in WEI) that will be added to the priority fee rate on top of the suggested priority fee fetched via RPC. A higher suggested priority fee buffer makes transaction inclusion more likely when a chain's capacity is being exhausted, but also results in higher effective gas prices for transactions when it may not be necessary. The maximum priority fee per gas is also capped by `max-priority-fee-per-gas`, as described below. `"max-priority-fee-per-gas": unsigned integer` - The maximum priority fee per gas (in WEI) that the relayer is willing to pay to incentivize transactions being included on this blockchain. The relayer will use the current estimation of the required gas tip cap for this blockchain plus the `suggested-priority-fee-buffer`, up to a maximum of this configured value. Defaults to 2.5 GWEI. `"tx-inclusion-timeout-seconds": unsigned integer` - The time in seconds to wait for a sent transaction to be included in a block when verifying transaction receipts before erroring out. If omitted, defaults to 30 seconds. `"decider-url": string` - The URL of a service implementing the gRPC service defined by `proto/decider`, which will be queried for each message to determine whether that message should be relayed. 
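For orientation, the options above fit together as in the following abbreviated sketch. All IDs, URLs, addresses, and the `message-format`/`settings` values are placeholders whose valid values depend on the message protocol being relayed; this is an illustration of the shape of the file, not a working configuration:

```json
{
  "log-level": "info",
  "p-chain-api": { "base-url": "https://api.avax.network" },
  "info-api": { "base-url": "https://api.avax.network" },
  "source-blockchains": [
    {
      "subnet-id": "<cb58-or-0x-subnet-id>",
      "blockchain-id": "<cb58-or-0x-blockchain-id>",
      "rpc-endpoint": { "base-url": "https://example.com/ext/bc/<blockchain-id>/rpc" },
      "ws-endpoint": { "base-url": "wss://example.com/ext/bc/<blockchain-id>/ws" },
      "message-contracts": {
        "<message-protocol-contract-address>": {
          "message-format": "<protocol-name>",
          "settings": {}
        }
      }
    }
  ],
  "destination-blockchains": [
    {
      "subnet-id": "<cb58-or-0x-subnet-id>",
      "blockchain-id": "<cb58-or-0x-blockchain-id>",
      "rpc-endpoint": { "base-url": "https://example.com/ext/bc/<blockchain-id>/rpc" },
      "account-private-key": "<hex-encoded-private-key>"
    }
  ]
}
```

Recall that top-level options can alternatively be supplied via environment variables, e.g. `LOG_LEVEL=debug icm-relayer --config-file <path>`.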
## Architecture ### Components The relayer consists of the following components: - At the global level: - P2P app network: issues signature `AppRequests` - P-Chain client: gets the validators for a Subnet - Relayer database: stores latest processed block for each Application Relayer - Currently supports Redis and local JSON file storage - Per Source Blockchain - Subscriber: listens for logs pertaining to cross-chain message transactions - Source RPC client: queries for missed blocks on startup - Per Destination Blockchain - Destination RPC client: broadcasts transactions to the destination - Application Relayers - Relay messages from a specific source blockchain and source address to a specific destination blockchain and destination address ### Data Flow
Figure 1: Relayer Data Flow
### Processing Missed Blocks On startup, the relayer will process any blocks that it missed while offline if `process-missed-blocks` is set to `true` in the configuration. For each configured `source-blockchain`, the starting block height is set as the _minimum_ block height that is stored in the relayer's database across keys that pertain to that blockchain. These keys correspond to distinct sending addresses (as specified in `source-blockchain.allowed-origin-sender-addresses`), meaning that on startup, the relayer will begin processing from the _minimum_ block height across all configured sending addresses for each `source-blockchain`. Note that an empty `source-blockchain.allowed-origin-sender-addresses` list is treated as its own distinct key. If no keys are found, then the relayer begins processing from the current chain tip. Once the starting block height is calculated, all blocks between it and the current tip of the `source-blockchain` are processed according to the _current_ configuration rules. _Note:_ Given these semantics for computing the starting block height, it's possible for blocks that have previously been ignored under different configuration options to be relayed on a subsequent run. For example, consider the following scenario consisting of three subsequent runs (see Figure 2 below): - **Run 1**: Suppose that on Blockchain A we set `allowed-origin-sender-addresses=[0x1234]`, meaning that the relayer should _ignore_ messages sent by any other address. The relayer processes through block `100`, and that height is marked as processed for `0x1234`'s key. - **Run 2**: The relayer is then restarted with `allowed-origin-sender-addresses=[0xabcd]`, replacing `0x1234`. A message is sent from address `0x1234` at block height `200`. The relayer will decide to ignore this message, and will mark block `200` as processed for `0xabcd`'s key. 
- **Run 3**: The relayer is then restarted again with the original configuration of `allowed-origin-sender-addresses=[0x1234]`. The relayer will calculate the starting block height as `100`, and process blocks `100` through the current chain tip, reprocessing block `200` along the way. Instead of ignoring the message in this block, however, the relayer will relay it to the destination.
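The starting-height rule in the scenario above amounts to taking the minimum over the heights stored under each sender-address key. The following TypeScript sketch is illustrative only; `startingHeight` and the example values are hypothetical names chosen for this walkthrough, not the relayer's actual implementation:

```typescript
// Sketch of the startup rule: resume from the minimum stored height across
// all per-sender-address keys for a source blockchain, or from the current
// chain tip (represented here as null) when no keys are stored.
function startingHeight(stored: Map<string, number>): number | null {
  if (stored.size === 0) return null; // no keys: start at the chain tip
  return Math.min(...Array.from(stored.values()));
}

// Hypothetical database state after Run 2 in the example above:
const heights = new Map<string, number>([
  ["0x1234", 100], // processed through block 100 under the Run 1 config
  ["0xabcd", 200], // processed through block 200 under the Run 2 config
]);

console.log(startingHeight(heights)); // 100
console.log(startingHeight(new Map())); // null
```

Because Run 2 recorded height `200` only under `0xabcd`'s key while `0x1234`'s key still holds `100`, the minimum is `100`, which is why block `200` is reprocessed, and its message relayed, in Run 3.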
Figure 2: Processing Missed Blocks Example
### API #### `/relay` - Used to manually relay a Warp message. The body of the request must contain the following JSON: ```json { "blockchain-id": "", "message-id": "", "block-num": "" } ``` - If successful, the endpoint will return the following JSON: ```json { "transaction-hash": "" } ``` #### `/relay/message` - Used to manually relay a Warp message. The body of the request must contain the following JSON: ```json { "unsigned-message-bytes": "", "source-address": "" } ``` - If successful, the endpoint will return the following JSON: ```json { "transaction-hash": "" } ``` #### `/health` - Takes no arguments. Returns a `200` status code if all Application Relayers are healthy. Returns a `503` status code if any of the Application Relayers have experienced an unrecoverable error. Here is an example return body: ```json { "status": "down", "details": { "relayers-all": { "status": "down", "timestamp": "2024-06-01T05:06:07.685522Z", "error": "" } } } ``` ## Testing ### Unit Tests Unit tests can be run locally by running the command in the root of the project: ```bash ./scripts/test.sh ``` If your temporary directory is not writable, the unit tests may fail with messages like `fork/exec /tmp/go-build2296620589/b247/config.test: permission denied`. To fix this, set the `TMPDIR` environment variable to something writable, for example `export TMPDIR=~/tmp`. ### E2E Tests To run the E2E tests locally, you'll need to install Ginkgo following the instructions [here](https://onsi.github.io/ginkgo/#installing-ginkgo). Run the tests using the dedicated script: ```bash ./scripts/e2e_test.sh ``` To run a specific E2E test, specify the environment variable `GINKGO_FOCUS`, which will then look for [test descriptions](https://github.com/ava-labs/icm-services/blob/main/relayer/tests/e2e_test.go#L68) that match the provided input. 
For example, to run the `Basic Relay` test: ```bash GINKGO_FOCUS="Basic" ./scripts/e2e_test.sh ``` The E2E tests use the `TeleporterMessenger` contract deployment transaction specified in the following files: - `tests/utils/UniversalTeleporterDeployerAddress.txt` - `tests/utils/UniversalTeleporterDeployerTransaction.txt` - `tests/utils/UniversalTeleporterMessagerContractAddress.txt` To update the version of Teleporter used by the E2E tests, update these values with the latest contract deployment information. For more information on how to deploy the Teleporter contract, see the [Teleporter documentation](https://github.com/ava-labs/icm-contracts/tree/main/utils/contract-deployment). ### Generate Mocks [Gomock](https://pkg.go.dev/go.uber.org/mock/gomock) is used to generate mocks for testing. To generate mocks, run the following command at the root of the project: ```bash go generate ./... ``` ### Generate Protobuf Files [buf](https://github.com/bufbuild/buf/) is used to generate protobuf definitions for communication with the [Decider service](https://github.com/ava-labs/icm-services/blob/main/proto/decider/decider.proto). If you change any of the protobuf definitions you will have to regenerate the `.go` files. To generate these files, run the following command at the root of the project: ```bash ./scripts/protobuf_codegen.sh ``` ### Generate Abi Bindings [subnet-evm](https://github.com/ava-labs/subnet-evm/tree/6e67777132dd7d0d556e1e34d68ad8e27b22ebef/cmd/abigen) is used to generate abi binding `.go` files for solidity contracts. If you change any of the smart contracts, you will have to update the abi bindings. To generate these files, run the following command at the root of the project: ```bash ./scripts/abi_bindings.sh ``` # Using Explorer (/docs/primary-network/verify-contract/explorer) --- title: Using Explorer description: Learn how to verify a smart contract using the official Avalanche Explorer. 
--- This document outlines the process of verifying a smart contract deployed on the Avalanche Network using the official explorer. ## Contract Deployment 1. Compile the smart contract using the tooling of your choice. 2. Deploy the compiled smart contract to the Avalanche network. - This can be done on either the mainnet or testnet (depending on your RPC configuration) 3. Upon successful deployment, you will receive: - A transaction hash - A contract address Ensure you save the contract address, as it will be required for the verification process. ## Contract Verification 1. Navigate to the official [Avalanche Explorer](https://subnets.avax.network/), click on the **Tools** dropdown menu, and select the **Smart Contract Verification** interface. You may need to open the [Testnet Explorer](https://subnets-test.avax.network/) if the contract is deployed on the Fuji Testnet. ![](/images/verification-portal.png) 2. Prepare the following files: - The contract's Solidity file (`.sol`) - The `metadata.json` file containing the ABI and metadata 3. Upload the required files: - Upload the contract's Solidity file - Upload the `metadata.json` file 4. Enter the contract address: - Paste the contract address obtained from the deployment step into the designated input field. ![](/images/contract-addr-input.png) 5. Initiate verification: - Click on the **Submit Contract** button to start the verification process. ## Next Steps After submitting the contract for verification, your request will be processed shortly and you will see the message below. ![](/images/verification-success.png) For any issues during deployment or verification, please reach out to the DevRel/Support team on Discord/Telegram/Slack. # Using HardHat (/docs/primary-network/verify-contract/hardhat) --- title: Using HardHat description: Learn how to verify a smart contract using Hardhat. 
--- {/* EVM Version Warning - TEMPORARY Remove this section when Avalanche adds Pectra support (after SAE implementation) Last reviewed: December 2025 */} Avalanche C-Chain and Subnet-EVM currently support the **Cancun** EVM version and do not yet support newer hardforks like **Pectra**. Since Solidity v0.8.30 changed its default target to Pectra, you must explicitly set `evmVersion` to `cancun` in your Hardhat config. See the [sample configuration](#verifying-with-hardhat-verify) below which includes the required `evmVersion: "cancun"` setting. This tutorial assumes that the contract was deployed using Hardhat and that all Hardhat dependencies are properly installed. After deploying a smart contract, one can verify the smart contract on Snowtrace in three steps: 1. Flatten the Smart Contract 2. Clean up the flattened contract 3. Verify using the Snowtrace GUI ## Flatten Smart Contract using Hardhat To flatten the contract, run the command: `npx hardhat flatten >> .sol` ## Cleanup the Flattened Smart Contract Some clean-up may be necessary to get the code to compile properly in the Snowtrace Contract Verifier: - Remove all but the top SPDX license. - If the contract uses multiple SPDX licenses, use both licenses by adding **AND**: `SPDX-License-Identifier: MIT AND BSD-3-Clause` ## Verify Smart Contract using Snowtrace UI Snowtrace is currently working on a new user interface (UI) for smart contract verification. Meanwhile, you may consider using their API for a seamless smart contract verification experience. ## Verify Smart Contract Programmatically Using APIs Ensure you have Postman or any other API platform installed on your computer (or accessible through online services), along with your contract's source code and the parameters utilized during deployment. 
Here is the API call URL to use for a POST request: `https://api.snowtrace.io/api?module=contract&action=verifysourcecode` Please note that this URL is specifically configured for verifying contracts on the Avalanche C-Chain Mainnet. If you intend to verify on the Fuji Testnet, use: `https://api-testnet.snowtrace.io/api?module=contract&action=verifysourcecode` Here's the body of the API call with the required parameters: ```json { "contractaddress": "YOUR_CONTRACT_ADDRESS", "sourceCode": "YOUR_FLATTENED_SOURCE_CODE", "codeformat": "solidity-single-file", "contractname": "YOUR_CONTRACT_NAME", "compilerversion": "YOUR_COMPILER_VERSION", "optimizationUsed": "YOUR_OPTIMIZATION_VALUE", // 0 if not optimized, 1 if optimized "runs": "YOUR_OPTIMIZATION_RUNS", // remove if not applicable "licenseType": "YOUR_LICENSE_TYPE", // 1 if not specified "apikey": "API_KEY_PLACEHOLDER", // you don't need an API key, use a placeholder "evmversion": "YOUR_EVM_VERSION_ON_REMIX", "constructorArguments": "YOUR_CONSTRUCTOR_ARGUMENTS" // Remove if not applicable } ``` ## Verifying with Hardhat-Verify This part of the tutorial assumes that the contract was deployed using Hardhat and that all Hardhat dependencies are properly installed, including `'@nomiclabs/hardhat-etherscan'`. You will need to create a `.env.json` with your _Wallet Seed Phrase_. You don't need an API key to verify on Snowtrace. 
Example `.env.json`: ```json title=".env.json" { "MNEMONIC": "your-wallet-seed-phrase" } ``` Below is a sample `hardhat.config.ts` used for deployment and verification: ```ts title="hardhat.config.ts" import { task } from "hardhat/config" import { SignerWithAddress } from "@nomiclabs/hardhat-ethers/signers" import { BigNumber } from "ethers" import "@typechain/hardhat" import "@nomiclabs/hardhat-ethers" import "@nomiclabs/hardhat-waffle" import "hardhat-gas-reporter" import "@nomiclabs/hardhat-etherscan" import { MNEMONIC } from "./.env.json" // When using the hardhat network, you may choose to fork Fuji or Avalanche Mainnet // This will allow you to debug contracts using the hardhat network while keeping the current network state // To enable forking, turn one of these booleans on, and then run your tasks/scripts using ``--network hardhat`` // For more information go to the hardhat guide // https://hardhat.org/hardhat-network/ // https://hardhat.org/guides/mainnet-forking.html const FORK_FUJI = false const FORK_MAINNET = false const forkingData = FORK_FUJI ? { url: "https://api.avax-test.network/ext/bc/C/rpc", } : FORK_MAINNET ? 
{ url: "https://api.avax.network/ext/bc/C/rpc", } : undefined task( "accounts", "Prints the list of accounts", async (args, hre): Promise<void> => { const accounts: SignerWithAddress[] = await hre.ethers.getSigners() accounts.forEach((account: SignerWithAddress): void => { console.log(account.address) }) } ) task( "balances", "Prints the list of AVAX account balances", async (args, hre): Promise<void> => { const accounts: SignerWithAddress[] = await hre.ethers.getSigners() for (const account of accounts) { const balance: BigNumber = await hre.ethers.provider.getBalance( account.address ) console.log(`${account.address} has balance ${balance.toString()}`) } } ) export default { etherscan: { // You don't need an API key for Snowtrace }, solidity: { version: "0.8.30", settings: { evmVersion: "cancun", // Required for Avalanche optimizer: { enabled: true, runs: 200, }, }, }, networks: { hardhat: { gasPrice: 225000000000, chainId: 43114, //Only specify a chainId if we are not forking // forking: { // url: 'https://api.avax.network/ext/bc/C/rpc', // }, }, fuji: { url: "https://api.avax-test.network/ext/bc/C/rpc", gasPrice: 225000000000, chainId: 43113, accounts: { mnemonic: MNEMONIC }, }, mainnet: { url: "https://api.avax.network/ext/bc/C/rpc", gasPrice: 225000000000, chainId: 43114, accounts: { mnemonic: MNEMONIC }, }, }, } ``` Once the contract is deployed, verify with hardhat verify by running the following: ```bash npx hardhat verify --network ``` Example: ```bash npx hardhat verify 0x3972c87769886C4f1Ff3a8b52bc57738E82192D5 MockNFT Mock ipfs://QmQ2RFEmZaMds8bRjZCTJxo4DusvcBdLTS6XuDbhp5BZjY 100 --network fuji ``` You can also verify contracts programmatically via script. 
Example: ```ts title="verify.ts" import console from "console" const hre = require("hardhat") // Define the NFT const name = "MockNFT" const symbol = "Mock" const _metadataUri = "ipfs://QmQ2RFEmZaMds8bRjZCTJxo4DusvcBdLTS6XuDbhp5BZjY" const _maxTokens = "100" async function main() { await hre.run("verify:verify", { address: "0x3972c87769886C4f1Ff3a8b52bc57738E82192D5", constructorArguments: [name, symbol, _metadataUri, _maxTokens], }) } main() .then(() => process.exit(0)) .catch((error) => { console.error(error) process.exit(1) }) ``` First create your script, then execute it via hardhat by running the following: ```bash npx hardhat run scripts/verify.ts --network fuji ``` Verifying via terminal will not allow you to pass an array as an argument; however, you can do this when verifying via script by including the array in your _Constructor Arguments_. Example: ```ts import console from "console" const hre = require("hardhat") // Define the NFT const name = "MockNFT" const symbol = "Mock" const _metadataUri = "ipfs://QmQn2jepp3jZ3tVxoCisMMF8kSi8c5uPKYxd71xGWG38hV/Example" const _royaltyRecipient = "0xcd3b766ccdd6ae721141f452c550ca635964ce71" const _royaltyValue = "50000000000000000" const _custodians = [ "0x8626f6940e2eb28930efb4cef49b2d1f2c9c1199", "0xf39fd6e51aad88f6f4ce6ab8827279cfffb92266", "0xdd2fd4581271e230360230f9337d5c0430bf44c0", ] const _saleLength = "172800" const _claimAddress = "0xcd3b766ccdd6ae721141f452c550ca635964ce71" async function main() { await hre.run("verify:verify", { address: "0x08bf160B8e56899723f2E6F9780535241F145470", constructorArguments: [ name, symbol, _metadataUri, _royaltyRecipient, _royaltyValue, _custodians, _saleLength, _claimAddress, ], }) } main() .then(() => process.exit(0)) .catch((error) => { console.error(error) process.exit(1) }) ``` # Using Snowtrace (/docs/primary-network/verify-contract/snowtrace) --- title: Using Snowtrace description: Learn how to verify a contract on the Avalanche C-chain using Snowtrace. 
--- The C-Chain Explorer supports verifying smart contracts, allowing users to review them. The Mainnet C-Chain Explorer is [here](https://snowtrace.io/) and the Fuji Testnet Explorer is [here](https://testnet.snowtrace.io/). If you have issues, contact us on [Discord](https://chat.avalabs.org/). ## Steps Navigate to the _Contract_ tab at the Explorer page for your contract's address. ![verify and publish](/images/snowtrace1.png) Click _Verify & Publish_ to enter the smart contract verification page. ![SRC](/images/snowtrace2.png) [Libraries](https://docs.soliditylang.org/en/v0.8.4/contracts.html?highlight=libraries#libraries) can be provided. If they are, they must be deployed, independently verified, and listed in the _Add Contract Libraries_ section. ![libraries](/images/snowtrace3.png) The C-Chain Explorer can fetch constructor arguments automatically for simple smart contracts. More complicated contracts might require you to pass in special constructor arguments. Smart contracts with complicated constructors may have validation issues (see the Caveats section below). You can try this [online ABI encoder](https://abi.hashex.org/). ## Requirements - **IMPORTANT** Contracts should be verified on Testnet before being deployed to Mainnet to ensure there are no issues. - Contracts must be flattened. Includes will not work. - Contracts should be compilable in [Remix](https://remix.ethereum.org/). A flattened contract with `pragma experimental ABIEncoderV2` (as an example) can create unusual binary and/or constructor blobs. This might cause validation issues. - The C-Chain Explorer **only** validates [solc JavaScript](https://github.com/ethereum/solc-bin) and only supports [Solidity](https://docs.soliditylang.org/) contracts. ## Libraries The compiled bytecode will indicate whether there are external libraries. If you released with Remix, you will also see multiple transactions created. 
``` { "linkReferences": { "contracts/Storage.sol": { "MathUtils": [ { "length": 20, "start": 3203 } ... ] } }, "object": "....", ... } ``` This requires you to add the external libraries in order to verify the code. A library can have dependent libraries. To verify a library, the hierarchy of dependencies will need to be provided to the C-Chain Explorer. Verification may fail if you provide more than the library plus its dependencies (that is, you might need to prune the Solidity code to exclude everything but the necessary contracts). You can also see references in the byte code in the form `__$75f20d36....$__`. The keccak256 hash is generated from the library name. Example [online converter](https://emn178.github.io/online-tools/keccak_256.html): `contracts/Storage.sol:MathUtils` => `75f20d361629befd780a5bd3159f017ee0f8283bdb6da80805f83e829337fd12` ## Examples - [SwapFlashLoan](https://testnet.snowtrace.io/address/0x12DF75Fed4DEd309477C94cE491c67460727C0E8/contract/43113/code) SwapFlashLoan uses `swaputils` and `mathutils`: - [SwapUtils](https://testnet.snowtrace.io/address/0x6703e4660E104Af1cD70095e2FeC337dcE034dc1/contract/43113/code) SwapUtils requires mathutils: - [MathUtils](https://testnet.snowtrace.io/address/0xbA21C84E4e593CB1c6Fe6FCba340fa7795476966/contract/43113/code) ## Caveats ### SPDX License Required An SPDX license identifier must be provided. ```solidity // SPDX-License-Identifier: ... ``` ### `keccak256` Strings Processed The C-Chain Explorer interprets all `keccak256(...)` strings, even those in comments. This can cause issues with constructor arguments. ```solidity /// keccak256("1"); keccak256("2"); ``` This could cause automatic constructor verification failures. If you receive errors about constructor arguments, they can be provided in ABI hex encoded form on the contract verification page. ### Solidity Constructors Constructors and inherited constructors can cause problems verifying the constructor arguments.
Example: ```solidity abstract contract Parent { constructor () { address msgSender = ...; emit Something(address(0), msgSender); } } contract Main is Parent { constructor ( string memory _name, address deposit, uint fee ) { ... } } ``` If you receive errors about constructor arguments, they can be provided in ABI hex encoded form on the contract verification page. # ICM Contract Addresses (/docs/cross-chain/icm-contracts/addresses) --- title: ICM Contract Addresses --- ## Deployed Addresses | Contract | Address | Chain | | --------------------- | ---------------------------------------------- | ------------------------ | | `TeleporterMessenger` | **0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf** | All chains, all networks | | `TeleporterRegistry` | **0x7C43605E14F391720e1b37E49C78C4b03A488d98** | Mainnet C-Chain | | `TeleporterRegistry` | **0xF86Cb19Ad8405AEFa7d09C778215D2Cb6eBfB228** | Fuji C-Chain | 1. Using [Nick's method](https://yamenmerhi.medium.com/nicks-method-ethereum-keyless-execution-168a6659479c#), `TeleporterMessenger` is deployed at the same universal address on all chains; this address changes with each major release of the ICM contracts. **Compatibility exists only between same versions of `TeleporterMessenger` instances.** See [ICM Contract Deployment](https://github.com/ava-labs/teleporter/blob/main/utils/contract-deployment/README.md) and [Deploy ICM Contracts to a Subnet](https://github.com/ava-labs/teleporter/tree/main?tab=readme-ov-file#deploy-teleporter-to-a-subnet) for more details. 2. `TeleporterRegistry` can be deployed to any address. See [Deploy TeleporterRegistry to a Subnet](https://github.com/ava-labs/teleporter/blob/main/README.md#deploy-teleporter-to-a-subnet) for details. The table above enumerates the canonical registry addresses on the Mainnet and Fuji C-Chains. ## A Note on Versioning Release versions follow the [semver](https://semver.org/) convention of incompatible Major releases.
A new Major version is released whenever the `TeleporterMessenger` bytecode is changed and a new version of `TeleporterMessenger` is meant to be deployed. Due to the use of Nick's method to deploy the contract to the same address on all chains (see [ICM Contract Deployment](https://github.com/ava-labs/teleporter/blob/main/utils/contract-deployment/README.md) for details), this also means that each new Major version results in a different ICM contract address. Minor and Patch versions may pertain to contract changes that do not change the `TeleporterMessenger` bytecode, or to changes in the test frameworks, and will only be included in tags. # Teleporter CLI (/docs/cross-chain/icm-contracts/cli) --- title: "Teleporter CLI" description: "The CLI is a command line interface for interacting with the Teleporter contracts." edit_url: https://github.com/ava-labs/teleporter/edit/main/cmd/teleporter-cli/README.md --- # ICM Contracts CLI This directory contains the source code for the ICM Contracts CLI. The CLI is a command line interface for interacting with the ICM contracts. It is written with [cobra](https://github.com/spf13/cobra) commands as a Go application. ## Build To build the CLI, run `go build` from this directory. This will create a binary called `teleporter-cli` in the current directory. ## Usage The CLI has a number of subcommands. To see the list of subcommands, run `./teleporter-cli help`. To see the help for a specific subcommand, run `./teleporter-cli help <subcommand>`. The supported subcommands include: - `event`: given a log event's topics and data, attempts to decode it into an ICM event in a more readable format. - `message`: given an ICM message encoded as a hex string, attempts to decode it into an ICM message in a more readable format. - `transaction`: given a transaction hash, attempts to decode all relevant `TeleporterMessenger` and ICM log events in a more readable format.
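The versioning rule described in the sections above — only `TeleporterMessenger` instances from the same Major release share bytecode, deployment address, and therefore compatibility — can be sketched as a simple check. This is illustrative TypeScript only; `isCompatible` and `majorVersion` are invented helper names, not part of the CLI or the ICM tooling:

```typescript
// Extract the major component of a semver string such as "1.2.3".
function majorVersion(version: string): number {
  const major = Number(version.split(".")[0]);
  if (!Number.isInteger(major)) throw new Error(`invalid semver: ${version}`);
  return major;
}

// Per the versioning note: only Major releases change the TeleporterMessenger
// bytecode (and address), so Minor/Patch differences remain compatible.
function isCompatible(a: string, b: string): boolean {
  return majorVersion(a) === majorVersion(b);
}

console.log(isCompatible("1.0.0", "1.2.3")); // true — same Major release
console.log(isCompatible("1.2.3", "2.0.0")); // false — bytecode differs
```

In other words, a dApp pinned to a `v1.x` Teleporter deployment can ignore Minor and Patch bumps, but must migrate to the new contract address on a Major bump.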
# Deep Dive into ICM Contracts (/docs/cross-chain/icm-contracts/deep-dive) --- title: "Deep Dive into ICM Contracts" description: "ICM Contracts is an EVM compatible cross-Avalanche L1 communication protocol built on top of Avalanche Interchain Messaging (ICM), and implemented as a Solidity smart contract." edit_url: https://github.com/ava-labs/icm-contracts/edit/main/README.md --- > [!IMPORTANT] > This repository has been moved in its entirety to [`icm-services`](https://github.com/ava-labs/icm-services). All commits exist with the same hash, and tags and their associated releases have also been migrated. All new issues, pull requests, and discussions should be opened in the `icm-services` repository. For the latest releases, see the `icm-services` [release page](https://github.com/ava-labs/icm-services/releases). # ICM Contracts For help getting started with building ICM contracts, refer to [the avalanche-starter-kit repository](https://github.com/ava-labs/avalanche-starter-kit). - [Setup](#setup) - [Initialize the repository](#initialize-the-repository) - [Dependencies](#dependencies) - [Structure](#structure) - [E2E tests](#e2e-tests) - [Run specific E2E tests](#run-specific-e2e-tests) - [ABI Bindings](#abi-bindings) - [Docs](#docs) - [Resources](#resources) ## Setup ### Initialize the repository - Get all submodules: `git submodule update --init --recursive` ### Dependencies - [Ginkgo](https://onsi.github.io/ginkgo/#installing-ginkgo) for running the end-to-end tests. - [Foundry](https://book.getfoundry.sh/) for building contracts. Use `./scripts/install_foundry.sh` to install it. ## Structure - `contracts/` - [`governance/`](https://github.com/ava-labs/teleporter/blob/main/contracts/governance/README.md) includes contracts related to L1 governance. - [`ictt/`](https://github.com/ava-labs/teleporter/blob/main/contracts/ictt/README.md) includes the Interchain Token Transfer contracts, which facilitate the transfer of tokens among L1s.
- [`teleporter/`](https://github.com/ava-labs/teleporter/blob/main/contracts/teleporter/README.md) includes `TeleporterMessenger`, which serves as the interface for most contracts to use ICM. - [`registry/`](https://github.com/ava-labs/teleporter/blob/main/contracts/teleporter/registry/README.md) includes a registry contract for managing different versions of `TeleporterMessenger`. - [`validator-manager/`](https://github.com/ava-labs/teleporter/blob/main/contracts/validator-manager/README.md) includes contracts for managing the validator set of an L1. - `abi-bindings/` includes Go ABI bindings for the contracts in `contracts/`. - [`audits/`](https://github.com/ava-labs/teleporter/blob/main/audits/README.md) includes all audits conducted on contracts in this repository. - `tests/` includes integration tests for the contracts in `contracts/`, written using the [Ginkgo](https://onsi.github.io/ginkgo/) testing framework. - `utils/` includes Go utility functions for interacting with the contracts in `contracts/`. Included are Golang scripts to derive the expected EVM contract address deployed from a given EOA at a specific nonce, and to construct a transaction that deploys provided byte code to the same address on any EVM chain using [Nick's method](https://yamenmerhi.medium.com/nicks-method-ethereum-keyless-execution-168a6659479c#). - `scripts/` includes bash scripts for interacting with TeleporterMessenger in various environments, as well as utility scripts. - `abi_bindings.sh` generates ABI bindings for the contracts in `contracts/` and outputs them to `abi-bindings/`. - `lint.sh` performs Solidity and Golang linting. ## E2E tests In addition to the Docker setup, end-to-end integration tests written using Ginkgo are provided in the `tests/` directory. E2E tests are run as part of CI, but can also be run locally. Any new features or cross-chain example applications checked into the repository should be accompanied by an end-to-end test.
See the [Contribution Guide](https://github.com/ava-labs/teleporter/blob/main/CONTRIBUTING.md) for additional details. To run the E2E tests locally, you'll need to install Ginkgo following the instructions [here](https://onsi.github.io/ginkgo/#installing-ginkgo). Then run the following command from the root of the repository: ```bash ./scripts/e2e_test.sh ``` ### Run specific E2E tests To run a specific E2E test, set the environment variable `GINKGO_FOCUS`; Ginkgo will then run only the tests whose descriptions match the provided input. For example, to run the `Calculate Teleporter message IDs` test: ```bash GINKGO_FOCUS="Calculate Teleporter message IDs" ./scripts/e2e_test.sh ``` A substring of the full test description can be used as well: ```bash GINKGO_FOCUS="Calculate Teleporter" ./scripts/e2e_test.sh ``` The E2E test script also supports a `--components` flag, making it easy to run all the test cases for a particular project. For example, to run all E2E tests for the `tests/flows/ictt/` folder: ```bash ./scripts/e2e_test.sh --components "ictt" ``` ## ABI Bindings The E2E tests written in Golang interface with the Solidity contracts through generated ABI bindings. To regenerate Golang ABI bindings for the Solidity smart contracts, run: ```bash ./scripts/abi_bindings.sh ``` The auto-generated bindings should be written under the `abi-bindings/` directory. ## Docs - [ICM Protocol Overview](https://github.com/ava-labs/teleporter/blob/main/contracts/teleporter/README.md) - [Teleporter Registry and Upgrades](https://github.com/ava-labs/teleporter/blob/main/contracts/teleporter/registry/README.md) - [Contract Deployment](https://github.com/ava-labs/teleporter/blob/main/utils/contract-deployment/README.md) - [Teleporter CLI](https://github.com/ava-labs/teleporter/blob/main/cmd/teleporter-cli/README.md) ## Resources - List of blockchain signing cryptography algorithms [here](http://ethanfast.com/top-crypto.html).
- Background on stateful precompiles [here](https://medium.com/avalancheavax/customizing-the-evm-with-stateful-precompiles-f44a34f39efd). - Background on BLS signature aggregation [here](https://crypto.stanford.edu/~dabo/pubs/papers/BLSmultisig.html). # Getting Started (/docs/cross-chain/icm-contracts/getting-started) --- title: Getting Started --- Dive deeper into ICM contracts and kickstart your journey in building cross-chain dApps by enrolling in our [ICM contracts course](/academy/interchain-messaging). Note: All example applications in the [examples](https://github.com/ava-labs/teleporter/tree/example-sequential-message-app/contracts/sequential-delivery-example) directory are meant for educational purposes only and are not audited. The example contracts are not intended for use in production environments. This section walks through how to build an example cross-chain application on top of ICM contracts, recreating the `ExampleCrossChainMessenger` [contract](https://github.com/ava-labs/teleporter/tree/example-sequential-message-app/contracts/sequential-delivery-example) that sends arbitrary string data from one chain to another. Note that this tutorial is meant for educational purposes only. The resulting code is not intended for use in production environments. Step 1: Create Initial Contract[​](#step-1-create-initial-contract "Direct link to heading") -------------------------------------------------------------------------------------------- Create a new file called `MyExampleCrossChainMessenger.sol` in a new directory: ``` mkdir teleporter/contracts/src/CrossChainApplications/MyExampleCrossChainMessenger/ touch teleporter/contracts/src/CrossChainApplications/MyExampleCrossChainMessenger/MyExampleCrossChainMessenger.sol ``` At the top of the file, define the Solidity version to work with, and import the necessary types and interfaces.
``` pragma solidity 0.8.18; import {ITeleporterMessenger, TeleporterMessageInput, TeleporterFeeInfo} from "@teleporter/ITeleporterMessenger.sol"; import {ReentrancyGuard} from "@openzeppelin/contracts/security/ReentrancyGuard.sol"; ``` Next, define the initial empty contract. The contract inherits from `ReentrancyGuard` to prevent reentrancy attacks. ``` contract MyExampleCrossChainMessenger is ReentrancyGuard { } ``` Finally, add the following struct and event declarations into the body of the contract, which will be integrated later: ``` /** * @dev Messages sent to this contract. */ struct Message { address sender; string message; } /** * @dev Emitted when a message is submitted to be sent. */ event SendMessage( bytes32 indexed destinationBlockchainID, address indexed destinationAddress, address feeTokenAddress, uint256 feeAmount, uint256 requiredGasLimit, string message ); /** * @dev Emitted when a new message is received from a given chain ID. */ event ReceiveMessage( bytes32 indexed sourceBlockchainID, address indexed originSenderAddress, string message ); ``` Step 2: Integrating ICM Contracts[​](#step-2-integrating-teleporter-messenger "Direct link to heading") -------------------------------------------------------------------------------------------------------------- Now that the initial empty `MyExampleCrossChainMessenger` is defined, it's time to integrate with `ITeleporterMessenger`, which will provide the functionality to deliver cross chain messages. Create a state variable of `ITeleporterMessenger` type called `teleporterMessenger`. Then create a constructor that takes in the address where the ICM Messenger contract is deployed on this chain, and sets the corresponding state variable.
``` ITeleporterMessenger public immutable teleporterMessenger; constructor(address teleporterMessengerAddress) { teleporterMessenger = ITeleporterMessenger(teleporterMessengerAddress); } ``` Step 3: Send and Receive[​](#step-3-send-and-receive "Direct link to heading") ------------------------------------------------------------------------------ Now that `MyExampleCrossChainMessenger` has an instantiation of `ITeleporterMessenger`, the next step is to add the functionality of sending and receiving arbitrary string data between chains. To start, create the function declaration for `sendMessage`, which will send string data cross-chain to the specified destination address's receiver. This function allows callers to specify the destination chain ID, the destination address to send to, relayer fees, and the required gas limit for message execution at the destination address. ``` /** * @dev Send a new message to another chain. */ function sendMessage( bytes32 destinationBlockchainID, address destinationAddress, address feeTokenAddress, uint256 feeAmount, uint256 requiredGasLimit, string calldata message ) external returns (bytes32 messageID) { } ``` `MyExampleCrossChainMessenger` also needs to implement `ITeleporterReceiver`. First, add the import of this interface: ``` import {ITeleporterReceiver} from "@teleporter/ITeleporterReceiver.sol"; ``` Then declare that the contract will implement it: ``` contract MyExampleCrossChainMessenger is - ReentrancyGuard + ReentrancyGuard, + ITeleporterReceiver { ``` Finally, add the method `receiveTeleporterMessage` that receives the cross-chain messages from ICM. ``` /** * @dev Receive a new message from another chain. */ function receiveTeleporterMessage( bytes32 sourceBlockchainID, address originSenderAddress, bytes calldata message ) external { } ``` Now it's time to implement the methods, starting with `sendMessage`. First, add the necessary imports.
``` import {SafeERC20TransferFrom, SafeERC20} from "@teleporter/SafeERC20TransferFrom.sol"; import {IERC20} from "@openzeppelin/contracts/token/ERC20/IERC20.sol"; ``` Next, add a `using` directive to the top of the contract body specifying `SafeERC20` as the `IERC20` implementation to use: ``` using SafeERC20 for IERC20; ``` Then add a check to the `sendMessage` function for whether `feeAmount` is greater than zero. If it is, transfer the fee amount of the `IERC20` asset at `feeTokenAddress` to this contract, and approve the Teleporter Messenger (saved as a state variable) to spend it. ``` // For non-zero fee amounts, first transfer the fee to this contract, and then // allow the Teleporter contract to spend it. uint256 adjustedFeeAmount; if (feeAmount > 0) { adjustedFeeAmount = SafeERC20TransferFrom.safeTransferFrom( IERC20(feeTokenAddress), feeAmount ); IERC20(feeTokenAddress).safeIncreaseAllowance( address(teleporterMessenger), adjustedFeeAmount ); } ``` > Note: Relayer fees are an optional way to incentivize relayers to deliver an ICM message to its destination. They are not strictly necessary, and may be omitted if a relayer is willing to relay messages with no fee, such as with a self-hosted relayer. Next, at the end of the `sendMessage` function, add the event emission, as well as the call to the `TeleporterMessenger` contract with the message data to be executed when delivered to the destination address. Form a `TeleporterMessageInput` and call `sendCrossChainMessage` on the `TeleporterMessenger` instance to start the cross chain messaging process. The `message` must be ABI encoded so that it can be properly decoded on the receiving end. > Note: `allowedRelayerAddresses` is empty in this example, meaning any relayer can try to deliver this cross chain message. Specific relayer addresses can be specified to ensure only those relayers can deliver the message.
``` emit SendMessage({ destinationBlockchainID: destinationBlockchainID, destinationAddress: destinationAddress, feeTokenAddress: feeTokenAddress, feeAmount: adjustedFeeAmount, requiredGasLimit: requiredGasLimit, message: message }); return teleporterMessenger.sendCrossChainMessage( TeleporterMessageInput({ destinationBlockchainID: destinationBlockchainID, destinationAddress: destinationAddress, feeInfo: TeleporterFeeInfo({ feeTokenAddress: feeTokenAddress, amount: adjustedFeeAmount }), requiredGasLimit: requiredGasLimit, allowedRelayerAddresses: new address[](0), message: abi.encode(message) }) ); ``` With the sending side complete, the next step is to implement `ITeleporterReceiver.receiveTeleporterMessage`. The receiver in this example will just receive the arbitrary string data, and check that the message is sent through ICM. To the `receiveTeleporterMessage` function, add: ``` // Only the Teleporter receiver can deliver a message. require(msg.sender == address(teleporterMessenger), "Unauthorized."); // do something with message. ``` The base of sending and receiving messages cross chain is complete. `MyExampleCrossChainMessenger` can now be expanded with functionality that saves the received messages, and allows users to query for the latest message received from a specified chain. Step 4: Storing the Message[​](#step-4-storing-the-message "Direct link to heading") ------------------------------------------------------------------------------------ Start by adding a map to the body of the contract, in which the key is the `sourceBlockchainID` and the value is the latest `message` sent from that chain. The `message` is of type `Message`, which is already declared in the contract. ``` mapping(bytes32 sourceBlockchainID => Message message) private _messages; ``` Next, update `receiveTeleporterMessage` to save the message into the mapping after it is received and verified that it's sent from Teleporter. 
At the end of that function, ABI decode the `message` bytes into a string, and emit the `ReceiveMessage` event. ``` // Store the message. string memory messageString = abi.decode(message, (string)); _messages[sourceBlockchainID] = Message( originSenderAddress, messageString ); emit ReceiveMessage( sourceBlockchainID, originSenderAddress, messageString ); ``` Next, add a function to the contract called `getCurrentMessage` that allows users or contracts to easily query the contract for the latest message sent by a specified chain. ``` /** * @dev Check the current message from another chain. */ function getCurrentMessage( bytes32 sourceBlockchainID ) external view returns (address, string memory) { Message memory messageInfo = _messages[sourceBlockchainID]; return (messageInfo.sender, messageInfo.message); } ``` Step 5: Upgrade Support[​](#step-5-upgrade-support "Direct link to heading") ---------------------------------------------------------------------------- At this point, the contract is now fully usable, and can be used to send arbitrary string data between chains. However, there are a few more modifications that need to be made to support upgrades to ICM contracts. For a more in-depth explanation of how to support upgrades, see the Upgrades README [here](https://github.com/ava-labs/teleporter/blob/main/contracts/teleporter/registry/UPGRADING.md). The first change to make is to inherit from `TeleporterOwnerUpgradeable` instead of `ITeleporterReceiver`. `TeleporterOwnerUpgradeable` integrates with the `TeleporterRegistry` via `TeleporterUpgradeable` to easily utilize the latest `TeleporterMessenger` implementation. `TeleporterOwnerUpgradeable` also ensures that only an admin address for managing Teleporter versions, specified by the constructor argument `teleporterManager`, is able to upgrade the `TeleporterMessenger` implementation used by the contract. 
To start, replace the import for `ITeleporterReceiver` with `TeleporterOwnerUpgradeable`: ``` - import {ITeleporterReceiver} from "@teleporter/ITeleporterReceiver.sol"; + import {TeleporterOwnerUpgradeable} from "@teleporter/upgrades/TeleporterOwnerUpgradeable.sol"; ``` Also, replace the contract declaration to inherit from `TeleporterOwnerUpgradeable` instead of `ITeleporterReceiver`: ``` contract MyExampleCrossChainMessenger is ReentrancyGuard, - ITeleporterReceiver + TeleporterOwnerUpgradeable { ``` Next, update the constructor to invoke the `TeleporterOwnerUpgradeable` constructor. ``` - constructor(address teleporterMessengerAddress) { - teleporterMessenger = ITeleporterMessenger(teleporterMessengerAddress); - } + constructor( + address teleporterRegistryAddress, + address teleporterManager + ) TeleporterOwnerUpgradeable(teleporterRegistryAddress, teleporterManager) {} ``` Then, remove the `teleporterMessenger` state variable: ``` - ITeleporterMessenger public immutable teleporterMessenger; ``` And at the beginning of `sendMessage()` add a call to get the latest `ITeleporterMessenger` implementation from `TeleporterRegistry`: ``` ITeleporterMessenger teleporterMessenger = teleporterRegistry.getLatestTeleporter(); ``` And finally, change `receiveTeleporterMessage` to `_receiveTeleporterMessage`, mark it as `internal override`, and change the data location of its `message` parameter to `memory`. It's also safe to remove the check against `teleporterMessenger` in `_receiveTeleporterMessage`, since that same check is handled in `TeleporterOwnerUpgradeable`'s `receiveTeleporterMessage` function. ``` - function receiveTeleporterMessage( + function _receiveTeleporterMessage( bytes32 sourceBlockchainID, address originSenderAddress, - bytes calldata message + bytes memory message - ) external { + ) internal override { - // Only the Teleporter receiver can deliver a message. 
- require(msg.sender == address(teleporterMessenger), "Unauthorized."); ``` `MyExampleCrossChainMessenger` is now a working cross-chain dApp built on top of ICM contracts! Full example [here](https://github.com/ava-labs/teleporter/tree/example-sequential-message-app/contracts/sequential-delivery-example). Step 6: Testing[​](#step-6-testing "Direct link to heading") ------------------------------------------------------------ For testing, `scripts/local/e2e_test.sh` sets up a local test environment consisting of three avalanche-l1s deployed with ICM contracts, and a lightweight inline relayer implementation to facilitate cross chain message delivery. An end-to-end test for `ExampleCrossChainMessenger` is included in `tests/flows/example_messenger.go`, which performs the following: 1. Deploys the [ExampleERC20](https://github.com/ava-labs/teleporter/blob/main/contracts/mocks/ExampleERC20.sol) token to avalanche-l1 A. 2. Deploys `ExampleCrossChainMessenger` to both avalanche-l1s A and B. 3. Approves the cross-chain messenger on avalanche-l1 A to spend ERC20 tokens from the default address. 4. Sends `"Hello, world!"` from avalanche-l1 A to avalanche-l1 B's cross-chain messenger to receive. 5. Calls `getCurrentMessage` on avalanche-l1 B to make sure the right message and sender are received. To run this test against the newly created `MyExampleCrossChainMessenger`, first generate the ABI Go bindings by running `./scripts/abi_bindings.sh --contract MyExampleCrossChainMessenger` from the root of this repository. 
Then, add to the generated Go package the `SendMessageRequiredGas` constant, which is required by the tests, in a new file `abi-bindings/go/CrossChainApplications/MyExampleCrossChainMessenger/MyExampleCrossChainMessenger/constants.go`: ```go package myexamplecrosschainmessenger import "math/big" var SendMessageRequiredGas = big.NewInt(300000) ``` Next, modify `tests/utils/utils.go`, which is used by `tests/flows/example_messenger.go`, to use the ABI bindings for `MyExampleCrossChainMessenger` instead of `ExampleCrossChainMessenger`. First replace the import: ``` - examplecrosschainmessenger "github.com/ava-labs/teleporter/abi-bindings/go/CrossChainApplications/examples/ExampleMessenger/ExampleCrossChainMessenger" + myexamplecrosschainmessenger "github.com/ava-labs/teleporter/abi-bindings/go/CrossChainApplications/MyExampleCrossChainMessenger/MyExampleCrossChainMessenger" ``` Then, in that same `utils.go`, replace all instances of `examplecrosschainmessenger` with `myexamplecrosschainmessenger` and all instances of `ExampleCrossChainMessenger` with `MyExampleCrossChainMessenger`. Finally, from the root of the repository, invoke the tests with an extra bit of configuration that tells the Ginkgo test framework to focus only on the tests of this example contract (excluding all of the broader tests of Teleporter): ``` GINKGO_FOCUS="Example cross chain messenger" scripts/local/e2e_test.sh ``` # ICM Contracts Avalanche L1s on Devnet (/docs/cross-chain/icm-contracts/icm-contracts-on-devnet) --- title: ICM Contracts Avalanche L1s on Devnet description: This how-to guide focuses on deploying ICM contract-enabled Avalanche L1s to a Devnet. --- After this tutorial, you will have created a Devnet, deployed two Avalanche L1s in it, and enabled them to cross-communicate with each other and with the C-Chain through ICM contracts and the underlying Warp technology.
For more information on cross chain messaging through ICM contracts and Warp, check: - [Cross Chain References](/docs/cross-chain) Note that currently only [Subnet-EVM](https://github.com/ava-labs/subnet-evm) and [Subnet-EVM-Based](/docs/avalanche-l1s/evm-configuration/evm-l1-customization) virtual machines support ICM contracts. ## Prerequisites Before we begin, you will need to have: - Created an AWS account and have an updated AWS `credentials` file in your home directory with a \[default\] profile Note: the tutorial uses AWS hosts, but Devnets can also be created and operated in other supported cloud providers, such as GCP. Create Avalanche L1s Configurations[​](#create-avalanche-l1s-configurations "Direct link to heading") ----------------------------------------------------------------------------------------- For this section, we will follow these [steps](/docs/tooling/avalanche-cli/cross-chain/teleporter-local-network#create-avalanche-l1s-configurations) to create two ICM contract-enabled Avalanche L1s, `` and ``. Create a Devnet and Deploy an Avalanche L1 in It[​](#create-a-devnet-and-deploy-a-avalanche-l1-in-it "Direct link to heading") ----------------------------------------------------------------------------------------------------------------- Let's use the `devnet wiz` command to create a devnet `` and deploy `` in it. The devnet will be created in the `us-east-1` region of AWS, and will consist of only 5 validators. ``` avalanche node devnet wiz --aws --node-type default --region us-east-1 --num-validators 5 --num-apis 0 --enable-monitoring=false --default-validator-params Creating the devnet... Creating new EC2 instance(s) on AWS... ... Deploying [Avalanche L1] to Cluster ... configuring AWM RElayer on host i-0f1815c016b555fcc Setting the nodes as Avalanche L1 trackers ...
Setting up ICM contracts on Avalanche L1 Teleporter Messenger successfully deployed to Avalanche L1 (0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf) Teleporter Registry successfully deployed to Avalanche L1 (0xb623C4495220C603D0A939D32478F55891a61750) Teleporter Messenger successfully deployed to c-chain (0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf) Teleporter Registry successfully deployed to c-chain (0x5DB9A7629912EBF95876228C24A848de0bfB43A9) Starting AWM Relayer Service setting AWM Relayer on host i-0f1815c016b555fcc to relay L1 chain1 updating configuration file ~/.avalanche-cli/nodes/i-0f1815c016b555fcc/services/awm-relayer/awm-relayer-config.json Devnet is successfully created and is now validating blockchain chain1! Avalanche L1 RPC URL: http://67.202.23.231:9650/ext/bc/fqcM24LNb3kTV7KD1mAvUJXYy5XunwP8mrE44YuNwPjgZHY6p/rpc ✓ Cluster information YAML file can be found at ~/.avalanche-cli/nodes/inventories//clusterInfo.yaml at local host ``` Notice some details here: - Two smart contracts are deployed to the Avalanche L1: Teleporter Messenger and Teleporter Registry - Both ICM smart contracts are also deployed to `C-Chain` - [AWM ICM Relayer](https://github.com/ava-labs/icm-services/tree/main/relayer) is installed and configured as a service on one of the nodes (A Relayer [listens](/docs/cross-chain/teleporter/overview#data-flow) for new messages being generated on a source Avalanche L1 and sends them to the destination Avalanche L1.) The CLI configures the Relayer to enable every Avalanche L1 to send messages to all other Avalanche L1s. If you add more Avalanche L1s to the Devnet, the Relayer will be automatically reconfigured.
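The relayer topology the CLI sets up — a single relayer watching every chain and forwarding each message to its destination, re-wired automatically as chains are added — can be sketched conceptually. This is illustrative TypeScript only; the `Chain` and `Relayer` types and the chain names are invented for this sketch and bear no relation to the real awm-relayer code:

```typescript
// A cross-chain message names its destination chain and carries a payload.
type Message = { destinationChain: string; payload: string };

// A toy chain: dApps "emit" messages on it, and it records deliveries.
class Chain {
  received: Message[] = [];
  private listeners: ((m: Message) => void)[] = [];
  constructor(public name: string) {}
  emit(m: Message) { this.listeners.forEach((fn) => fn(m)); }
  onMessage(fn: (m: Message) => void) { this.listeners.push(fn); }
}

class Relayer {
  private chains = new Map<string, Chain>();
  // Adding a chain re-wires the relayer so every chain can reach every
  // other, mirroring the automatic reconfiguration described above.
  addChain(chain: Chain) {
    this.chains.set(chain.name, chain);
    chain.onMessage((m) => {
      const dest = this.chains.get(m.destinationChain);
      if (dest) dest.received.push(m); // deliver to the destination chain
    });
  }
}

const relayer = new Relayer();
const chain1 = new Chain("chain1");
const cChain = new Chain("c-chain");
relayer.addChain(chain1);
relayer.addChain(cChain);
chain1.emit({ destinationChain: "c-chain", payload: "Hello, world!" });
console.log(cChain.received[0].payload); // "Hello, world!"
```

The real relayer additionally verifies each message's aggregated BLS signature before delivery; the sketch only captures the routing behavior.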
Checking Devnet Configuration and Relayer Logs[​](#checking-devnet-configuration-and-relayer-logs "Direct link to heading") --------------------------------------------------------------------------------------------------------------------------- Execute `node list` command to get a list of the devnet nodes: ``` avalanche node list Cluster "" (Devnet) Node i-0f1815c016b555fcc (NodeID-91PGQ7keavfSV1XVFva2WsQXWLWZqqqKe) 67.202.23.231 [Validator,Relayer] Node i-026392a651571232c (NodeID-AkPyyTs9e9nPGShdSoxdvWYZ6X2zYoyrK) 52.203.183.68 [Validator] Node i-0d1b98d5d941d6002 (NodeID-ByEe7kuwtrPStmdMgY1JiD39pBAuFY2mS) 50.16.235.194 [Validator] Node i-0c291f54bb38c2984 (NodeID-8SE2CdZJExwcS14PYEqr3VkxFyfDHKxKq) 52.45.0.56 [Validator] Node i-049916e2f35231c29 (NodeID-PjQY7xhCGaB8rYbkXYddrr1mesYi29oFo) 3.214.163.110 [Validator] ``` Notice that, in this case, `i-0f1815c016b555fcc` was set as Relayer. This host contains a `systemd` service called `awm-relayer` that can be used to check the Relayer logs, and set the execution status. To view the Relayer logs, the following command can be used: ``` avalanche node ssh i-0f1815c016b555fcc "journalctl -u awm-relayer --no-pager" [Node i-0f1815c016b555fcc (NodeID-91PGQ7keavfSV1XVFva2WsQXWLWZqqqKe) 67.202.23.231 [Validator,Relayer]] Warning: Permanently added '67.202.23.231' (ED25519) to the list of known hosts. -- Logs begin at Fri 2024-04-05 14:11:43 UTC, end at Fri 2024-04-05 14:30:24 UTC. -- Apr 05 14:15:06 ip-172-31-47-187 systemd[1]: Started AWM Relayer systemd service. 
Apr 05 14:15:07 ip-172-31-47-187 awm-relayer[6886]: {"level":"info","timestamp":"2024-04-05T14:15:07.018Z","logger":"awm-relayer","caller":"main/main.go:66","msg":"Initializing awm-relayer"} Apr 05 14:15:07 ip-172-31-47-187 awm-relayer[6886]: {"level":"info","timestamp":"2024-04-05T14:15:07.018Z","logger":"awm-relayer","caller":"main/main.go:71","msg":"Set config options."} Apr 05 14:15:07 ip-172-31-47-187 awm-relayer[6886]: {"level":"info","timestamp":"2024-04-05T14:15:07.018Z","logger":"awm-relayer","caller":"main/main.go:78","msg":"Initializing destination clients"} Apr 05 14:15:07 ip-172-31-47-187 awm-relayer[6886]: {"level":"info","timestamp":"2024-04-05T14:15:07.021Z","logger":"awm-relayer","caller":"main/main.go:97","msg":"Initializing app request network"} Apr 05 14:15:08 ip-172-31-47-187 awm-relayer[6886]: {"level":"info","timestamp":"2024-04-05T14:15:08.159Z","logger":"awm-relayer","caller":"main/main.go:309","msg":"starting metrics server...","port":9090} Apr 05 14:15:08 ip-172-31-47-187 awm-relayer[6886]: {"level":"info","timestamp":"2024-04-05T14:15:08.160Z","logger":"awm-relayer","caller":"main/main.go:251","msg":"Creating relayer","originBlockchainID":"fqcM24LNb3kTV7KD1mAvUJXYy5XunwP8mrE44YuNwPjgZHY6p"} Apr 05 14:15:08 ip-172-31-47-187 awm-relayer[6886]: {"level":"info","timestamp":"2024-04-05T14:15:08.160Z","logger":"awm-relayer","caller":"main/main.go:251","msg":"Creating relayer","originBlockchainID":"2EfJg86if9Ka5Ag73JRfoqWz4EGuFwtemaNf4XiBBpUW4YngS6"} Apr 05 14:15:08 ip-172-31-47-187 awm-relayer[6886]: {"level":"info","timestamp":"2024-04-05T14:15:08.160Z","logger":"awm-relayer","caller":"relayer/relayer.go:114","msg":"Creating relayer","subnetID":"11111111111111111111111111111111LpoYY","subnetIDHex":"0000000000000000000000000000000000000000000000000000000000000000","blockchainID":"2EfJg86if9Ka5Ag73JRfoqWz4EGuFwtemaNf4XiBBpUW4YngS6","blockchainIDHex":"a2b6b947cf2b9bf6df03c8caab08e38ab951d8b120b9c37265d9be01d86bb170"} Apr 05 14:15:08 
ip-172-31-47-187 awm-relayer[6886]: {"level":"info","timestamp":"2024-04-05T14:15:08.160Z","logger":"awm-relayer","caller":"relayer/relayer.go:114","msg":"Creating relayer","subnetID":"giY8tswWgZmcAWzPkoNrmjjrykited7GJ9799SsFzTiq5a1ML","subnetIDHex":"5a2e2d87d74b4ec62fdd6626e7d36a44716484dfcc721aa4f2168e8a61af63af","blockchainID":"fqcM24LNb3kTV7KD1mAvUJXYy5XunwP8mrE44YuNwPjgZHY6p","blockchainIDHex":"582fc7bd55472606c260668213bf1b6d291df776c9edf7e042980a84cce7418a"} Apr 05 14:15:08 ip-172-31-47-187 awm-relayer[6886]: {"level":"info","timestamp":"2024-04-05T14:15:08.171Z","logger":"awm-relayer","caller":"evm/subscriber.go:247","msg":"Successfully subscribed","blockchainID":"2EfJg86if9Ka5Ag73JRfoqWz4EGuFwtemaNf4XiBBpUW4YngS6"} Apr 05 14:15:08 ip-172-31-47-187 awm-relayer[6886]: {"level":"info","timestamp":"2024-04-05T14:15:08.171Z","logger":"awm-relayer","caller":"relayer/relayer.go:161","msg":"processed-missed-blocks set to false, starting processing from chain head","blockchainID":"2EfJg86if9Ka5Ag73JRfoqWz4EGuFwtemaNf4XiBBpUW4YngS6"} Apr 05 14:15:08 ip-172-31-47-187 awm-relayer[6886]: {"level":"info","timestamp":"2024-04-05T14:15:08.172Z","logger":"awm-relayer","caller":"relayer/message_relayer.go:662","msg":"Updating latest processed block in database","relayerID":"0xea06381426934ec1800992f41615b9d362c727ad542f6351dbfa7ad2849a35bf","latestBlock":6} Apr 05 14:15:08 ip-172-31-47-187 awm-relayer[6886]: {"level":"info","timestamp":"2024-04-05T14:15:08.173Z","logger":"awm-relayer","caller":"relayer/message_relayer.go:662","msg":"Updating latest processed block in database","relayerID":"0x175e14327136d57fe22d4bdd295ff14bea8a7d7ab1884c06a4d9119b9574b9b3","latestBlock":6} Apr 05 14:15:08 ip-172-31-47-187 awm-relayer[6886]: {"level":"info","timestamp":"2024-04-05T14:15:08.173Z","logger":"awm-relayer","caller":"main/main.go:272","msg":"Created relayer","blockchainID":"2EfJg86if9Ka5Ag73JRfoqWz4EGuFwtemaNf4XiBBpUW4YngS6"} Apr 05 14:15:08 ip-172-31-47-187 awm-relayer[6886]: 
{"level":"info","timestamp":"2024-04-05T14:15:08.173Z","logger":"awm-relayer","caller":"main/main.go:295","msg":"Relayer initialized. Listening for messages to relay.","originBlockchainID":"2EfJg86if9Ka5Ag73JRfoqWz4EGuFwtemaNf4XiBBpUW4YngS6"} Apr 05 14:15:08 ip-172-31-47-187 awm-relayer[6886]: {"level":"info","timestamp":"2024-04-05T14:15:08.178Z","logger":"awm-relayer","caller":"evm/subscriber.go:247","msg":"Successfully subscribed","blockchainID":"fqcM24LNb3kTV7KD1mAvUJXYy5XunwP8mrE44YuNwPjgZHY6p"} Apr 05 14:15:08 ip-172-31-47-187 awm-relayer[6886]: {"level":"info","timestamp":"2024-04-05T14:15:08.178Z","logger":"awm-relayer","caller":"relayer/relayer.go:161","msg":"processed-missed-blocks set to false, starting processing from chain head","blockchainID":"fqcM24LNb3kTV7KD1mAvUJXYy5XunwP8mrE44YuNwPjgZHY6p"} Apr 05 14:15:08 ip-172-31-47-187 awm-relayer[6886]: {"level":"info","timestamp":"2024-04-05T14:15:08.179Z","logger":"awm-relayer","caller":"relayer/message_relayer.go:662","msg":"Updating latest processed block in database","relayerID":"0xe584ccc0df44506255811f6b54375e46abd5db40a4c84fd9235a68f7b69c6f06","latestBlock":6} Apr 05 14:15:08 ip-172-31-47-187 awm-relayer[6886]: {"level":"info","timestamp":"2024-04-05T14:15:08.179Z","logger":"awm-relayer","caller":"relayer/message_relayer.go:662","msg":"Updating latest processed block in database","relayerID":"0x70f14d33bde4716928c5c4723d3969942f9dfd1f282b64ffdf96f5ac65403814","latestBlock":6} Apr 05 14:15:08 ip-172-31-47-187 awm-relayer[6886]: {"level":"info","timestamp":"2024-04-05T14:15:08.180Z","logger":"awm-relayer","caller":"main/main.go:272","msg":"Created relayer","blockchainID":"fqcM24LNb3kTV7KD1mAvUJXYy5XunwP8mrE44YuNwPjgZHY6p"} Apr 05 14:15:08 ip-172-31-47-187 awm-relayer[6886]: {"level":"info","timestamp":"2024-04-05T14:15:08.180Z","logger":"awm-relayer","caller":"main/main.go:295","msg":"Relayer initialized. 
Listening for messages to relay.","originBlockchainID":"fqcM24LNb3kTV7KD1mAvUJXYy5XunwP8mrE44YuNwPjgZHY6p"}
```

## Deploying the Second Avalanche L1

Let's use the `devnet wiz` command again to deploy ``. When deploying Avalanche L1 ``, the two ICM contracts will not be deployed to the C-Chain again, as they were already deployed alongside the first Avalanche L1.

```
avalanche node devnet wiz --default-validator-params
Adding Avalanche L1 into existing devnet ...
...
Deploying [chain2] to Cluster ...
Stopping AWM Relayer Service
Setting the nodes as Avalanche L1 trackers ...
Setting up ICM contracts on Avalanche L1
Teleporter Messenger successfully deployed to Avalanche L1 (0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf)
Teleporter Registry successfully deployed to Avalanche L1 (0xb623C4495220C603D0A939D32478F55891a61750)
Teleporter Messenger has already been deployed to c-chain
Starting AWM Relayer Service
setting AWM Relayer on host i-0f1815c016b555fcc to relay L1 chain2
updating configuration file ~/.avalanche-cli/nodes/i-0f1815c016b555fcc/services/awm-relayer/awm-relayer-config.json
Devnet is now validating Avalanche L1 chain2

Avalanche L1 RPC URL: http://67.202.23.231:9650/ext/bc/7gKt6evRnkA2uVHRfmk9WrH3dYZH9gEVVxDAknwtjvtaV3XuQ/rpc

✓ Cluster information YAML file can be found at ~/.avalanche-cli/nodes/inventories//clusterInfo.yaml at local host
```

## Verify ICM Contracts Are Successfully Set Up

To verify that the ICM contracts are successfully set up, let's send a couple of cross-chain messages:

```
avalanche teleporter msg C-Chain chain1 "Hello World" --cluster
Delivering message "this is a message" to source Avalanche L1 "C-Chain"
(2EfJg86if9Ka5Ag73JRfoqWz4EGuFwtemaNf4XiBBpUW4YngS6)
Waiting for message to be received at destination Avalanche L1 "chain1" (fqcM24LNb3kTV7KD1mAvUJXYy5XunwP8mrE44YuNwPjgZHY6p)
Message successfully Teleported!
```

```
avalanche teleporter msg chain2 chain1 "Hello World" --cluster
Delivering message "this is a message" to source Avalanche L1 "chain2" (29WP91AG7MqPUFEW2YwtKnsnzVrRsqcWUpoaoSV1Q9DboXGf4q)
Waiting for message to be received at destination Avalanche L1 "chain1" (fqcM24LNb3kTV7KD1mAvUJXYy5XunwP8mrE44YuNwPjgZHY6p)
Message successfully Teleported!
```

You have sent your first ICM message in the Devnet!

## Obtaining Information on ICM Contract Deploys

### Obtaining Avalanche L1 Information

By executing `blockchain describe` on an ICM contract-enabled Avalanche L1, the following relevant information can be found:

- Blockchain RPC URL
- Blockchain ID in cb58 format
- Blockchain ID in plain hex format
- Teleporter Messenger address
- Teleporter Registry address

Let's get the information for ``:

```
avalanche blockchain describe
  _____       _        _ _
 |  __ \     | |      (_) |
 | |  | | ___| |_ __ _ _| |___
 | |  | |/ _ \ __/ _` | | / __|
 | |__| |  __/ || (_| | | \__ \
 |_____/ \___|\__\__,_|_|_|___/
+--------------------------------+----------------------------------------------------------------------------------------+
| PARAMETER                      | VALUE |
+--------------------------------+----------------------------------------------------------------------------------------+
| Blockchain Name                | Avalanche L1 |
+--------------------------------+----------------------------------------------------------------------------------------+
| ChainID                        | 1 |
+--------------------------------+----------------------------------------------------------------------------------------+
| Token Name                     | TOKEN1 Token |
+--------------------------------+----------------------------------------------------------------------------------------+
| Token Symbol                   | TOKEN1 |
+--------------------------------+----------------------------------------------------------------------------------------+
| VM Version                     | v0.6.3 |
+--------------------------------+----------------------------------------------------------------------------------------+
| VM ID                          | srEXiWaHjFEgKSgK2zBgnWQUVEy2MZA7UUqjqmBSS7MZYSCQ5 |
+--------------------------------+----------------------------------------------------------------------------------------+
| Cluster SubnetID               | giY8tswWgZmcAWzPkoNrmjjrykited7GJ9799SsFzTiq5a1ML |
+--------------------------------+----------------------------------------------------------------------------------------+
| Cluster RPC URL                | http://67.202.23.231:9650/ext/bc/fqcM24LNb3kTV7KD1mAvUJXYy5XunwP8mrE44YuNwPjgZHY6p/rpc |
+--------------------------------+----------------------------------------------------------------------------------------+
| Cluster BlockchainID           | fqcM24LNb3kTV7KD1mAvUJXYy5XunwP8mrE44YuNwPjgZHY6p |
|                                | 0x582fc7bd55472606c260668213bf1b6d291df776c9edf7e042980a84cce7418a |
+--------------------------------+----------------------------------------------------------------------------------------+
| Cluster Teleporter             | 0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf |
| Messenger Address              | |
+--------------------------------+----------------------------------------------------------------------------------------+
| Cluster Teleporter             | 0xb623C4495220C603D0A939D32478F55891a61750 |
| Registry Address               | |
+--------------------------------+----------------------------------------------------------------------------------------+
...
```

### Obtaining C-Chain Information

Similar information can be found for the C-Chain by using `primary describe`:

```
avalanche primary describe --cluster
  _____       _____ _           _       _____
 / ____|     / ____| |         (_)     |  __ \
| |     _____| |    | |__   __ _ _ _ __| |__) |_ _ _ __ __ _ _ __ ___  ___
| |    |_____| |    | '_ \ / _` | | '_ \  ___/ _` | '__/ _` | '_ ` _ \/ __|
| |____      | |____| | | | (_| | | | | |  | (_| | | | (_| | | | | | \__ \
 \_____|      \_____|_| |_|\__,_|_|_| |_|_|  \__,_|_|  \__,_|_| |_| |_|___/
+------------------------------+--------------------------------------------------------------------+
| PARAMETER                    | VALUE |
+------------------------------+--------------------------------------------------------------------+
| RPC URL                      | http://67.202.23.231:9650/ext/bc/C/rpc |
+------------------------------+--------------------------------------------------------------------+
| EVM Chain ID                 | 43112 |
+------------------------------+--------------------------------------------------------------------+
| TOKEN SYMBOL                 | AVAX |
+------------------------------+--------------------------------------------------------------------+
| Address                      | 0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC |
+------------------------------+--------------------------------------------------------------------+
| Balance                      | 49999489.815751426 |
+------------------------------+--------------------------------------------------------------------+
| Private Key                  | 56289e99c94b6912bfc12adc093c9b51124f0dc54ac7a766b2bc5ccf558d8027 |
+------------------------------+--------------------------------------------------------------------+
| BlockchainID                 | 2EfJg86if9Ka5Ag73JRfoqWz4EGuFwtemaNf4XiBBpUW4YngS6 |
|                              | 0xa2b6b947cf2b9bf6df03c8caab08e38ab951d8b120b9c37265d9be01d86bb170 |
+------------------------------+--------------------------------------------------------------------+
| ICM Messenger Address        |
| 0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf |
+------------------------------+--------------------------------------------------------------------+
| ICM Registry Address         | 0x5DB9A7629912EBF95876228C24A848de0bfB43A9 |
+------------------------------+--------------------------------------------------------------------+
```

## Controlling Relayer Execution

The CLI provides two commands to remotely control Relayer execution:

```
avalanche interchain relayer stop --cluster
✓ Remote AWM Relayer on i-0f1815c016b555fcc successfully stopped
```

```
avalanche interchain relayer start --cluster
✓ Remote AWM Relayer on i-0f1815c016b555fcc successfully started
```

# ICM Contracts Avalanche L1s on Local Network (/docs/cross-chain/icm-contracts/icm-contracts-on-local-network)
---
title: ICM Contracts Avalanche L1s on Local Network
---

This how-to guide focuses on deploying ICM contract-enabled Avalanche L1s to a local Avalanche network.

After this tutorial, you will have created and deployed two Avalanche L1s to the local network and enabled them to cross-communicate with each other and with the local C-Chain (through ICM contracts and the underlying Warp technology).

Note that currently only [Subnet-EVM](https://github.com/ava-labs/subnet-evm) and [Subnet-EVM-based](/docs/avalanche-l1s/evm-configuration/evm-l1-customization) virtual machines support ICM contracts.
## Prerequisites

- [Avalanche-CLI installed](/docs/tooling/avalanche-cli)

## Create Avalanche L1 Configurations

Let's create an Avalanche L1 called `` with the latest Subnet-EVM version, a chain ID of 1, TOKEN1 as the token name, and the default Subnet-EVM parameters (more information on Avalanche L1 creation can be found [here](/docs/tooling/avalanche-cli#create-your-avalanche-l1-configuration)):

```
avalanche blockchain create --evm --latest \
  --evm-chain-id 1 --evm-token TOKEN1 --evm-defaults
creating genesis for 
configuring airdrop to stored key "subnet__airdrop" with address 0x0EF8151A3e6ad1d4e17C8ED4128b20EB5edc58B1
loading stored key "cli-teleporter-deployer" for teleporter deploys
  (evm address, genesis balance) = (0xE932784f56774879e03F3624fbeC6261154ec711, 600000000000000000000)
using latest teleporter version (v1.0.0)
✓ Successfully created Avalanche L1 configuration
```

Notice that, by default, ICM contracts are enabled, and a stored key is created to fund ICM contract related operations (that is, deploying the ICM smart contracts and funding the ICM Relayer).

To disable ICM contracts in your Avalanche L1, pass `--teleporter=false` when creating the Avalanche L1.

To disable the Relayer for your Avalanche L1, pass `--relayer=false` when creating the Avalanche L1.
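For reference, the deployer key's genesis balance in the output above is expressed in the chain's smallest base unit. Assuming the usual 18 EVM decimals, it works out to 600 whole tokens; a quick sanity check (the helper name `to_tokens` is just for illustration):

```python
# Convert an EVM base-unit balance to whole tokens. 18 decimals is the
# common EVM convention; adjust `decimals` if your chain differs.
from decimal import Decimal

def to_tokens(base_units: int, decimals: int = 18) -> Decimal:
    """Convert an integer base-unit balance to whole tokens."""
    return Decimal(base_units) / Decimal(10**decimals)

# The teleporter deployer's genesis balance shown in the output above:
print(to_tokens(600_000_000_000_000_000_000))  # 600
```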
Now let's create a second Avalanche L1 called ``, with similar settings:

```
avalanche blockchain create --evm --latest \
creating genesis for 
configuring airdrop to stored key "subnet__airdrop" with address 0x0EF815FFFF6ad1d4e17C8ED4128b20EB5edAABBB
loading stored key "cli-teleporter-deployer" for teleporter deploys
  (evm address, genesis balance) = (0xE932784f56774879e03F3624fbeC6261154ec711, 600000000000000000000)
using latest teleporter version (v1.0.0)
✓ Successfully created Avalanche L1 configuration
```

## Deploy the Avalanche L1s to Local Network

Let's deploy ``:

```
avalanche blockchain deploy --local
Deploying [] to Local Network
Backend controller started, pid: 149427, output at: ~/.avalanche-cli/runs/server_20240229_165923/avalanche-cli-backend.log
Booting Network. Wait until healthy...
Node logs directory: ~/.avalanche-cli/runs/network_20240229_165923/node/logs
Network ready to use.
Deploying Blockchain. Wait until network acknowledges...
Teleporter Messenger successfully deployed to c-chain (0xF7cBd95f1355f0d8d659864b92e2e9fbfaB786f7)
Teleporter Registry successfully deployed to c-chain (0x17aB05351fC94a1a67Bf3f56DdbB941aE6c63E25)
Teleporter Messenger successfully deployed to  (0xF7cBd95f1355f0d8d659864b92e2e9fbfaB786f7)
Teleporter Registry successfully deployed to  (0x9EDc4cB4E781413b1b82CC3A92a60131FC111F58)
Using latest awm-relayer version (v1.1.0)
Executing AWM-Relayer...
Blockchain ready to use.
Local network node endpoints:

+-------+----+------------------------------------------------------------------------------------+-----------------------------------+
| NODE  | VM | URL                                                                                | ALIAS URL                         |
+-------+----+------------------------------------------------------------------------------------+-----------------------------------+
| node1 |    | http://127.0.0.1:9650/ext/bc/MzN4AbtFzQ3eKqPhFaDpwCMJmagciWSCgghkZx6YeC6jRdvb6/rpc | http://127.0.0.1:9650/ext/bc//rpc |
+-------+----+------------------------------------------------------------------------------------+-----------------------------------+
| node2 |    | http://127.0.0.1:9652/ext/bc/MzN4AbtFzQ3eKqPhFaDpwCMJmagciWSCgghkZx6YeC6jRdvb6/rpc | http://127.0.0.1:9652/ext/bc//rpc |
+-------+----+------------------------------------------------------------------------------------+-----------------------------------+
| node3 |    | http://127.0.0.1:9654/ext/bc/MzN4AbtFzQ3eKqPhFaDpwCMJmagciWSCgghkZx6YeC6jRdvb6/rpc | http://127.0.0.1:9654/ext/bc//rpc |
+-------+----+------------------------------------------------------------------------------------+-----------------------------------+
| node4 |    | http://127.0.0.1:9656/ext/bc/MzN4AbtFzQ3eKqPhFaDpwCMJmagciWSCgghkZx6YeC6jRdvb6/rpc | http://127.0.0.1:9656/ext/bc//rpc |
+-------+----+------------------------------------------------------------------------------------+-----------------------------------+
| node5 |    | http://127.0.0.1:9658/ext/bc/MzN4AbtFzQ3eKqPhFaDpwCMJmagciWSCgghkZx6YeC6jRdvb6/rpc | http://127.0.0.1:9658/ext/bc//rpc |
+-------+----+------------------------------------------------------------------------------------+-----------------------------------+

Browser Extension connection details (any node URL from above works):

RPC URL: http://127.0.0.1:9650/ext/bc/MzN4AbtFzQ3eKqPhFaDpwCMJmagciWSCgghkZx6YeC6jRdvb6/rpc
Funded address:
0x0EF8151A3e6ad1d4e17C8ED4128b20EB5edc58B1 with 1000000 (10^18) - private key: 16289399c9466912ffffffdc093c9b51124f0dc54ac7a766b2bc5ccf558d8eee
Network name: 
Chain ID: 1
Currency Symbol: TOKEN1
```

Notice some details here:

- Two smart contracts are deployed to each Avalanche L1: Teleporter Messenger and Teleporter Registry
- Both ICM smart contracts are also deployed to the `C-Chain` of the Local Network
- The [AWM Relayer](https://github.com/ava-labs/icm-services/tree/main/relayer) is installed, configured, and executed in the background. (A Relayer [listens](/docs/cross-chain/teleporter/overview#data-flow) for new messages generated on a source Avalanche L1 and delivers them to the destination Avalanche L1.)

The CLI configures the Relayer to enable every Avalanche L1 to send messages to all other Avalanche L1s. If you add more Avalanche L1s, the Relayer is automatically reconfigured.

When deploying Avalanche L1 ``, the two ICM contracts will not be deployed to the C-Chain again, as they were already deployed alongside the first Avalanche L1.

```
avalanche blockchain deploy --local
Deploying [] to Local Network
Deploying Blockchain. Wait until network acknowledges...
Teleporter Messenger has already been deployed to c-chain
Teleporter Messenger successfully deployed to  (0xF7cBd95f1355f0d8d659864b92e2e9fbfaB786f7)
Teleporter Registry successfully deployed to  (0x9EDc4cB4E781413b1b82CC3A92a60131FC111F58)
Using latest awm-relayer version (v1.1.0)
Executing AWM-Relayer...
Blockchain ready to use.
Local network node endpoints:

+-------+----+-------------------------------------------------------------------------------------+-----------------------------------+
| NODE  | VM | URL                                                                                 | ALIAS URL                         |
+-------+----+-------------------------------------------------------------------------------------+-----------------------------------+
| node1 |    | http://127.0.0.1:9650/ext/bc/2tVGwEQmeXtdnFURW1YSq5Yf4jbJPfTBfVcu68KWHdHe5e5gX5/rpc | http://127.0.0.1:9650/ext/bc//rpc |
+-------+----+-------------------------------------------------------------------------------------+-----------------------------------+
| node1 |    | http://127.0.0.1:9650/ext/bc/MzN4AbtFzQ3eKqPhFaDpwCMJmagciWSCgghkZx6YeC6jRdvb6/rpc  | http://127.0.0.1:9650/ext/bc//rpc |
+-------+----+-------------------------------------------------------------------------------------+-----------------------------------+
| node2 |    | http://127.0.0.1:9652/ext/bc/2tVGwEQmeXtdnFURW1YSq5Yf4jbJPfTBfVcu68KWHdHe5e5gX5/rpc | http://127.0.0.1:9652/ext/bc//rpc |
+-------+----+-------------------------------------------------------------------------------------+-----------------------------------+
| node2 |    | http://127.0.0.1:9652/ext/bc/MzN4AbtFzQ3eKqPhFaDpwCMJmagciWSCgghkZx6YeC6jRdvb6/rpc  | http://127.0.0.1:9652/ext/bc//rpc |
+-------+----+-------------------------------------------------------------------------------------+-----------------------------------+
| node3 |    | http://127.0.0.1:9654/ext/bc/2tVGwEQmeXtdnFURW1YSq5Yf4jbJPfTBfVcu68KWHdHe5e5gX5/rpc | http://127.0.0.1:9654/ext/bc//rpc |
+-------+----+-------------------------------------------------------------------------------------+-----------------------------------+
| node3 |    | http://127.0.0.1:9654/ext/bc/MzN4AbtFzQ3eKqPhFaDpwCMJmagciWSCgghkZx6YeC6jRdvb6/rpc  | http://127.0.0.1:9654/ext/bc//rpc |
+-------+----+-------------------------------------------------------------------------------------+-----------------------------------+
| node4 |    | http://127.0.0.1:9656/ext/bc/2tVGwEQmeXtdnFURW1YSq5Yf4jbJPfTBfVcu68KWHdHe5e5gX5/rpc | http://127.0.0.1:9656/ext/bc//rpc |
+-------+----+-------------------------------------------------------------------------------------+-----------------------------------+
| node4 |    | http://127.0.0.1:9656/ext/bc/MzN4AbtFzQ3eKqPhFaDpwCMJmagciWSCgghkZx6YeC6jRdvb6/rpc  | http://127.0.0.1:9656/ext/bc//rpc |
+-------+----+-------------------------------------------------------------------------------------+-----------------------------------+
| node5 |    | http://127.0.0.1:9658/ext/bc/MzN4AbtFzQ3eKqPhFaDpwCMJmagciWSCgghkZx6YeC6jRdvb6/rpc  | http://127.0.0.1:9658/ext/bc//rpc |
+-------+----+-------------------------------------------------------------------------------------+-----------------------------------+
| node5 |    | http://127.0.0.1:9658/ext/bc/2tVGwEQmeXtdnFURW1YSq5Yf4jbJPfTBfVcu68KWHdHe5e5gX5/rpc | http://127.0.0.1:9658/ext/bc//rpc |
+-------+----+-------------------------------------------------------------------------------------+-----------------------------------+

Browser Extension connection details (any node URL from above works):

RPC URL: http://127.0.0.1:9650/ext/bc/2tVGwEQmeXtdnFURW1YSq5Yf4jbJPfTBfVcu68KWHdHe5e5gX5/rpc
Funded address: 0x0EF815FFFF6ad1d4e17C8ED4128b20EB5edAABBB with 1000000 (10^18) - private key: 56289e99c94b6912bfc12adc093c9b51124f0dc54ac7a766b2bc5ccf558d8027
Network name: 
Chain ID: 2
Currency Symbol: TOKEN2
```

## Verify ICM Contracts Are Successfully Set Up

To verify that the ICM contracts are successfully set up, let's send a couple of cross-chain messages:

```
avalanche teleporter msg C-Chain chain1 "Hello World" --local
Delivering message "this is a message" to source Avalanche L1 "C-Chain"
Waiting for message to be received at
destination Avalanche L1 "chain1"
Message successfully Teleported!
```

```
avalanche teleporter msg chain2 chain1 "Hello World" --local
Delivering message "this is a message" to source Avalanche L1 "chain2"
Waiting for message to be received at destination Avalanche L1 "chain1"
Message successfully Teleported!
```

You have sent your first ICM message in the Local Network!

Relayer logs can be found at `~/.avalanche-cli/runs/awm-relayer.log`, and the Relayer configuration can be found at `~/.avalanche-cli/runs/awm-relayer-config.json`.

## Obtaining Information on ICM Contract Deploys

### Obtaining Avalanche L1 Information

By executing `blockchain describe` on an ICM contract-enabled Avalanche L1, the following relevant information can be found:

- Blockchain RPC URL
- Blockchain ID in cb58 format
- Blockchain ID in plain hex format
- Teleporter Messenger address
- Teleporter Registry address

Let's get the information for ``:

```
avalanche blockchain describe
  _____       _        _ _
 |  __ \     | |      (_) |
 | |  | | ___| |_ __ _ _| |___
 | |  | |/ _ \ __/ _` | | / __|
 | |__| |  __/ || (_| | | \__ \
 |_____/ \___|\__\__,_|_|_|___/
+--------------------------------+-------------------------------------------------------------------------------------+
| PARAMETER                      | VALUE |
+--------------------------------+-------------------------------------------------------------------------------------+
| Avalanche L1 Name              | chain1 |
+--------------------------------+-------------------------------------------------------------------------------------+
| ChainID                        | 1 |
+--------------------------------+-------------------------------------------------------------------------------------+
| Token Name                     | TOKEN1 Token |
+--------------------------------+-------------------------------------------------------------------------------------+
| Token Symbol                   | TOKEN1 |
+--------------------------------+-------------------------------------------------------------------------------------+
| VM Version                     | v0.6.3 |
+--------------------------------+-------------------------------------------------------------------------------------+
| VM ID                          | srEXiWaHjFEgKSgK2zBgnWQUVEy2MZA7UUqjqmBSS7MZYSCQ5 |
+--------------------------------+-------------------------------------------------------------------------------------+
| Local Network SubnetID         | 2CZP2ndbQnZxTzGuZjPrJAm5b4s2K2Bcjh8NqWoymi8NZMLYQk |
+--------------------------------+-------------------------------------------------------------------------------------+
| Local Network RPC URL          | http://127.0.0.1:9650/ext/bc/2cFWSgGkmRrmKtbPkB8yTpnq9ykK3Dc2qmxphwYtiGXCvnSwg8/rpc |
+--------------------------------+-------------------------------------------------------------------------------------+
| Local Network BlockchainID     | 2cFWSgGkmRrmKtbPkB8yTpnq9ykK3Dc2qmxphwYtiGXCvnSwg8 |
|                                | 0xd3bc5f71e6946d17c488d320cd1f6f5337d9dce75b3fac5023433c4634b6e91e |
+--------------------------------+-------------------------------------------------------------------------------------+
| Local Network Teleporter       | 0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf |
| Messenger Address              | |
+--------------------------------+-------------------------------------------------------------------------------------+
| Local Network Teleporter       | 0xbD9e8eC38E43d34CAB4194881B9BF39d639D7Bd3 |
| Registry Address               | |
+--------------------------------+-------------------------------------------------------------------------------------+
...
```

### Obtaining C-Chain Information

Similar information can be found for the C-Chain by using `primary describe`:

```
avalanche primary describe --local
  _____       _____ _           _       _____
 / ____|     / ____| |         (_)     |  __ \
| |     _____| |    | |__   __ _ _ _ __| |__) |_ _ _ __ __ _ _ __ ___  ___
| |    |_____| |    | '_ \ / _` | | '_ \  ___/ _` | '__/ _` | '_ ` _ \/ __|
| |____      | |____| | | | (_| | | | | |  | (_| | | | (_| | | | | | \__ \
 \_____|      \_____|_| |_|\__,_|_|_| |_|_|  \__,_|_|  \__,_|_| |_| |_|___/
+------------------------------+--------------------------------------------------------------------+
| PARAMETER                    | VALUE |
+------------------------------+--------------------------------------------------------------------+
| RPC URL                      | http://127.0.0.1:9650/ext/bc/C/rpc |
+------------------------------+--------------------------------------------------------------------+
| EVM Chain ID                 | 43112 |
+------------------------------+--------------------------------------------------------------------+
| TOKEN SYMBOL                 | AVAX |
+------------------------------+--------------------------------------------------------------------+
| Address                      | 0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC |
+------------------------------+--------------------------------------------------------------------+
| Balance                      | 49999489.829989485 |
+------------------------------+--------------------------------------------------------------------+
| Private Key                  | 56289e99c94b6912bfc12adc093c9b51124f0dc54ac7a766b2bc5ccf558d8027 |
+------------------------------+--------------------------------------------------------------------+
| BlockchainID                 | 2JeJDKL9Bvn1vLuuPL1DpUccBCVUh7iRnkv3a5pV9kJW5HbuQz |
|                              | 0xabc1bd35cb7313c8a2b62980172e6d7ef42aaa532c870499a148858b0b6a34fd |
+------------------------------+--------------------------------------------------------------------+
| ICM Messenger Address        |
| 0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf |
+------------------------------+--------------------------------------------------------------------+
| ICM Registry Address         | 0x17aB05351fC94a1a67Bf3f56DdbB941aE6c63E25 |
+------------------------------+--------------------------------------------------------------------+
```

## Controlling Relayer Execution

Besides the option to opt out of the Relayer at Avalanche L1 creation time, the Relayer can be stopped and restarted on user request.

To stop the Relayer:

```
avalanche interchain relayer stop --local
✓ Local AWM Relayer successfully stopped
```

To start it again:

```
avalanche interchain relayer start --local
using latest awm-relayer version (v1.1.0)
Executing AWM-Relayer...
✓ Local AWM Relayer successfully started
Logs can be found at ~/.avalanche-cli/runs/awm-relayer.log
```

# What is ICM Contracts? (/docs/cross-chain/icm-contracts/overview)
---
title: "What is ICM Contracts?"
description: "ICM Contracts is a messaging protocol built on top of Avalanche Interchain Messaging that provides a developer-friendly interface for sending and receiving cross-chain messages from the EVM."
edit_url: https://github.com/ava-labs/icm-contracts/edit/main/contracts/teleporter/README.md
---

# ICM Protocol

- [Overview](#overview)
- [Data Flow](#data-flow)
- [Properties](#properties)
- [Fees](#fees)
- [Message Receipts and Fee Redemption](#message-receipts-and-fee-redemption)
- [Required Interface](#required-interface)
- [Message Delivery and Execution](#message-delivery-and-execution)
- [Resending a Message](#resending-a-message)
- [TeleporterMessenger Contract Deployment](#teleportermessenger-contract-deployment)
- [Deployed Addresses](#deployed-addresses)
- [A Note on Versioning](#a-note-on-versioning)
- [Upgradability](#upgradability)
- [Deploy TeleporterMessenger to an L1](#deploy-teleportermessenger-to-an-avalanche-l1)
- [Deploy TeleporterRegistry to an L1](#deploy-teleporterregistry-to-an-avalanche-l1)
- [Verify a Deployment of TeleporterMessenger](#verify-a-deployment-of-teleportermessenger)

> **Note on Terminology:** In this documentation, **ICM Contract** refers to any smart contract that interfaces with Avalanche's native Interchain Messaging (ICM) protocol. **Teleporter** (specifically `TeleporterMessenger`) is one such implementation: a production-ready, developer-friendly ICM Contract provided in this repository. The underlying ICM protocol is extensible, and developers are free to build their own custom ICM Contracts tailored to specific use cases. Teleporter serves as a reference implementation and a convenient abstraction layer for most cross-chain communication needs.

## Overview

`TeleporterMessenger` is a smart contract that serves as the interface for ICM contracts to [Avalanche Interchain Messaging (ICM)](https://build.avax.network/academy/interchain-messaging/04-icm-basics/01-icm-basics). It provides a mechanism to asynchronously invoke smart contract functions on other EVM L1s within Avalanche.
`TeleporterMessenger` provides a handful of useful features on top of ICM, such as specifying relayer incentives for message delivery, replay protection, message delivery and execution retries, and a standard interface for sending and receiving messages within a dApp deployed across multiple Avalanche L1s.

The `TeleporterMessenger` contract is a user-friendly interface to ICM, aimed at dApp developers. All of the message signing and verification is abstracted away from developers. Instead, developers simply call `sendCrossChainMessage` on the `TeleporterMessenger` contract to send a message invoking a smart contract on another Avalanche L1, and implement the `ITeleporterReceiver` interface to receive messages on the destination Avalanche L1. `TeleporterMessenger` handles all of the ICM message construction and sending, as well as the message delivery and execution.

To get started with using `TeleporterMessenger`, see [How to Deploy ICM Enabled Avalanche L1s on a Local Network](https://build.avax.network/docs/tooling/cross-chain/teleporter-local-network).

The `ITeleporterMessenger` interface provides two primary methods:

- `sendCrossChainMessage`: called by contracts on the origin chain to initiate the sending of a message to a contract on another EVM instance.
- `receiveCrossChainMessage`: called by cross-chain relayers on the destination chain to deliver signed messages to the destination EVM instance.

The `ITeleporterReceiver` interface provides a single method. All contracts that wish to receive ICM messages on the destination chain must implement this interface:

- `receiveTeleporterMessage`: called by `TeleporterMessenger` on the destination chain to deliver a message to the destination contract.

> Note: If a contract does not implement `ITeleporterReceiver`, but instead implements a [fallback](https://docs.soliditylang.org/en/latest/contracts.html#fallback-function) function, the fallback function will be called when `TeleporterMessenger` attempts to perform message execution.
> The message execution is marked as failed if the fallback function reverts; otherwise, it is marked as successfully executed.

## Data Flow
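The data-flow diagram itself is not reproduced here, but the send, relay, receive, and execute path can be sketched as a plain-Python simulation. All names below (`Chain`, `relay`, the string addresses) are illustrative stand-ins for the Solidity contracts, and the Warp signature aggregation step is omitted entirely; this is a mental model, not the contract API.

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class Message:
    """A simplified Teleporter message: origin, sender, destination, payload."""
    origin_chain: str
    sender: str
    destination: str
    payload: bytes

    @property
    def message_id(self) -> str:
        # Stand-in for the on-chain message ID derivation.
        data = f"{self.origin_chain}:{self.sender}:{self.destination}".encode() + self.payload
        return hashlib.sha256(data).hexdigest()

@dataclass
class Chain:
    name: str
    outbox: list = field(default_factory=list)    # messages awaiting a relayer
    delivered: set = field(default_factory=set)   # replay protection
    receivers: dict = field(default_factory=dict) # address -> handler callable

    def send_cross_chain_message(self, sender: str, destination: str, payload: bytes) -> Message:
        # sendCrossChainMessage analogue: record the message for relaying.
        msg = Message(self.name, sender, destination, payload)
        self.outbox.append(msg)
        return msg

    def receive_cross_chain_message(self, msg: Message) -> None:
        # receiveCrossChainMessage analogue: a message ID executes only once.
        if msg.message_id in self.delivered:
            raise ValueError("message already delivered")
        self.delivered.add(msg.message_id)
        # receiveTeleporterMessage analogue: dispatch to the destination contract.
        self.receivers[msg.destination](msg.sender, msg.payload)

def relay(source: Chain, dest: Chain) -> None:
    """A relayer picks up pending messages and delivers them to the destination."""
    while source.outbox:
        dest.receive_cross_chain_message(source.outbox.pop(0))

# Usage: a dApp on chain A invokes a contract on chain B.
a, b = Chain("A"), Chain("B")
received = []
b.receivers["0xdapp"] = lambda sender, payload: received.append((sender, payload))
sent = a.send_cross_chain_message("0xalice", "0xdapp", b"hello")
relay(a, b)
```

Attempting to deliver the same message a second time raises, which mirrors the replay protection described under Properties; real ICM additionally requires a valid aggregate signature from the source L1's validator set, which this sketch leaves out.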
## Properties

`TeleporterMessenger` provides a handful of useful properties to cross-chain applications that ICM messages do not provide by default. These include:

1. Replay protection: `TeleporterMessenger` ensures that a cross-chain message is not delivered multiple times.
2. Retries: In certain edge cases when there is significant validator churn, it is possible for an ICM message to be dropped before a valid aggregate signature is created for it. `TeleporterMessenger` ensures that messages can still be delivered even in this event by allowing for retries of previously submitted messages.
3. Relay incentivization: `TeleporterMessenger` provides a mechanism for messages to optionally incentivize relayers to perform the necessary signature aggregation and pay the transaction fee to broadcast the signed message on the destination chain.
4. Allowed relayers: `TeleporterMessenger` allows users to specify a list of `allowedRelayerAddresses`, where only the specified addresses can relay and deliver the `TeleporterMessenger` message. Leaving this list empty allows all relayers to deliver.
5. Message execution: `TeleporterMessenger` enables cross-chain messages to have a direct effect on their destination chain by using `evm.Call()` to invoke the `receiveTeleporterMessage` function of destination contracts that implement the `ITeleporterReceiver` interface.

## Fees

Fees can be paid on a per-message basis by specifying the ERC20 asset and amount to be used to incentivize a relayer to deliver the message in the call to `sendCrossChainMessage`. The fee amount is transferred into the control of `TeleporterMessenger` (i.e. locked) before the ICM message is sent. `TeleporterMessenger` tracks the fee amount for each message ID it creates. When it subsequently receives a message back from the destination chain of the original message, the new message will have a list of receipts identifying the relayer that delivered the given message ID.
At this point, the fee amount originally locked by `TeleporterMessenger` for the given message will be redeemable by the relayer identified in the receipt. If the initial fee amount was not sufficient to incentivize a relayer, it can be increased by calling `addFeeAmount`.

### Message Receipts and Fee Redemption

In order to confirm delivery of a `TeleporterMessenger` message from a source chain to a destination chain, a receipt is included in the next `TeleporterMessenger` message sent in the opposite direction, from the destination chain back to the source chain. This receipt contains the message ID of the original message, as well as the reward address that the delivering relayer specified. That reward address is then able to redeem the corresponding reward on the original chain by calling `redeemRelayerRewards`.

The following example illustrates this flow:

- A `TeleporterMessenger` message is sent from Chain A to Chain B, with a relayer incentive of `10` `USDC`. This message is assigned the ID `1` by the `TeleporterMessenger` contract on Chain A.
  - On Chain A, this is done by calling `sendCrossChainMessage`, and providing the `USDC` contract address and amount in the function call.
- A relayer delivers the message on Chain B by calling `receiveCrossChainMessage` and providing its address, `0x123...`
  - The `TeleporterMessenger` contract on Chain B stores the relayer address in a receipt for the message ID.
- Some time later, a separate `TeleporterMessenger` message is sent from Chain B to Chain A. The `TeleporterMessenger` contract on Chain B includes the receipt for the original message in this new message.
- When this new message is delivered on Chain A, the `TeleporterMessenger` contract on Chain A reads the receipt and attributes the rewards for delivering the original message (message ID `1`) to the address `0x123...`.
- Address `0x123...` may now call `redeemRelayerRewards` on Chain A, which transfers the `10` `USDC` to its address.
If it tries to do this before the receipt is received on Chain A, the call will fail.

It is possible for receipts to get "stuck" on the destination chain in the event that `TeleporterMessenger` traffic between two chains is skewed in one direction. In such a scenario, incoming messages on one chain may cause the rate at which receipts are generated to outpace the rate at which they are sent back to the other chain. To mitigate this, the method `sendSpecifiedReceipts` can be called to immediately send the receipts associated with the given message IDs back to the original chain.

## Required Interface

`TeleporterMessenger` messages are delivered by calling the `receiveTeleporterMessage` function defined by the `ITeleporterReceiver` interface. Contracts must implement this interface in order to be able to receive messages. The first two parameters of `receiveTeleporterMessage` identify the original sender of the given message on the origin chain and are set by the `TeleporterMessenger`. The third parameter to `receiveTeleporterMessage` is the raw message payload.

Applications using `TeleporterMessenger` are responsible for defining the exact format of this payload in a way that can be decoded on the receiving end. For example, applications may encode an action enum value along with the target method parameters on the sending side, then decode this data and route to the target method within `receiveTeleporterMessage`. See `ERC20Bridge.sol` for an example of this approach.

## Message Delivery and Execution

`TeleporterMessenger` is able to ensure that messages are considered delivered even if their execution fails (i.e. reverts) by using `evm.Call()` with a pre-defined gas limit to execute the message payload. This gas limit is specified by each message in the call to `sendCrossChainMessage`. Relayers must provide at least enough gas for the sub-call in addition to the standard gas used by a call to `receiveCrossChainMessage`.
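The encode-an-action-and-route payload pattern described under Required Interface above can be sketched in Python. A Solidity contract such as `ERC20Bridge.sol` would use ABI encoding; this sketch substitutes JSON to stay self-contained, and the `Action` enum and `mint`/`burn` handlers are illustrative, not part of the actual contracts.

```python
import json
from enum import IntEnum

class Action(IntEnum):
    # Illustrative action enum packed into the payload by the sender.
    MINT = 0
    BURN = 1

def encode_payload(action: Action, **params) -> bytes:
    # Sending side: pack the action value with the target method parameters.
    return json.dumps({"action": int(action), "params": params}).encode()

def mint(to: str, amount: int) -> str:
    return f"minted {amount} to {to}"

def burn(frm: str, amount: int) -> str:  # "frm" avoids the Python keyword "from"
    return f"burned {amount} from {frm}"

def receive_teleporter_message(origin_chain: str, origin_sender: str, message: bytes) -> str:
    # Receiving side: decode the raw payload and route to the target method,
    # as a receiveTeleporterMessage implementation would.
    decoded = json.loads(message)
    action, params = Action(decoded["action"]), decoded["params"]
    if action is Action.MINT:
        return mint(params["to"], params["amount"])
    if action is Action.BURN:
        return burn(params["frm"], params["amount"])
    raise ValueError(f"unknown action: {action}")

# Usage: the origin dApp encodes, the destination contract decodes and routes.
payload = encode_payload(Action.MINT, to="0xabc", amount=100)
result = receive_teleporter_message("chainA", "0xbridge", payload)
```

The first two arguments (origin chain and sender) are ignored here, but a real receiver would typically check them before acting on the payload.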
In the event that a message execution runs out of gas or reverts for any other reason, the hash of the message payload is stored by the receiving `TeleporterMessenger` contract instance. This allows the message execution to be retried in the future, possibly with a higher gas limit, by calling `retryMessageExecution`. Importantly, a message is still considered delivered on its destination chain even if its execution fails. This allows the relayer of the message to redeem their reward for delivering the message, because they have no control over whether its execution will succeed, so long as they provide sufficient gas to meet the specified `requiredGasLimit`.

Note that due to [EIP-150](https://eips.ethereum.org/EIPS/eip-150), the lesser of 63/64ths of the remaining gas and the `requiredGasLimit` will be provided to the code executed using `evm.Call()`. This creates an edge case where sufficient gas is provided by the relayer at the time of the `requiredGasLimit` check, but less than the `requiredGasLimit` is provided for the message execution. In such a case, the message execution may fail due to having less than the `requiredGasLimit` available, but the message would still be considered received. Such a case is only possible if the remaining 1/64th of the `requiredGasLimit` is sufficient for executing the remaining logic of `receiveCrossChainMessage`, so that the top-level transaction does not also revert. Based on the current implementation, a message must have a `requiredGasLimit` of over 1,200,000 gas for this to be possible. To avoid this case entirely, it is recommended that applications sending `TeleporterMessenger` messages add a buffer to the `requiredGasLimit` such that 63/64ths of the value passed is sufficient for the message execution.

## Resending a Message

If the sending Avalanche L1's validator set changes, then it's possible for the receiving Avalanche L1 to reject the underlying ICM message due to insufficient signing stake.
For example, suppose L1 A has 5 validators with equal stake weight who all sign a `TeleporterMessenger` message sent to L1 B. 100% of L1 A's stake has signed the message. Also suppose L1 B requires 67% of the sending L1's stake to have signed a given ICM message in order for it to be accepted. Before the message can be delivered, however, 5 _more_ validators are added to L1 A's validator set (all with the same stake weight as the original validators), meaning that the `TeleporterMessenger` message was signed by _only 50%_ of L1 A's stake. L1 B will reject this message.

Once sent on chain, ICM messages cannot be re-signed by a new validator set in such a scenario. ICM Contracts, however, do support re-signing via the function `retrySendCrossChainMessage`, which can be called for any message that has not been acknowledged as delivered to its destination. Under the hood, this packages the `TeleporterMessenger` message into a brand new ICM message that is re-signed by the current validator set.

## TeleporterMessenger Contract Deployment

**Do not deploy the `TeleporterMessenger` contract using `forge create`**. The `TeleporterMessenger` contract must be deployed to the same contract address on every chain. To achieve this, the contract can be deployed using a static transaction that uses Nick's method, as documented in [this guide](https://github.com/ava-labs/icm-contracts/blob/main/utils/contract-deployment/README.md). Alternatively, if creating a new L1, the contract can be pre-allocated with the proper address and state in the new chain's [genesis file](https://build.avax.network/docs/virtual-machines/custom-precompiles#setting-the-genesis-allocation).

As an example, to include `TeleporterMessenger` `v1.0.0` in the genesis file, include the following values in the `alloc` settings, as documented at the link above. The `storage` values included below correspond to the two contract values that are initialized as part of the default constructor of `TeleporterMessenger`.
These are the `ReentrancyGuard` values set in this [abstract contract](https://github.com/ava-labs/icm-contracts/blob/main/contracts/utilities/ReentrancyGuards.sol). Future versions of `TeleporterMessenger` may require different storage value initializations. ```json "alloc": { "0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf": { "balance": "0x0", "code": "0x608060405234801561001057600080fd5b506004361061014d5760003560e01c8063a8898181116100c3578063df20e8bc1161007c578063df20e8bc1461033b578063e69d606a1461034e578063e6e67bd5146103b6578063ebc3b1ba146103f2578063ecc7042814610415578063fc2d61971461041e57600080fd5b8063a8898181146102b2578063a9a85614146102c5578063b771b3bc146102d8578063c473eef8146102e6578063ccb5f8091461031f578063d127dc9b1461033257600080fd5b8063399b77da11610115578063399b77da1461021957806362448850146102395780638245a1b01461024c578063860a3b061461025f578063892bf4121461027f5780638ac0fd041461029f57600080fd5b80630af5b4ff1461015257806322296c3a1461016d5780632bc8b0bf146101825780632ca40f55146101955780632e27c223146101ee575b600080fd5b61015a610431565b6040519081526020015b60405180910390f35b61018061017b366004612251565b610503565b005b61015a61019036600461226e565b6105f8565b6101e06101a336600461226e565b6005602090815260009182526040918290208054835180850190945260018201546001600160a01b03168452600290910154918301919091529082565b604051610164929190612287565b6102016101fc36600461226e565b610615565b6040516001600160a01b039091168152602001610164565b61015a61022736600461226e565b60009081526005602052604090205490565b61015a6102473660046122ae565b61069e565b61018061025a366004612301565b6106fc565b61015a61026d36600461226e565b60066020526000908152604090205481565b61029261028d366004612335565b6108a7565b6040516101649190612357565b6101806102ad366004612377565b6108da565b61015a6102c03660046123af565b610b19565b61015a6102d3366004612426565b610b5c565b6102016005600160991b0181565b61015a6102f43660046124be565b6001600160a01b03918216600090815260096020908152604080832093909416825291909152205490565b61018061032d3660046124f7565b610e03565b61015a
60025481565b61015a61034936600461226e565b61123d565b61039761035c36600461226e565b600090815260056020908152604091829020825180840190935260018101546001600160a01b03168084526002909101549290910182905291565b604080516001600160a01b039093168352602083019190915201610164565b6103dd6103c436600461226e565b6004602052600090815260409020805460019091015482565b60408051928352602083019190915201610164565b61040561040036600461226e565b611286565b6040519015158152602001610164565b61015a60035481565b61018061042c36600461251e565b61129c565b600254600090806104fe576005600160991b016001600160a01b0316634213cf786040518163ffffffff1660e01b8152600401602060405180830381865afa158015610481573d6000803e3d6000fd5b505050506040513d601f19601f820116820180604052508101906104a59190612564565b9050806104cd5760405162461bcd60e51b81526004016104c49061257d565b60405180910390fd5b600281905560405181907f1eac640109dc937d2a9f42735a05f794b39a5e3759d681951d671aabbce4b10490600090a25b919050565b3360009081526009602090815260408083206001600160a01b0385168452909152902054806105855760405162461bcd60e51b815260206004820152602860248201527f54656c65706f727465724d657373656e6765723a206e6f2072657761726420746044820152676f2072656465656d60c01b60648201526084016104c4565b3360008181526009602090815260408083206001600160a01b03871680855290835281842093909355518481529192917f3294c84e5b0f29d9803655319087207bc94f4db29f7927846944822773780b88910160405180910390a36105f46001600160a01b03831633836114f7565b5050565b600081815260046020526040812061060f9061155f565b92915050565b6000818152600760205260408120546106825760405162461bcd60e51b815260206004820152602960248201527f54656c65706f727465724d657373656e6765723a206d657373616765206e6f74604482015268081c9958d95a5d995960ba1b60648201526084016104c4565b506000908152600860205260409020546001600160a01b031690565b60006001600054146106c25760405162461bcd60e51b81526004016104c4906125c4565b60026000556106f16106d383612804565b833560009081526004602052604090206106ec90611572565b61167c565b600160005592915050565b60016000541461071e5760405162461bcd60e51b81526004016104c4906125c456
5b6002600081815590546107379060408401358435610b19565b6000818152600560209081526040918290208251808401845281548152835180850190945260018201546001600160a01b03168452600290910154838301529081019190915280519192509061079f5760405162461bcd60e51b81526004016104c4906128a7565b6000836040516020016107b29190612b42565b60408051601f19818403018152919052825181516020830120919250146107eb5760405162461bcd60e51b81526004016104c490612b55565b8360400135837f2a211ad4a59ab9d003852404f9c57c690704ee755f3c79d2c2812ad32da99df8868560200151604051610826929190612b9e565b60405180910390a360405163ee5b48eb60e01b81526005600160991b019063ee5b48eb90610858908490600401612c23565b6020604051808303816000875af1158015610877573d6000803e3d6000fd5b505050506040513d601f19601f8201168201806040525081019061089b9190612564565b50506001600055505050565b604080518082019091526000808252602082015260008381526004602052604090206108d390836118bc565b9392505050565b6001600054146108fc5760405162461bcd60e51b81526004016104c4906125c4565b600260005560018054146109225760405162461bcd60e51b81526004016104c490612c36565b60026001558061098c5760405162461bcd60e51b815260206004820152602f60248201527f54656c65706f727465724d657373656e6765723a207a65726f2061646469746960448201526e1bdb985b0819995948185b5bdd5b9d608a1b60648201526084016104c4565b6001600160a01b0382166109b25760405162461bcd60e51b81526004016104c490612c7b565b6000838152600560205260409020546109dd5760405162461bcd60e51b81526004016104c4906128a7565b6000838152600560205260409020600101546001600160a01b03838116911614610a6f5760405162461bcd60e51b815260206004820152603760248201527f54656c65706f727465724d657373656e6765723a20696e76616c69642066656560448201527f20617373657420636f6e7472616374206164647265737300000000000000000060648201526084016104c4565b6000610a7b8383611981565b600085815260056020526040812060020180549293508392909190610aa1908490612ce5565b909155505060008481526005602052604090819020905185917fc1bfd1f1208927dfbd414041dcb5256e6c9ad90dd61aec3249facbd34ff7b3e191610b03916001019081546001600160a01b0316815260019190910154602082015260400190565b6040
5180910390a2505060018080556000555050565b60408051306020820152908101849052606081018390526080810182905260009060a0016040516020818303038152906040528051906020012090509392505050565b6000600160005414610b805760405162461bcd60e51b81526004016104c4906125c4565b60026000818155905490866001600160401b03811115610ba257610ba2612607565b604051908082528060200260200182016040528015610be757816020015b6040805180820190915260008082526020820152815260200190600190039081610bc05790505b5090508660005b81811015610d6c5760008a8a83818110610c0a57610c0a612cf8565b90506020020135905060006007600083815260200190815260200160002054905080600003610c8a5760405162461bcd60e51b815260206004820152602660248201527f54656c65706f727465724d657373656e6765723a2072656365697074206e6f7460448201526508199bdd5b9960d21b60648201526084016104c4565b610c958d8783610b19565b8214610d095760405162461bcd60e51b815260206004820152603a60248201527f54656c65706f727465724d657373656e6765723a206d6573736167652049442060448201527f6e6f742066726f6d20736f7572636520626c6f636b636861696e00000000000060648201526084016104c4565b6000828152600860209081526040918290205482518084019093528383526001600160a01b03169082018190528651909190879086908110610d4d57610d4d612cf8565b602002602001018190525050505080610d6590612d0e565b9050610bee565b506040805160c0810182528b815260006020820152610df0918101610d96368b90038b018b612d27565b8152602001600081526020018888808060200260200160405190810160405280939291908181526020018383602002808284376000920182905250938552505060408051928352602080840190915290920152508361167c565b60016000559a9950505050505050505050565b6001805414610e245760405162461bcd60e51b81526004016104c490612c36565b60026001556040516306f8253560e41b815263ffffffff8316600482015260009081906005600160991b0190636f82535090602401600060405180830381865afa158015610e76573d6000803e3d6000fd5b505050506040513d6000823e601f3d908101601f19168201604052610e9e9190810190612da3565b9150915080610f015760405162461bcd60e51b815260206004820152602960248201527f54656c65706f727465724d657373656e6765723a20696e76616c69642077617260448201526870206d6573
7361676560b81b60648201526084016104c4565b60208201516001600160a01b03163014610f785760405162461bcd60e51b815260206004820152603260248201527f54656c65706f727465724d657373656e6765723a20696e76616c6964206f726960448201527167696e2073656e646572206164647265737360701b60648201526084016104c4565b60008260400151806020019051810190610f929190612f40565b90506000610f9e610431565b90508082604001511461100d5760405162461bcd60e51b815260206004820152603160248201527f54656c65706f727465724d657373656e6765723a20696e76616c6964206465736044820152701d1a5b985d1a5bdb8818da185a5b881251607a1b60648201526084016104c4565b8351825160009161101f918490610b19565b600081815260076020526040902054909150156110945760405162461bcd60e51b815260206004820152602d60248201527f54656c65706f727465724d657373656e6765723a206d65737361676520616c7260448201526c1958591e481c9958d95a5d9959609a1b60648201526084016104c4565b6110a2338460a00151611ae9565b6111005760405162461bcd60e51b815260206004820152602960248201527f54656c65706f727465724d657373656e6765723a20756e617574686f72697a6560448201526832103932b630bcb2b960b91b60648201526084016104c4565b61110e818460000151611b61565b6001600160a01b0386161561114557600081815260086020526040902080546001600160a01b0319166001600160a01b0388161790555b60c08301515160005b81811015611192576111828488600001518760c00151848151811061117557611175612cf8565b6020026020010151611bd3565b61118b81612d0e565b905061114e565b50604080518082018252855181526001600160a01b038916602080830191909152885160009081526004909152919091206111cc91611cfb565b336001600160a01b03168660000151837f292ee90bbaf70b5d4936025e09d56ba08f3e421156b6a568cf3c2840d9343e348a8860405161120d929190613150565b60405180910390a460e0840151511561122f5761122f82876000015186611d57565b505060018055505050505050565b600254600090806112605760405162461bcd60e51b81526004016104c49061257d565b600060035460016112719190612ce5565b905061127e828583610b19565b949350505050565b600081815260076020526040812054151561060f565b60018054146112bd5760405162461bcd60e51b81526004016104c490612c36565b60026001819055546000906112d59084908435610b19565b
600081815260066020526040902054909150806113045760405162461bcd60e51b81526004016104c4906128a7565b80836040516020016113169190612b42565b60405160208183030381529060405280519060200120146113495760405162461bcd60e51b81526004016104c490612b55565b600061135b6080850160608601612251565b6001600160a01b03163b116113cf5760405162461bcd60e51b815260206004820152603460248201527f54656c65706f727465724d657373656e6765723a2064657374696e6174696f6e604482015273206164647265737320686173206e6f20636f646560601b60648201526084016104c4565b604051849083907f34795cc6b122b9a0ae684946319f1e14a577b4e8f9b3dda9ac94c21a54d3188c90600090a360008281526006602090815260408083208390558691611420918701908701612251565b61142d60e0870187613174565b60405160240161144094939291906131ba565b60408051601f198184030181529190526020810180516001600160e01b031663643477d560e11b179052905060006114886114816080870160608801612251565b5a84611e8a565b9050806114eb5760405162461bcd60e51b815260206004820152602b60248201527f54656c65706f727465724d657373656e6765723a20726574727920657865637560448201526a1d1a5bdb8819985a5b195960aa1b60648201526084016104c4565b50506001805550505050565b6040516001600160a01b03831660248201526044810182905261155a90849063a9059cbb60e01b906064015b60408051601f198184030181529190526020810180516001600160e01b03166001600160e01b031990931692909217909152611ea4565b505050565b8054600182015460009161060f916131e5565b6060600061158960056115848561155f565b611f76565b9050806000036115d85760408051600080825260208201909252906115d0565b60408051808201909152600080825260208201528152602001906001900390816115a95790505b509392505050565b6000816001600160401b038111156115f2576115f2612607565b60405190808252806020026020018201604052801561163757816020015b60408051808201909152600080825260208201528152602001906001900390816116105790505b50905060005b828110156115d05761164e85611f8c565b82828151811061166057611660612cf8565b60200260200101819052508061167590612d0e565b905061163d565b600080611687610431565b9050600060036000815461169a90612d0e565b919050819055905060006116b383876000015184610b19565b90506000604051806101
000160405280848152602001336001600160a01b031681526020018860000151815260200188602001516001600160a01b0316815260200188606001518152602001886080015181526020018781526020018860a00151815250905060008160405160200161172c91906131f8565b60405160208183030381529060405290506000808960400151602001511115611794576040890151516001600160a01b031661177a5760405162461bcd60e51b81526004016104c490612c7b565b604089015180516020909101516117919190611981565b90505b6040805180820182528a820151516001600160a01b039081168252602080830185905283518085018552865187830120815280820184815260008a815260058452869020915182555180516001830180546001600160a01b03191691909516179093559101516002909101558a51915190919086907f2a211ad4a59ab9d003852404f9c57c690704ee755f3c79d2c2812ad32da99df890611838908890869061320b565b60405180910390a360405163ee5b48eb60e01b81526005600160991b019063ee5b48eb9061186a908690600401612c23565b6020604051808303816000875af1158015611889573d6000803e3d6000fd5b505050506040513d601f19601f820116820180604052508101906118ad9190612564565b50939998505050505050505050565b60408051808201909152600080825260208201526118d98361155f565b82106119315760405162461bcd60e51b815260206004820152602160248201527f5265636569707451756575653a20696e646578206f7574206f6620626f756e646044820152607360f81b60648201526084016104c4565b8260020160008385600001546119479190612ce5565b81526020808201929092526040908101600020815180830190925280548252600101546001600160a01b0316918101919091529392505050565b6040516370a0823160e01b815230600482015260009081906001600160a01b038516906370a0823190602401602060405180830381865afa1580156119ca573d6000803e3d6000fd5b505050506040513d601f19601f820116820180604052508101906119ee9190612564565b9050611a056001600160a01b038516333086612058565b6040516370a0823160e01b81523060048201526000906001600160a01b038616906370a0823190602401602060405180830381865afa158015611a4c573d6000803e3d6000fd5b505050506040513d601f19601f82011682018060405250810190611a709190612564565b9050818111611ad65760405162461bcd60e51b815260206004820152602c60248201527f5361666545524332305472616e73666572
46726f6d3a2062616c616e6365206e60448201526b1bdd081a5b98dc99585cd95960a21b60648201526084016104c4565b611ae082826131e5565b95945050505050565b60008151600003611afc5750600161060f565b815160005b81811015611b5657846001600160a01b0316848281518110611b2557611b25612cf8565b60200260200101516001600160a01b031603611b465760019250505061060f565b611b4f81612d0e565b9050611b01565b506000949350505050565b80600003611bc15760405162461bcd60e51b815260206004820152602760248201527f54656c65706f727465724d657373656e6765723a207a65726f206d657373616760448201526665206e6f6e636560c81b60648201526084016104c4565b60009182526007602052604090912055565b6000611be484848460000151610b19565b6000818152600560209081526040918290208251808401845281548152835180850190945260018201546001600160a01b031684526002909101548383015290810191909152805191925090611c3b575050505050565b60008281526005602090815260408083208381556001810180546001600160a01b03191690556002018390558382018051830151878401516001600160a01b0390811686526009855283862092515116855292528220805491929091611ca2908490612ce5565b9250508190555082602001516001600160a01b031684837fd13a7935f29af029349bed0a2097455b91fd06190a30478c575db3f31e00bf578460200151604051611cec919061321e565b60405180910390a45050505050565b6001820180548291600285019160009182611d1583612d0e565b90915550815260208082019290925260400160002082518155910151600190910180546001600160a01b0319166001600160a01b039092169190911790555050565b80608001515a1015611db95760405162461bcd60e51b815260206004820152602560248201527f54656c65706f727465724d657373656e6765723a20696e73756666696369656e604482015264742067617360d81b60648201526084016104c4565b80606001516001600160a01b03163b600003611dda5761155a838383612096565b602081015160e0820151604051600092611df892869260240161323e565b60408051601f198184030181529190526020810180516001600160e01b031663643477d560e11b17905260608301516080840151919250600091611e3d919084611e8a565b905080611e5657611e4f858585612096565b5050505050565b604051849086907f34795cc6b122b9a0ae684946319f1e14a577b4e8f9b3dda9ac94c21a54d3188c90600090a35050505050565b600080
60008084516020860160008989f195945050505050565b6000611ef9826040518060400160405280602081526020017f5361666545524332303a206c6f772d6c6576656c2063616c6c206661696c6564815250856001600160a01b031661210b9092919063ffffffff16565b80519091501561155a5780806020019051810190611f179190613268565b61155a5760405162461bcd60e51b815260206004820152602a60248201527f5361666545524332303a204552433230206f7065726174696f6e20646964206e6044820152691bdd081cdd58d8d9595960b21b60648201526084016104c4565b6000818310611f8557816108d3565b5090919050565b604080518082019091526000808252602082015281546001830154819003611ff65760405162461bcd60e51b815260206004820152601960248201527f5265636569707451756575653a20656d7074792071756575650000000000000060448201526064016104c4565b60008181526002840160208181526040808420815180830190925280548252600180820180546001600160a01b03811685870152888852959094529490556001600160a01b031990921690559061204e908390612ce5565b9093555090919050565b6040516001600160a01b03808516602483015283166044820152606481018290526120909085906323b872dd60e01b90608401611523565b50505050565b806040516020016120a791906131f8565b60408051601f1981840301815282825280516020918201206000878152600690925291902055829084907f4619adc1017b82e02eaefac01a43d50d6d8de4460774bc370c3ff0210d40c985906120fe9085906131f8565b60405180910390a3505050565b606061127e848460008585600080866001600160a01b031685876040516121329190613283565b60006040518083038185875af1925050503d806000811461216f576040519150601f19603f3d011682016040523d82523d6000602084013e612174565b606091505b509150915061218587838387612190565b979650505050505050565b606083156121ff5782516000036121f8576001600160a01b0385163b6121f85760405162461bcd60e51b815260206004820152601d60248201527f416464726573733a2063616c6c20746f206e6f6e2d636f6e747261637400000060448201526064016104c4565b508161127e565b61127e83838151156122145781518083602001fd5b8060405162461bcd60e51b81526004016104c49190612c23565b6001600160a01b038116811461224357600080fd5b50565b80356104fe8161222e565b60006020828403121561226357600080fd5b81356108d38161222e565b60006020828403
121561228057600080fd5b5035919050565b828152606081016108d3602083018480516001600160a01b03168252602090810151910152565b6000602082840312156122c057600080fd5b81356001600160401b038111156122d657600080fd5b820160e081850312156108d357600080fd5b600061010082840312156122fb57600080fd5b50919050565b60006020828403121561231357600080fd5b81356001600160401b0381111561232957600080fd5b61127e848285016122e8565b6000806040838503121561234857600080fd5b50508035926020909101359150565b815181526020808301516001600160a01b0316908201526040810161060f565b60008060006060848603121561238c57600080fd5b83359250602084013561239e8161222e565b929592945050506040919091013590565b6000806000606084860312156123c457600080fd5b505081359360208301359350604090920135919050565b60008083601f8401126123ed57600080fd5b5081356001600160401b0381111561240457600080fd5b6020830191508360208260051b850101111561241f57600080fd5b9250929050565b60008060008060008086880360a081121561244057600080fd5b8735965060208801356001600160401b038082111561245e57600080fd5b61246a8b838c016123db565b90985096508691506040603f198401121561248457600080fd5b60408a01955060808a013592508083111561249e57600080fd5b50506124ac89828a016123db565b979a9699509497509295939492505050565b600080604083850312156124d157600080fd5b82356124dc8161222e565b915060208301356124ec8161222e565b809150509250929050565b6000806040838503121561250a57600080fd5b823563ffffffff811681146124dc57600080fd5b6000806040838503121561253157600080fd5b8235915060208301356001600160401b0381111561254e57600080fd5b61255a858286016122e8565b9150509250929050565b60006020828403121561257657600080fd5b5051919050565b60208082526027908201527f54656c65706f727465724d657373656e6765723a207a65726f20626c6f636b636040820152661a185a5b88125160ca1b606082015260800190565b60208082526023908201527f5265656e7472616e63794775617264733a2073656e646572207265656e7472616040820152626e637960e81b606082015260800190565b634e487b7160e01b600052604160045260246000fd5b604080519081016001600160401b038111828210171561263f5761263f612607565b60405290565b60405160c081016001600160401b03811182821017156126
3f5761263f612607565b60405161010081016001600160401b038111828210171561263f5761263f612607565b604051601f8201601f191681016001600160401b03811182821017156126b2576126b2612607565b604052919050565b6000604082840312156126cc57600080fd5b6126d461261d565b905081356126e18161222e565b808252506020820135602082015292915050565b60006001600160401b0382111561270e5761270e612607565b5060051b60200190565b600082601f83011261272957600080fd5b8135602061273e612739836126f5565b61268a565b82815260059290921b8401810191818101908684111561275d57600080fd5b8286015b848110156127815780356127748161222e565b8352918301918301612761565b509695505050505050565b60006001600160401b038211156127a5576127a5612607565b50601f01601f191660200190565b600082601f8301126127c457600080fd5b81356127d26127398261278c565b8181528460208386010111156127e757600080fd5b816020850160208301376000918101602001919091529392505050565b600060e0823603121561281657600080fd5b61281e612645565b8235815261282e60208401612246565b602082015261284036604085016126ba565b60408201526080830135606082015260a08301356001600160401b038082111561286957600080fd5b61287536838701612718565b608084015260c085013591508082111561288e57600080fd5b5061289b368286016127b3565b60a08301525092915050565b60208082526026908201527f54656c65706f727465724d657373656e6765723a206d657373616765206e6f7460408201526508199bdd5b9960d21b606082015260800190565b6000808335601e1984360301811261290457600080fd5b83016020810192503590506001600160401b0381111561292357600080fd5b8060051b360382131561241f57600080fd5b8183526000602080850194508260005b858110156129735781356129588161222e565b6001600160a01b031687529582019590820190600101612945565b509495945050505050565b6000808335601e1984360301811261299557600080fd5b83016020810192503590506001600160401b038111156129b457600080fd5b8060061b360382131561241f57600080fd5b8183526000602080850194508260005b858110156129735781358752828201356129ef8161222e565b6001600160a01b03168784015260409687019691909101906001016129d6565b6000808335601e19843603018112612a2657600080fd5b83016020810192503590506001600160401b03811115612a4557600080fd5b
80360382131561241f57600080fd5b81835281816020850137506000828201602090810191909152601f909101601f19169091010190565b6000610100823584526020830135612a948161222e565b6001600160a01b0316602085015260408381013590850152612ab860608401612246565b6001600160a01b0316606085015260808381013590850152612add60a08401846128ed565b8260a0870152612af08387018284612935565b92505050612b0160c084018461297e565b85830360c0870152612b148382846129c6565b92505050612b2560e0840184612a0f565b85830360e0870152612b38838284612a54565b9695505050505050565b6020815260006108d36020830184612a7d565b60208082526029908201527f54656c65706f727465724d657373656e6765723a20696e76616c6964206d65736040820152680e6c2ceca40d0c2e6d60bb1b606082015260800190565b606081526000612bb16060830185612a7d565b90506108d3602083018480516001600160a01b03168252602090810151910152565b60005b83811015612bee578181015183820152602001612bd6565b50506000910152565b60008151808452612c0f816020860160208601612bd3565b601f01601f19169290920160200192915050565b6020815260006108d36020830184612bf7565b60208082526025908201527f5265656e7472616e63794775617264733a207265636569766572207265656e7460408201526472616e637960d81b606082015260800190565b60208082526034908201527f54656c65706f727465724d657373656e6765723a207a65726f2066656520617360408201527373657420636f6e7472616374206164647265737360601b606082015260800190565b634e487b7160e01b600052601160045260246000fd5b8082018082111561060f5761060f612ccf565b634e487b7160e01b600052603260045260246000fd5b600060018201612d2057612d20612ccf565b5060010190565b600060408284031215612d3957600080fd5b6108d383836126ba565b80516104fe8161222e565b600082601f830112612d5f57600080fd5b8151612d6d6127398261278c565b818152846020838601011115612d8257600080fd5b61127e826020830160208701612bd3565b805180151581146104fe57600080fd5b60008060408385031215612db657600080fd5b82516001600160401b0380821115612dcd57600080fd5b9084019060608287031215612de157600080fd5b604051606081018181108382111715612dfc57612dfc612607565b604052825181526020830151612e118161222e565b6020820152604083015182811115612e2857600080fd5b612e348882
8601612d4e565b6040830152509350612e4b91505060208401612d93565b90509250929050565b600082601f830112612e6557600080fd5b81516020612e75612739836126f5565b82815260059290921b84018101918181019086841115612e9457600080fd5b8286015b84811015612781578051612eab8161222e565b8352918301918301612e98565b600082601f830112612ec957600080fd5b81516020612ed9612739836126f5565b82815260069290921b84018101918181019086841115612ef857600080fd5b8286015b848110156127815760408189031215612f155760008081fd5b612f1d61261d565b8151815284820151612f2e8161222e565b81860152835291830191604001612efc565b600060208284031215612f5257600080fd5b81516001600160401b0380821115612f6957600080fd5b908301906101008286031215612f7e57600080fd5b612f86612667565b82518152612f9660208401612d43565b602082015260408301516040820152612fb160608401612d43565b60608201526080830151608082015260a083015182811115612fd257600080fd5b612fde87828601612e54565b60a08301525060c083015182811115612ff657600080fd5b61300287828601612eb8565b60c08301525060e08301518281111561301a57600080fd5b61302687828601612d4e565b60e08301525095945050505050565b600081518084526020808501945080840160005b838110156129735781516001600160a01b031687529582019590820190600101613049565b600081518084526020808501945080840160005b83811015612973576130a8878351805182526020908101516001600160a01b0316910152565b6040969096019590820190600101613082565b60006101008251845260018060a01b0360208401511660208501526040830151604085015260608301516130fa60608601826001600160a01b03169052565b506080830151608085015260a08301518160a086015261311c82860182613035565b91505060c083015184820360c0860152613136828261306e565b91505060e083015184820360e0860152611ae08282612bf7565b6001600160a01b038316815260406020820181905260009061127e908301846130bb565b6000808335601e1984360301811261318b57600080fd5b8301803591506001600160401b038211156131a557600080fd5b60200191503681900382131561241f57600080fd5b8481526001600160a01b0384166020820152606060408201819052600090612b389083018486612a54565b8181038181111561060f5761060f612ccf565b6020815260006108d360208301846130bb565b606081526000612bb160
608301856130bb565b81516001600160a01b03168152602080830151908201526040810161060f565b8381526001600160a01b0383166020820152606060408201819052600090611ae090830184612bf7565b60006020828403121561327a57600080fd5b6108d382612d93565b60008251613295818460208701612bd3565b919091019291505056fea2646970667358221220586881dd1413fe17197100ceb55646481dae802ef65d37df603c3915f51a4b6364736f6c63430008120033", "storage": { "0x0000000000000000000000000000000000000000000000000000000000000000": "0x0000000000000000000000000000000000000000000000000000000000000001", "0x0000000000000000000000000000000000000000000000000000000000000001": "0x0000000000000000000000000000000000000000000000000000000000000001" }, "nonce": 1 }, "0x618FEdD9A45a8C456812ecAAE70C671c6249DfaC": { "balance": "0x0", "nonce": 1 } } ``` The values above are taken from the `v1.0.0` [release artifacts](https://github.com/ava-labs/icm-contracts/releases/tag/v1.0.0). The contract address, deployed bytecode, and deployer address are unique per major release. All of the other values should remain the same. ## Deployed Addresses | Contract | Address | Chain | | --------------------- | ---------------------------------------------- | ------------------------ | | `TeleporterMessenger` | **0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf** | All chains, all networks | | `TeleporterRegistry` | **0x7C43605E14F391720e1b37E49C78C4b03A488d98** | Mainnet C-Chain | | `TeleporterRegistry` | **0xF86Cb19Ad8405AEFa7d09C778215D2Cb6eBfB228** | Fuji C-Chain | - Using [Nick's method](https://yamenmerhi.medium.com/nicks-method-ethereum-keyless-execution-168a6659479c#), `TeleporterMessenger` deploys at a universal address across all chains, varying with each `teleporter` Major release. 
**Compatibility exists only between same-version `TeleporterMessenger` instances.** See [TeleporterMessenger Contract Deployment](https://github.com/ava-labs/icm-contracts/blob/main/utils/contract-deployment/README.md) and [Deploy TeleporterMessenger to an Avalanche L1](#deploy-teleportermessenger-to-an-avalanche-l1) for more details.
- `TeleporterRegistry` can be deployed to any address. See [Deploy TeleporterRegistry to an Avalanche L1](#deploy-teleporterregistry-to-an-avalanche-l1) for details. The table above enumerates the canonical registry addresses on the Mainnet and Fuji C-Chains.

## A Note on Versioning

Release versions follow the [semver](https://semver.org/) convention of incompatible Major releases. A new Major version is released whenever the `TeleporterMessenger` bytecode changes and a new version of `TeleporterMessenger` is meant to be deployed. Because Nick's method is used to deploy the contract to the same address on all chains (see [TeleporterMessenger Contract Deployment](https://github.com/ava-labs/icm-contracts/blob/main/utils/contract-deployment/README.md) for details), each new release version also results in a different `TeleporterMessenger` contract address. Minor and Patch versions may pertain to contract changes that do not change the `TeleporterMessenger` bytecode, or to changes in the test frameworks, and will only be included in tags.

## Upgradability

`TeleporterMessenger` is a non-upgradeable contract and cannot be changed once it is deployed. This makes the contracts immutable and ensures that the contract's behavior at each address is unchanging. However, to allow for new features and potential bug fixes, new versions of `TeleporterMessenger` can be deployed to different addresses.
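To make the version-per-address model concrete, here is a minimal sketch of the difference between hard-coding one messenger deployment and resolving the current one through a registry. The interface shapes below are illustrative simplifications of our own; the canonical definitions live in the icm-contracts repository.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.18;

// Illustrative, trimmed-down interfaces -- not the canonical definitions.
interface ITeleporterMessenger {
    // message-sending and receipt functions omitted for brevity
}

interface ITeleporterRegistry {
    function getLatestTeleporter() external view returns (ITeleporterMessenger);
}

// Hard-coding the v1.x address pins the dApp to that messenger version forever.
contract PinnedExample {
    address public constant TELEPORTER_V1 =
        0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf;
}

// Resolving through a registry picks up new versions as they are registered.
contract RegistryExample {
    ITeleporterRegistry public immutable registry;

    constructor(address registryAddress) {
        registry = ITeleporterRegistry(registryAddress);
    }

    function teleporter() public view returns (ITeleporterMessenger) {
        return registry.getLatestTeleporter();
    }
}
```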
The [TeleporterRegistry](https://github.com/ava-labs/icm-contracts/blob/main/contracts/teleporter/registry/TeleporterRegistry.sol) is used to keep track of the deployed versions of Teleporter, and to provide a standard interface for dApps to interact with the different `TeleporterMessenger` versions. `TeleporterRegistry` **is not mandatory** for dApps built on top of ICM, but dApps are recommended to leverage the registry to ensure they use the latest `TeleporterMessenger` version available. Another recommended standard is to have a single canonical `TeleporterRegistry` for each Avalanche L1. Unlike the `TeleporterMessenger` contract, the registry does not need to be deployed to the same address on every chain. This means the registry does not need a Nick's method deployment, and can be at different contract addresses on different chains. For more information on the registry and how to integrate with ICM contracts, see the [Upgradability doc](https://github.com/ava-labs/icm-contracts/blob/main/contracts/teleporter/registry/README.md).

## Deploy TeleporterMessenger to an Avalanche L1

From the root of the repo, the `TeleporterMessenger` contract can be deployed by calling

```bash
./scripts/deploy_teleporter.sh --version <version> --rpc-url <rpc_url> [OPTIONS]
```

Required arguments:

- `--version <version>` Specify the release version to deploy. These will all be of the form `v1.X.0`. Each `TeleporterMessenger` version can only send and receive messages from the **same** `TeleporterMessenger` version on another chain. You can see a list of released versions at https://github.com/ava-labs/icm-contracts/releases.
- `--rpc-url <rpc_url>` Specify the RPC URL of the node to use.

Options:

- `--private-key <private_key>` Funds the deployer address with the account held by `<private_key>`.

To ensure that `TeleporterMessenger` can be deployed to the same address on every EVM-based chain, it uses [Nick's Method](https://yamenmerhi.medium.com/nicks-method-ethereum-keyless-execution-168a6659479c) to deploy from a static deployer address.
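As a hedged sketch, the keyless Nick's-method flow can also be driven by hand with Foundry's `cast`: fund the static deployer address, then broadcast the presigned deployment transaction from the release artifacts. The artifact file names below are assumptions based on the release layout and should be checked against the actual release page before use.

```bash
# Assumed artifact names from the v1.0.0 release page -- verify before use.
# 1. Fund the static deployer address with the required native tokens.
cast send $(cat TeleporterMessenger_Deployer_Address_v1.0.0.txt) \
    --value 10ether \
    --rpc-url $RPC_URL \
    --private-key $FUNDED_ACCOUNT_KEY

# 2. Broadcast the presigned (keyless) deployment transaction.
cast publish --rpc-url $RPC_URL \
    "$(cat TeleporterMessenger_Deployment_Transaction_v1.0.0.txt)"
```

Because the deployment transaction is presigned, anyone who funds the deployer address can broadcast it, and the resulting contract address is the same on every chain.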
ICM contracts cost exactly `10eth` in the Avalanche L1's native gas token to deploy, which must be sent to the deployer address. `deploy_teleporter.sh` will send the necessary native tokens to the deployer address if it is provided with a private key for an account with sufficient funds. Alternatively, the deployer address can be funded externally. The deployer address for each version can be found by looking up the appropriate version at https://github.com/ava-labs/icm-contracts/releases and downloading `TeleporterMessenger_Deployer_Address_<version>.txt`.

Alternatively, for new Avalanche L1s, the `TeleporterMessenger` contract can be directly included in the genesis file as documented [here](https://github.com/ava-labs/icm-contracts/blob/main/contracts/teleporter/README.md#teleporter-messenger-contract-deployment).

## Deploy TeleporterRegistry to an Avalanche L1

There should only be one canonical `TeleporterRegistry` deployed for each chain. If one does not exist, it is recommended to deploy the registry so ICM contracts can always use the most recent `TeleporterMessenger` version available. The registry does not need to be deployed to the same address on every chain, and therefore does not need a Nick's method transaction. To deploy, run the following command from the root of the repository:

```bash
./scripts/deploy_registry.sh --version <version> --rpc-url <rpc_url> --private-key <private_key> [OPTIONS]
```

Required arguments:

- `--version <version>` Specify the release version to deploy. These will all be of the form `v1.X.0`.
- `--rpc-url <rpc_url>` Specify the RPC URL of the node to use.
- `--private-key <private_key>` Funds the deployer address with the account held by `<private_key>`.

`deploy_registry.sh` will deploy a new `TeleporterRegistry` contract for the intended release version, and will also register the corresponding `TeleporterMessenger` contract as the initial protocol version.

## Verify a Deployment of TeleporterMessenger

`TeleporterMessenger` can be verified on L1s using Sourcify.
`v1.0.0` of this repository must be checked out in order to match the source code properly.

```bash
git checkout v1.0.0
git submodule update --init --recursive
cd contracts
forge verify-contract 0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf \
    src/teleporter/TeleporterMessenger.sol:TeleporterMessenger \
    --chain-id <chain_id> \
    --rpc-url <rpc_url> \
    --verifier sourcify \
    --compiler-version v0.8.18+commit.87f61d96 \
    --num-of-optimizations 200
```

# Upgradeability (/docs/cross-chain/icm-contracts/upgradeability) --- title: "Upgradeability" description: "The TeleporterMessenger contract is non-upgradable. However, there could still be new versions of TeleporterMessenger contracts needed to be deployed in the future." edit_url: https://github.com/ava-labs/teleporter/edit/main/contracts/teleporter/registry/README.md ---

# TeleporterMessenger Contracts Upgradability

## Overview

The `TeleporterMessenger` contract is non-upgradable; once a version of the contract is deployed, it cannot be changed. This is intended to prevent any changes to the deployed contract that could introduce bugs or vulnerabilities. However, new versions of `TeleporterMessenger` contracts may still need to be deployed in the future. `TeleporterRegistry` provides applications that use a `TeleporterMessenger` instance a minimal-step process to integrate with new versions of `TeleporterMessenger`.

The `TeleporterRegistry` maintains a mapping of `TeleporterMessenger` contract versions to their addresses. When a new `TeleporterMessenger` version is deployed, its address can be added to the `TeleporterRegistry`. The `TeleporterRegistry` can only be updated through an ICM off-chain message that meets the following requirements:

- `sourceChainAddress` must match `VALIDATORS_SOURCE_ADDRESS = address(0)` - The zero address can only be set as the source chain address by an ICM off-chain message, and cannot be set by an on-chain ICM message.
- `sourceBlockchainID` must match the blockchain ID that the registry is deployed on
- `destinationBlockchainID` must match the blockchain ID that the registry is deployed on
- `destinationAddress` must match the address of the registry

In the `TeleporterRegistry` contract, the `latestVersion` state variable returns the highest version number that has been registered in the registry. The `getLatestTeleporter` function returns the `ITeleporterMessenger` that is registered with the corresponding version.

## Design

- `TeleporterRegistry` is deployed on each blockchain that needs to keep track of `TeleporterMessenger` contract versions.
- The registry's contract address on each blockchain does not need to be the same, and does not require a Nick's method transaction for deployment.
- Each registry's mapping of version to contract address is independent of registries on other blockchains, and chains can decide on their own registry mapping entries.
- Each blockchain should only have one canonical `TeleporterRegistry` contract.
- The `TeleporterRegistry` contract can be initialized with a list of initial registry entries, which are `TeleporterMessenger` contract versions and their addresses.
- The registry keeps track of a mapping of `TeleporterMessenger` contract versions to their addresses, and vice versa, a mapping of `TeleporterMessenger` contract addresses to their versions.
- Version zero is an invalid version, and is used to indicate that a `TeleporterMessenger` contract has not been registered yet.
- Once a version number is registered in the registry, it cannot be changed, but a previously registered protocol address can be added to the registry with a new version. This is especially important in the case of a rollback to a previous `TeleporterMessenger` version, in which case the previous `TeleporterMessenger` contract address would need to be registered with a new version in the registry.

## Integrating `TeleporterRegistryApp` into a dApp
*Upgrade UML diagram*
[TeleporterRegistryApp](https://github.com/ava-labs/teleporter/blob/main/contracts/teleporter/registry/TeleporterRegistryApp.sol) is an abstract contract that helps integrate the `TeleporterRegistry` into ICM contracts. To support upgradeable contracts, there is also a corresponding `TeleporterRegistryAppUpgradeable` contract that is upgrade compatible. By inheriting from `TeleporterRegistryApp`, dApps get:

- Ability to send ICM messages through the latest version of the `TeleporterMessenger` contract registered in the Teleporter registry. (The dApp can also override this to use a specific version of the `TeleporterMessenger` contract.)
- `minTeleporterVersion` management that allows the dApp to specify the minimum `TeleporterMessenger` version that can send messages to the dApp.
- Access-controlled utility to update the `minTeleporterVersion`.
- Access-controlled utility to pause/unpause interaction with specific `TeleporterMessenger` addresses.

To integrate `TeleporterRegistryApp` with a dApp, pass in the Teleporter registry address inside the constructor. For upgradeable contracts, `TeleporterRegistryAppUpgradeable` can be inherited, and the derived contract's `initializer` function should call either `__TeleporterRegistryApp_init` or `__TeleporterRegistryApp_init_unchained`.

An example dApp looks like:

```solidity
// An example dApp that integrates with the Teleporter registry
// to send/receive ICM messages.
contract ExampleApp is TeleporterRegistryApp {
    ...
    // Constructor passes in the Teleporter registry address
    // to the TeleporterRegistryApp contract.
    constructor(
        address teleporterRegistryAddress,
        uint256 minTeleporterVersion
    ) TeleporterRegistryApp(teleporterRegistryAddress, minTeleporterVersion) {
        currentBlockchainID = IWarpMessenger(WARP_PRECOMPILE_ADDRESS)
            .getBlockchainID();
    }
    ...
    // Handles receiving ICM messages,
    // and also checks that the sender is a valid TeleporterMessenger contract.
    function _receiveTeleporterMessage(
        bytes32 sourceBlockchainID,
        address originSenderAddress,
        bytes memory message
    ) internal override {
        // implementation
    }

    // Implements the access control checks for the dApp's interaction with TeleporterMessenger versions.
    function _checkTeleporterRegistryAppAccess() internal view virtual override {
        // implementation
    }
}
```

### Checking TeleporterRegistryApp access

To prevent anyone from calling the dApp's `updateMinTeleporterVersion`, which would disallow messages from old `TeleporterMessenger` versions from being received, this function should be safeguarded with access controls. All contracts deriving from `TeleporterRegistryApp` will need to implement `TeleporterRegistryApp._checkTeleporterRegistryAppAccess`.

For example, [TeleporterRegistryOwnableApp](https://github.com/ava-labs/teleporter/blob/main/contracts/teleporter/registry/TeleporterRegistryOwnableApp.sol) is an abstract contract that inherits `TeleporterRegistryApp` and implements `_checkTeleporterRegistryAppAccess` to check whether the caller is the owner. There is also a corresponding `TeleporterRegistryOwnableAppUpgradeable` contract that is upgrade compatible.

```solidity
function _checkTeleporterRegistryAppAccess() internal view virtual override {
    _checkOwner();
}
```

Another example would be a dApp that has different roles and privileges. `_checkTeleporterRegistryAppAccess` can be implemented to check whether the caller has a specific role.

```solidity
function _checkTeleporterRegistryAppAccess() internal view virtual override {
    require(
        hasRole(TELEPORTER_REGISTRY_APP_ADMIN, _msgSender()),
        "TeleporterRegistryApp: caller does not have access"
    );
}
```

### Sending with specific TeleporterMessenger version

For sending messages with the Teleporter registry, dApps should use `TeleporterRegistryApp._getTeleporterMessenger`.
This function by default extends `TeleporterRegistry.getLatestTeleporter`, using the latest version, and adds an extra check on whether the latest `TeleporterMessenger` address is paused. If the dApp wants to send a message through a specific `TeleporterMessenger` version, it can override `_getTeleporterMessenger()` to use that specific version with `TeleporterRegistry.getTeleporterFromVersion`.

The `TeleporterRegistryApp._sendTeleporterMessage` function makes sending ICM messages easier. The function uses `_getTeleporterMessenger` to get the sending `TeleporterMessenger` version, pays for `TeleporterMessenger` fees from the dApp's balance, and sends the cross-chain message.

Using latest version:

```solidity
ITeleporterMessenger teleporterMessenger = _getTeleporterMessenger();
```

Using specific version:

```solidity
// Override _getTeleporterMessenger to use specific version.
function _getTeleporterMessenger() internal view override returns (ITeleporterMessenger) {
    ITeleporterMessenger teleporter = teleporterRegistry
        .getTeleporterFromVersion($VERSION);
    require(
        !pausedTeleporterAddresses[address(teleporter)],
        "TeleporterRegistryApp: Teleporter sending version paused"
    );
    return teleporter;
}

ITeleporterMessenger teleporterMessenger = _getTeleporterMessenger();
```

### Receiving from specific TeleporterMessenger versions

`TeleporterRegistryApp` also provides an initial implementation of [ITeleporterReceiver.receiveTeleporterMessage](https://github.com/ava-labs/teleporter/blob/main/contracts/teleporter/ITeleporterReceiver.sol) that ensures `_msgSender` is a `TeleporterMessenger` contract with a version greater than or equal to `minTeleporterVersion`.
This supports the case where a dApp wants to use a new version of the `TeleporterMessenger` contract, but still wants to be able to receive messages from the old `TeleporterMessenger` contract. The dApp can override `_receiveTeleporterMessage` to implement its own logic for receiving messages from `TeleporterMessenger` contracts.

## Managing a TeleporterRegistryApp dApp

dApps that implement `TeleporterRegistryApp` automatically use the latest `TeleporterMessenger` version registered with the `TeleporterRegistry`. Interaction with underlying `TeleporterMessenger` versions can be managed by setting the minimum `TeleporterMessenger` version, and by pausing and unpausing specific versions.

The following sections include example `cast send` commands for issuing transactions that call contract functions. See the [Foundry Book](https://book.getfoundry.sh/reference/cast/cast-send) for details on how to issue transactions using common wallet options.

### Managing the Minimum TeleporterMessenger version

The `TeleporterRegistryApp` contract constructor saves the Teleporter registry in a state variable used by the inheriting dApp contract, and initializes `minTeleporterVersion` to the highest `TeleporterMessenger` version registered in `TeleporterRegistry`. `minTeleporterVersion` allows dApps to specify the `TeleporterMessenger` versions that may interact with them.

#### Updating `minTeleporterVersion`

The `TeleporterRegistryApp.updateMinTeleporterVersion` function updates the `minTeleporterVersion` used to check which `TeleporterMessenger` versions can be used for sending and receiving messages. **Once the `minTeleporterVersion` is increased, any undelivered messages sent by other chains using older versions of `TeleporterMessenger` will never be able to be received**. The `updateMinTeleporterVersion` function can only be called with a version greater than the current `minTeleporterVersion` and less than `latestVersion` in the Teleporter registry.
> Example: Update the minimum TeleporterMessenger version to 2
>
> ```bash
> cast send <teleporter_registry_app_address> "updateMinTeleporterVersion(uint256)" 2
> ```

### Pausing TeleporterMessenger version interactions

dApps that inherit from `TeleporterRegistryApp` can pause `TeleporterMessenger` interactions by calling `TeleporterRegistryApp.pauseTeleporterAddress`. This function prevents the dApp contract from interacting with the paused `TeleporterMessenger` address when sending or receiving ICM messages. `pauseTeleporterAddress` can only be called by addresses that pass the dApp's `TeleporterRegistryApp._checkTeleporterRegistryAppAccess` check. The `TeleporterMessenger` address corresponding to a `TeleporterMessenger` version can be fetched from the registry with `TeleporterRegistry.getAddressFromVersion`.

> Example: Pause TeleporterMessenger version 3
>
> ```bash
> versionThreeAddress=$(cast call <teleporter_registry_address> "getAddressFromVersion(uint256)(address)" 3)
> cast send <teleporter_registry_app_address> "pauseTeleporterAddress(address)" $versionThreeAddress
> ```

#### Pause all TeleporterMessenger interactions

To pause all `TeleporterMessenger` interactions, `TeleporterRegistryApp.pauseTeleporterAddress` must be called for every `TeleporterMessenger` version from the `minTeleporterVersion` to the latest `TeleporterMessenger` version registered in `TeleporterRegistry`. Note that there may be gaps in `TeleporterMessenger` versions registered with `TeleporterRegistry`, but they will always be in increasing order. The latest `TeleporterMessenger` version can be obtained by inspecting the public variable `TeleporterRegistry.latestVersion`. The `minTeleporterVersion` can be obtained by calling `TeleporterRegistryApp.getMinTeleporterVersion`.
> Example: Pause all registered TeleporterMessenger versions
>
> ```bash
> # Fetch the minimum TeleporterMessenger version
> minVersion=$(cast call <teleporter_registry_app_address> "getMinTeleporterVersion()(uint256)")
>
> # Fetch the latest registered version
> latestVersion=$(cast call <teleporter_registry_address> "latestVersion()(uint256)")
>
> # Pause all registered versions
> for ((version=minVersion; version<=latestVersion; version++))
> do
>     # Fetch the version address if it's registered
>     versionAddress=$(cast call <teleporter_registry_address> "getAddressFromVersion(uint256)(address)" $version)
>
>     if [ $? -eq 0 ]; then
>         # If cast call is successful, proceed to cast send
>         cast send <teleporter_registry_app_address> "pauseTeleporterAddress(address)" $versionAddress
>     else
>         # If cast call fails, print an error message and skip to the next iteration
>         echo "Version $version not registered. Skipping."
>     fi
> done
> ```

#### Unpausing TeleporterMessenger version interactions

As with pausing, dApps can unpause `TeleporterMessenger` interactions by calling `TeleporterRegistryApp.unpauseTeleporterAddress`. This unpause function allows receiving `TeleporterMessenger` messages from the unpaused `TeleporterMessenger` address, and also enables the sending of messages through the unpaused `TeleporterMessenger` address in `_getTeleporterMessenger()`. Unpausing is also only allowed for addresses passing the dApp's `_checkTeleporterRegistryAppAccess` check. Note that receiving `TeleporterMessenger` messages is still governed by the `minTeleporterVersion` check, so even if a `TeleporterMessenger` address is unpaused, the dApp will not receive messages from it if its `TeleporterMessenger` version is less than `minTeleporterVersion`.
> Example: Unpause TeleporterMessenger version 3
>
> ```bash
> versionThreeAddress=$(cast call <teleporter_registry_address> "getAddressFromVersion(uint256)(address)" 3)
> cast send <teleporter_registry_app_address> "unpauseTeleporterAddress(address)" $versionThreeAddress
> ```

# Avalanche Interchain Token Transfer (ICTT) (/docs/cross-chain/interchain-token-transfer/overview) --- title: "Avalanche Interchain Token Transfer (ICTT)" description: "This page describes the Avalanche Interchain Token Transfer (ICTT)" edit_url: https://github.com/ava-labs/icm-contracts/edit/main/contracts/ictt/README.md ---

# Avalanche Interchain Token Transfer (ICTT)

## Overview

Avalanche Interchain Token Transfer (ICTT) is an application that allows users to transfer tokens between L1s. The implementation is a set of smart contracts that are deployed across multiple L1s, and leverages [ICM](https://github.com/ava-labs/icm-contracts) for cross-chain communication.

Each token transferrer instance consists of one "home" contract and at least one but possibly many "remote" contracts. Each home contract instance manages one asset to be transferred out to `TokenRemote` instances. The home contract lives on the L1 where the asset to be transferred exists. A transfer consists of locking the asset as collateral on the home L1 and minting a representation of the asset on the remote L1. The remote contracts, each of which has a single specified home contract, live on other L1s that want to import the asset transferred by their specified home. The token transferrers are designed to be permissionless: anyone can register compatible `TokenRemote` instances to allow for transferring tokens from the `TokenHome` instance to that new `TokenRemote` instance. The home contract keeps track of token balances transferred to each `TokenRemote` instance, and handles returning the original tokens back to the user when assets are transferred back to the `TokenHome` instance. `TokenRemote` instances are registered with their home contract via an ICM message upon creation.
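The lock-and-mint accounting described above can be sketched in a few lines. This is our own simplification, not the real ICTT contracts: fee handling, ICM message encoding, and registration checks are all omitted.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.18;

// Illustrative sketch of TokenHome collateral accounting -- not the real ICTT code.
contract SketchTokenHome {
    // Collateral attributed to each registered remote,
    // keyed by (remote blockchain ID, remote contract address).
    mapping(bytes32 => mapping(address => uint256)) public transferredBalances;

    // Called when a user transfers `amount` out to a remote: lock and account.
    function _lockForRemote(bytes32 remoteBlockchainID, address remote, uint256 amount) internal {
        // (token transferFrom / msg.value handling omitted)
        transferredBalances[remoteBlockchainID][remote] += amount;
    }

    // Called when a remote burns tokens and routes them back home: unlock.
    function _releaseFromRemote(bytes32 remoteBlockchainID, address remote, uint256 amount) internal {
        transferredBalances[remoteBlockchainID][remote] -= amount;
        // (token transfer back to the recipient omitted)
    }
}
```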
Home contract instances specify the asset to be transferred as either an ERC20 token or the native token, and they allow for transferring the token to any registered `TokenRemote` instances. The token representation on the remote chain can also be either an ERC20 or a native token, allowing users to have any combination of ERC20 and native tokens between home and remote chains:

- `ERC20` -> `ERC20`
- `ERC20` -> `Native`
- `Native` -> `ERC20`
- `Native` -> `Native`

The remote tokens are designed to be compatible with the token transferrer on the home chain by default, and they allow custom logic to be implemented in addition. For example, developers can inherit and extend the `ERC20TokenRemote` contract to add additional functionality, such as custom minting, burning, or transfer logic.

The token transferrer also supports "multi-hop" transfers, where tokens can be transferred between remote chains. To illustrate, consider two remotes _Ra_ and _Rb_ that are both connected to the same home _H_. A multi-hop transfer from _Ra_ to _Rb_ first gets routed from _Ra_ to _H_, where remote balances are updated, and then _H_ automatically routes the transfer on to _Rb_.

In addition to supporting basic token transfers, the token transferrer contracts offer a `sendAndCall` interface for transferring tokens and using them in a smart contract interaction all within a single ICM message. If the call to the recipient smart contract fails, the transferred tokens are sent to a fallback recipient address on the destination chain of the transfer. The `sendAndCall` interface enables the direct use of transferred tokens in dApps on other chains, such as performing swaps, using the tokens to pay for fees when invoking services, etc.

## Upgradability

The token transferrer contracts implement both upgradeable and non-upgradeable versions.
The non-upgradeable versions are extensions of their respective upgradeable token transferrer contracts, and have a `constructor` that calls the `initialize` function of the upgradeable version. The upgradeable contracts are ERC-7201 compliant, and use namespaced storage to store the state of the contract.

## `ITokenTransferrer`

Interface that defines the events token transfer contract implementations must emit. Also defines the message types and formats of messages between all implementations.

## `IERC20TokenTransferrer` and `INativeTokenTransferrer`

Interfaces that define the external functions for interacting with token transfer contract implementations of each type. The ERC20 and native token transferrer interfaces differ in that the native token transferrer functions are `payable` and do not take an explicit amount parameter (it is implied by `msg.value`), while the ERC20 token transferrer functions are not `payable` and require an explicit amount parameter. Otherwise, they include the same functions.

## `TokenHome`

An abstract implementation of `ITokenTransferrer` for a token transfer contract on the "home" chain with the asset to be transferred. Each `TokenHome` instance supports transferring exactly one token type (ERC20 or native) on its chain to arbitrarily many "remote" instances on other chains. It handles locking tokens to be sent to `TokenRemote` instances, as well as receiving token transfer messages to either redeem tokens it holds as collateral (i.e. unlock), or route them to other `TokenRemote` instances (i.e. "multi-hop"). In the case of a multi-hop transfer, the `TokenHome` already has the collateral locked from when the tokens were originally transferred to the first `TokenRemote` instance, so it simply updates the accounting of the transferred balances to each respective `TokenRemote` instance. Remote contracts must first be registered with a `TokenHome` instance before the home contract will allow for sending tokens to them.
This is to prevent tokens from being transferred to invalid remote addresses. Anyone is able to deploy and register remote contracts, which may have been modified from this repository. It is the responsibility of the users of the home contract to independently evaluate each remote for its security and correctness.

## `ERC20TokenHome`

A concrete implementation of `TokenHome` and `IERC20TokenTransferrer` that handles the locking and releasing of an ERC20 token.

## `NativeTokenHome`

A concrete implementation of `TokenHome` and `INativeTokenTransferrer` that handles the locking and releasing of the native EVM asset.

## `TokenRemote`

An abstract implementation of `ITokenTransferrer` for a token transfer contract on a "remote" chain that receives transferred assets from a specific `TokenHome` instance. Each `TokenRemote` instance has a single `TokenHome` instance that it receives token transfers from to mint tokens. It also handles sending messages (and correspondingly burning tokens) to route tokens back to other chains (either its `TokenHome`, or other `TokenRemote` instances).

Once deployed, a `TokenRemote` instance must be registered with its specified `TokenHome` contract. This is done by calling `registerWithHome` on the remote contract, which will send an ICM message to the home contract with the information to register.

All messages sent by `TokenRemote` instances are sent to the specified `TokenHome` contract, whether they are to redeem the collateral from the `TokenHome` instance or route the tokens to another `TokenRemote` instance. Routing tokens from one `TokenRemote` instance to another is referred to as a "multi-hop", where the tokens are first sent back to their `TokenHome` contract to update its accounting, and then automatically routed on to their intended destination `TokenRemote` instance.
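The home-side effect of a multi-hop can be sketched as a pure balance move: collateral stays locked on the home, and only the per-remote accounting changes. This is an illustrative sketch; `multiHop` and the `balances` map are invented names, not contract storage.

```go
package main

import "fmt"

// multiHop models the TokenHome accounting update for a multi-hop transfer
// from remote "from" to remote "to": the total locked collateral is
// unchanged, only the per-remote transferred balances move.
func multiHop(balances map[string]uint64, from, to string, amount uint64) error {
	if balances[from] < amount {
		return fmt.Errorf("remote %s only has %d transferred tokens", from, balances[from])
	}
	if _, ok := balances[to]; !ok {
		return fmt.Errorf("remote %s is not registered", to)
	}
	balances[from] -= amount
	balances[to] += amount // collateral on the home is untouched
	return nil
}

func main() {
	// 100 tokens were previously transferred from the home H to remote Ra.
	balances := map[string]uint64{"Ra": 100, "Rb": 0}

	// Multi-hop 30 tokens: Ra -> H -> Rb.
	if err := multiHop(balances, "Ra", "Rb", 30); err != nil {
		panic(err)
	}
	fmt.Println(balances["Ra"], balances["Rb"]) // 70 30
}
```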
`TokenRemote` contracts allow for scaling token amounts, which should be used when the remote asset has a higher or lower denomination than the home asset, such as allowing an ERC20 home asset with a denomination of 6 to be used as the native EVM asset on a remote chain (with a denomination of 18).

## `ERC20TokenRemote`

A concrete implementation of `TokenRemote`, `IERC20TokenTransferrer`, and `IERC20` that handles the minting and burning of an ERC20 asset. Note that the `ERC20TokenRemote` contract is an ERC20 implementation itself, which is why it takes the `tokenName`, `tokenSymbol`, and `tokenDecimals` in its constructor. All of the ERC20 interface implementations are inherited from the standard OpenZeppelin ERC20 implementation, and can be overridden in other implementations if desired.

## `NativeTokenRemote`

A concrete implementation of `TokenRemote`, `INativeTokenTransferrer`, and `IWrappedNativeToken` that handles the minting and burning of the native EVM asset on its chain using the native minter precompile. Deployments of this contract must be given the permission to mint native coins in the chain's configuration. Note that the `NativeTokenRemote` is also an implementation of `IWrappedNativeToken` itself, which is why the `nativeAssetSymbol` must be provided in its constructor. `NativeTokenRemote` instances always have a denomination of 18, which is the denomination of the native asset of EVM chains.

The [native minter precompile](https://build.avax.network/docs/virtual-machines/custom-precompiles#minting-native-coins) must be configured to allow the contract address of the `NativeTokenRemote` instance to call `mintNativeCoin`. The correctness of a native token transferrer implemented using `NativeTokenRemote` relies on no other accounts being allowed to call `mintNativeCoin`; otherwise, the token transferrer could become undercollateralized. Example initialization steps for a `NativeTokenRemote` instance are shown below.
Since the native minter precompile does not provide an interface for burning the native EVM asset, the "burn" functionality is implemented by transferring the native coins to an unowned address. The contract also provides a `reportBurnedTxFees` interface to burn the corresponding collateral in the `TokenHome` instance, making it unredeemable to account for the native tokens burned as transaction fees on the chain with the `NativeTokenRemote` instance.

To account for the need to bootstrap the chain using a transferred asset as its native token, the `NativeTokenRemote` takes the `initialReserveImbalance` in its constructor. Once registered with its `TokenHome`, the `TokenHome` will require the `initialReserveImbalance` to be accounted for before sending token amounts to be minted on the given remote chain. The following example demonstrates the intended initialization flow:

1. Create a new blockchain with 100 native tokens allocated in its genesis block, and set the pre-derived `NativeTokenRemote` contract address (based on the deployer nonce) to have the permission to mint native tokens using the native minter precompile. Note that the deployer account will need to be funded in order to deploy the `NativeTokenRemote` contract, and an account used to relay messages into this chain must also be funded to relay the first messages.
2. Deploy the `NativeTokenRemote` contract to the pre-derived address set in the blockchain configuration of step 1. The `initialReserveImbalance` should be 100, matching the number of tokens allocated in the genesis block that were not initially backed by collateral in the `TokenHome` instance.
3. Call the `registerWithHome` function on the `NativeTokenRemote` instance to send an ICM message registering this remote with its `TokenHome`. This message should be relayed and delivered to the `TokenHome` instance.
4.
Once registered on the `TokenHome` contract, add 100 tokens as collateral for the new `NativeTokenRemote` instance by calling the `addCollateral` function on the `TokenHome` contract. A `CollateralAdded` event will be emitted by the `TokenHome` contract with a `remaining` amount of 0 once the `NativeTokenRemote` is fully collateralized.
5. Now that the `NativeTokenRemote` contract is fully collateralized, tokens can be moved normally in both directions across the token transfer contracts by calling their `send` functions.

The `totalNativeAssetSupply` implementation of `NativeTokenRemote` takes into account:

- the initial reserve imbalance
- the number of native tokens that it has minted
- the number of native tokens that have been burned to pay for transaction fees
- the number of native tokens "burned" to be transferred to other chains, which are sent to a pre-defined `BURNED_FOR_TRANSFER_ADDRESS`.

Note that the value returned by `totalNativeAssetSupply` is an upper bound on the circulating supply of the native asset on the chain using the `NativeTokenRemote` instance, since tokens could be burned in other ways that it does not account for.

## ICM Message Fees

Fees can optionally be added to ICM messages in order to incentivize relayers to deliver them, as documented [here](https://github.com/ava-labs/icm-contracts/tree/main/contracts/teleporter#fees). The token transfer contracts in this repository allow for specifying any ERC20 token and amount to be used as the ICM message fee for single-hop transfers in either direction between `TokenHome` and `TokenRemote` instances. Fee amounts must be pre-approved to be spent by the token transfer contract before initiating a transfer.

Multi-hop transfers between two `TokenRemote` instances involve two ICM messages: the first from the initiating `TokenRemote` instance to its home, and the second from its home to the destination `TokenRemote` instance.
In the multi-hop case, the first message fee can be paid in any ERC20 token and amount (similar to the single-hop case), but the second message fee must be paid in kind in the asset being transferred, and is deducted from the amount being transferred. This restriction on the secondary message fee is necessary because the transaction on the intermediate chain routing the funds to the destination `TokenRemote` instance is not sent by the wallet performing the transfer. Because of this, it cannot directly spend an arbitrary ERC20 token from that wallet. Using the asset being transferred for the optional secondary fee allows users to perform an incentivized multi-hop transfer without needing to interact with the home chain themselves. If there is a need for the second message from the home to the destination `TokenRemote` instance to pay a fee in another asset, it is recommended to perform two single-hop transfers instead, which allows an arbitrary ERC20 token to be used for the fee of each.

# Consensus Protocols (/docs/nodes/architecture/consensus)

---
title: Consensus Protocols
description: Deep dive into Avalanche's Snow* family of consensus protocols including Snowball, Snowman, and Avalanche consensus.
---

Avalanche uses a novel family of consensus protocols collectively known as the **Snow* protocols**. These protocols achieve consensus through repeated random sampling, providing probabilistic safety guarantees with sub-second finality.

## Consensus Overview

Traditional consensus protocols (like PBFT) require all-to-all communication, limiting scalability.
Avalanche's approach is fundamentally different:

| Property | Traditional (PBFT) | Avalanche Snow* |
|----------|-------------------|-----------------|
| **Communication** | All-to-all (O(n²)) | Random sampling (O(k log n)) |
| **Finality** | Deterministic | Probabilistic (tunable) |
| **Scalability** | ~100 nodes | Thousands of nodes |
| **Latency** | Seconds | Sub-second |

The Snow* protocols are named after their "snowball" effect: once a preference starts forming, it quickly avalanches to a decision.

## The Snow* Protocol Family

### Snowball: Binary Consensus

Snowball is the foundational protocol for deciding between two conflicting options. A validator V repeatedly queries small random samples of peers (here S1, S2, S3):

```
V  ->> S1: Query preference?
V  ->> S2: Query preference?
V  ->> S3: Query preference?
S1 -->> V: A
S2 -->> V: A
S3 -->> V: B

Supermajority (2/3) for A -> increment confidence for A
Repeat sampling until the confidence threshold is reached -> accept A
```

**Key Parameters:**

- **k (sample size)**: Number of validators to query (default `20`)
- **αₚ (preference threshold)**: Votes needed to switch preference (default `15`)
- **α꜀ (confidence threshold)**: Votes needed to increase confidence (default `15`)
- **β (finalization threshold)**: Consecutive successful rounds (default `20`)
- **Concurrent polls**: Parallel polls while processing (default `4`)
- **Optimal processing**: Soft cap on in-flight items (default `10`)

### Snowman: Linear Chain Consensus

Snowman extends Snowball to decide on a linear sequence of blocks.
It's used by:

- **P-Chain** (Platform Chain)
- **C-Chain** (Contract Chain)
- **X-Chain** (Exchange Chain) - linearized in the Cortina upgrade (April 2023)
- **Most Avalanche L1s**

[View source on GitHub](https://github.com/ava-labs/avalanchego/blob/master/snow/consensus/snowman/consensus.go)

```go title="snow/consensus/snowman/consensus.go"
type Consensus interface {
	// Initialize with last accepted block
	Initialize(
		ctx *snow.ConsensusContext,
		params snowball.Parameters,
		lastAcceptedID ids.ID,
		lastAcceptedHeight uint64,
		lastAcceptedTime time.Time,
	) error

	// Tracking & liveness
	NumProcessing() int
	Processing(ids.ID) bool
	IsPreferred(ids.ID) bool

	// Add a new block to consensus
	Add(Block) error

	// Get the preferred blocks
	Preference() ids.ID
	PreferenceAtHeight(height uint64) (ids.ID, bool)

	// Get the last accepted block
	LastAccepted() (ids.ID, uint64)

	// Record poll results from network sampling
	RecordPoll(context.Context, bag.Bag[ids.ID]) error

	// Lightweight ancestry lookup
	GetParent(id ids.ID) (ids.ID, bool)
}
```

**Block Lifecycle in Snowman:**

```
[*] --> Processing : ParseBlock
Processing --> Processing : Verify
Processing --> Accepted : Accept
Processing --> Rejected : Reject
Accepted --> [*]
Rejected --> [*]
```

### Avalanche DAG Consensus (Historical)

The Avalanche DAG consensus engine is **no longer used on the Primary Network**. The X-Chain was linearized in the **Cortina upgrade** (April 2023 on Mainnet) and now uses Snowman consensus. The DAG engine code remains in the codebase for historical compatibility only.
Historically, Avalanche consensus operated on a Directed Acyclic Graph (DAG) of transactions, where non-conflicting transactions could be processed in parallel:

```
      ┌───┐
      │ G │  Genesis
      └─┬─┘
     ┌──┴──┐
   ┌─┴─┐ ┌─┴─┐
   │ A │ │ B │   Vertices could have
   └─┬─┘ └─┬─┘   multiple parents
     │ ╲╱  │
     │ ╱╲  │
   ┌─┴─┐ ┌─┴─┐
   │ C │ │ D │
   └───┘ └───┘
```

The linearization was implemented via the `LinearizableVMWithEngine` interface, which allows a DAG-based VM to transition to linear block production after a designated "stop vertex."

## Consensus Engine Architecture

The consensus engine sits between the VM and the network:

```
VM (blocks) --> Engine <--> Sender <--> Network
```

### Engine States

The consensus engine progresses through several states ([`snow/state.go`](https://github.com/ava-labs/avalanchego/blob/master/snow/state.go)):

```go
type State uint8

const (
	Initializing  State = iota // 0
	StateSyncing               // 1
	Bootstrapping              // 2
	NormalOp                   // 3
)
```

| State | Description |
|-------|-------------|
| **Initializing** | Initial setup before sync begins |
| **StateSyncing** | Fast catch-up using state summaries |
| **Bootstrapping** | Catching up with network state via block replay |
| **NormalOp** | Participating in consensus |

### The Snowman Engine

[View source on GitHub](https://github.com/ava-labs/avalanchego/blob/master/snow/engine/snowman/engine.go)

```go title="snow/engine/snowman/engine.go"
type Engine struct {
	Config

	// Consensus instance
	Consensus smcon.Consensus

	// VM interface
	VM block.ChainVM

	// Network communication
	Sender common.Sender

	// Block management
	pending map[ids.ID]snowman.Block
	blocked map[ids.ID][]snowman.Block
}
```

**Engine Responsibilities:**

1. **Block fetching**: Request missing blocks from peers
2. **Block verification**: Validate blocks via the VM
3. **Consensus voting**: Query peers and record votes
4. **Block finalization**: Accept or reject blocks based on consensus

## Block Processing Flow

### 1.
Receiving a Block

```go
func (e *Engine) Put(ctx context.Context, nodeID ids.NodeID, requestID uint32, blkBytes []byte) error {
	// Parse the block
	blk, err := e.VM.ParseBlock(ctx, blkBytes)
	if err != nil {
		return err
	}

	// Verify ancestry exists
	if !e.hasAncestry(blk) {
		// Request missing ancestors
		e.requestAncestors(blk.Parent())
		return nil
	}

	// Issue to consensus
	return e.issue(ctx, blk)
}
```

### 2. Issuing to Consensus

```go
func (e *Engine) issue(ctx context.Context, blk snowman.Block) error {
	// Verify the block
	if err := blk.Verify(ctx); err != nil {
		return err
	}

	// Add to consensus
	if err := e.Consensus.Add(blk); err != nil {
		return err
	}

	// Start voting
	e.sendQuery(ctx, blk.ID())
	return nil
}
```

### 3. Recording Votes

```go
func (e *Engine) Chits(ctx context.Context, nodeID ids.NodeID, requestID uint32, preferredID ids.ID, ...) error {
	// Collect votes in a bag
	votes := bag.Of(preferredID)

	// When enough votes collected, record the poll
	if e.polls.Finished() {
		return e.Consensus.RecordPoll(ctx, e.polls.Result())
	}
	return nil
}
```

## Snowman++ (ProposerVM)

Snowman++ adds **soft proposer windows** on top of Snowman to pace block production. It is implemented by wrapping a ChainVM in the [ProposerVM](https://github.com/ava-labs/avalanchego/tree/master/vms/proposervm) and is enabled on the P-Chain and C-Chain.

```go title="vms/proposervm/vm.go"
// ProposerVM wraps a ChainVM to add proposer selection
type VM struct {
	inner block.ChainVM

	// Proposer selection
	windower Windower
}
```

**How it works:**

1. Validators are sampled (by stake) to form a proposer list for the next block.
2. Each proposer gets a 5s window; up to 6 windows are scheduled from the parent timestamp.
3. Within their window, only the designated proposer can build a valid block.
4. After the final window, any validator may propose, which preserves liveness if proposers are offline.

**Benefits:**

- **Predictable pacing**: Prevents multiple validators from racing the same height.
- **Stake-weighted fairness**: Windows are derived from the subnet validator set.
- **Graceful fallback**: Production opens to everyone after the final window.

## Consensus Parameters

Consensus parameters live in [`snow/consensus/snowball/parameters.go`](https://github.com/ava-labs/avalanchego/blob/master/snow/consensus/snowball/parameters.go):

```go
type Parameters struct {
	// Sample size for each poll
	K int `json:"k"`

	// Switch preference threshold
	AlphaPreference int `json:"alphaPreference"`

	// Increase confidence threshold
	AlphaConfidence int `json:"alphaConfidence"`

	// Finalization threshold
	Beta int `json:"beta"`

	// Concurrent polls
	ConcurrentRepolls int `json:"concurrentRepolls"`

	// Congestion control
	OptimalProcessing     int           `json:"optimalProcessing"`
	MaxOutstandingItems   int           `json:"maxOutstandingItems"`
	MaxItemProcessingTime time.Duration `json:"maxItemProcessingTime"`
}
```

| Parameter | Default | Description |
|-----------|---------|-------------|
| **K** | 20 | Validators sampled per round |
| **AlphaPreference** | 15 | Votes needed to change preference |
| **AlphaConfidence** | 15 | Votes needed to increase confidence |
| **Beta** | 20 | Consecutive successful polls to finalize |
| **ConcurrentRepolls** | 4 | Parallel polls while processing |
| **OptimalProcessing** | 10 | Soft target for in-flight vertices/blocks |
| **MaxOutstandingItems** | 256 | Health threshold for queued items |
| **MaxItemProcessingTime** | 30s | Health threshold for a single item |

These parameters are network-wide and cannot be changed for individual nodes. Modifying them would cause consensus failures.
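To see how these parameters interact, here is a toy simulation of a single binary Snowball instance wired with the defaults above. It is an illustrative sketch, not the avalanchego implementation; `snowballSketch` and `samplePoll` are invented names.

```go
package main

import (
	"fmt"
	"math/rand"
)

// snowballSketch is a toy binary Snowball instance using the parameters
// tabulated above: k, alphaPreference, alphaConfidence, and beta.
type snowballSketch struct {
	k, alphaPref, alphaConf, beta int
	preference                    int // 0 or 1
	confidence                    int
}

// recordPoll applies one poll's vote counts: flip preference when the other
// option reaches alphaPreference votes, then grow confidence if the
// preferred option reached alphaConfidence, or reset it otherwise.
// Returns true once beta consecutive successful polls have occurred.
func (s *snowballSketch) recordPoll(votes [2]int) bool {
	other := 1 - s.preference
	if votes[other] >= s.alphaPref {
		s.preference = other
		s.confidence = 0
	}
	if votes[s.preference] >= s.alphaConf {
		s.confidence++
	} else {
		s.confidence = 0
	}
	return s.confidence >= s.beta
}

// samplePoll simulates querying k validators, ~90% of which prefer option 0.
func samplePoll(k int) (votes [2]int) {
	for i := 0; i < k; i++ {
		if rand.Intn(10) < 9 {
			votes[0]++
		} else {
			votes[1]++
		}
	}
	return
}

func main() {
	s := &snowballSketch{k: 20, alphaPref: 15, alphaConf: 15, beta: 20}
	polls := 0
	for !s.recordPoll(samplePoll(s.k)) {
		polls++
	}
	fmt.Printf("finalized preference %d after %d polls\n", s.preference, polls+1)
}
```

Note how an unsuccessful poll (fewer than α꜀ votes for the preferred option) resets the confidence counter, which is why finalization requires β *consecutive* successful polls.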
## Security Properties

### Probabilistic Safety

The probability of a safety violation (accepting conflicting blocks) is:

$$P(\text{safety violation}) < \left(1 - \frac{\alpha_{confidence}}{k}\right)^\beta$$

With default parameters: $P < \left(1 - \frac{15}{20}\right)^{20} \approx 10^{-12}$

### Liveness

Avalanche guarantees liveness as long as:

- More than `α/k` of the sampled stake is honest (75% with default parameters)
- The network is eventually synchronous

## Next Steps

- Learn how VMs implement block building and verification
- Understand how consensus messages are transmitted

# Core Components (/docs/nodes/architecture/core-components)

---
title: Core Components
description: Deep dive into AvalancheGo's package structure, startup flow, and how components interact.
---

This page provides a detailed overview of AvalancheGo's internal architecture, including the main packages, startup sequence, and how components communicate.

## Package Structure

AvalancheGo is organized into well-defined packages, each responsible for specific functionality (top-level folders only):

```
avalanchego/
├── main/      # CLI entry point
├── app/       # Process lifecycle (signals, shutdown)
├── config/    # Flags/env/config parsing
├── node/      # Node wiring and initialization
├── chains/    # Chain manager and handlers
├── snow/      # Consensus protocols and engines
├── vms/       # Built-in VMs, proposerVM, rpcchainvm
├── network/   # P2P stack
├── message/   # Message codecs
├── database/  # LevelDB/Pebble/memdb backends
├── graft/     # Grafted Coreth (C-Chain EVM)
├── subnets/   # Subnet configs and validator utilities
├── staking/   # TLS/BLS staking keys and POP
├── upgrade/   # Network upgrade rules
├── trace/     # OpenTelemetry tracing helpers
├── utils/     # Common utilities
└── genesis/   # Genesis configuration and samples
```

## Startup Flow

When you run AvalancheGo, configuration is parsed into a `NodeConfig`, `app.New(nodeConfig)` constructs the application, and the node initializes its components in order, beginning with the database.
Networking, the API server, VM registration, and the chain manager follow; finally, `app.Run()` drives the node's event loop (`Dispatch`).

### 1. Configuration Parsing

[View source on GitHub](https://github.com/ava-labs/avalanchego/blob/master/main/main.go)

```go title="main/main.go"
func main() {
	evm.RegisterAllLibEVMExtras()

	// Build configuration from flags/env/config file
	fs := config.BuildFlagSet()
	v, err := config.BuildViper(fs, os.Args[1:])
	if errors.Is(err, pflag.ErrHelp) {
		os.Exit(0)
	}

	if v.GetBool(config.VersionJSONKey) && v.GetBool(config.VersionKey) {
		fmt.Println("can't print both JSON and human readable versions")
		os.Exit(1)
	}
	if v.GetBool(config.VersionJSONKey) {
		versions := version.GetVersions()
		jsonBytes, err := json.MarshalIndent(versions, "", "  ")
		if err != nil {
			fmt.Printf("couldn't marshal versions: %s\n", err)
			os.Exit(1)
		}
		fmt.Println(string(jsonBytes))
		os.Exit(0)
	}
	if v.GetBool(config.VersionKey) {
		fmt.Println(version.GetVersions().String())
		os.Exit(0)
	}

	nodeConfig, err := config.GetNodeConfig(v)
	if err != nil {
		fmt.Printf("couldn't load node config: %s\n", err)
		os.Exit(1)
	}

	if term.IsTerminal(int(os.Stdout.Fd())) {
		fmt.Println(app.Header)
	}

	nodeApp, err := app.New(nodeConfig)
	if err != nil {
		fmt.Printf("couldn't start node: %s\n", err)
		os.Exit(1)
	}

	exitCode := app.Run(nodeApp)
	os.Exit(exitCode)
}
```

The configuration system supports:

- **Command-line flags**: `--network-id=fuji`, `--http-port=9650`
- **Config file**: Pass `--config-file=/path/to/file`. The installer writes `~/.avalanchego/configs/node.json`; source builds do not create a default file.
- **Environment variables**: Prefixed with `AVAGO_`

### 2.
Node Initialization

The `Node` struct in [`node/node.go`](https://github.com/ava-labs/avalanchego/blob/master/node/node.go) orchestrates all components:

```go title="node/node.go"
type Node struct {
	Log    logging.Logger
	ID     ids.NodeID
	Config *node.Config

	// Networking & routing
	Net         network.Network
	chainRouter router.Router
	msgCreator  message.Creator

	// Storage & shared state
	DB           database.Database
	sharedMemory *atomic.Memory

	// VM/chain orchestration
	VMAliaser    ids.Aliaser
	VMManager    vms.Manager
	VMRegistry   registry.VMRegistry
	chainManager chains.Manager

	// APIs and services
	APIServer       server.Server
	health          health.Health
	resourceManager resource.Manager
}
```

### 3. Component Initialization Order

Components are initialized in a specific order to satisfy dependencies:

| Order | Component | Purpose |
|-------|-----------|---------|
| 1 | **Identity & logging** | Staking certs/POP, VM aliases, log factories |
| 2 | **Metrics** | Prometheus registries + `/ext/metrics` |
| 3 | **APIs** | HTTP server + metrics API (health/info/admin added later) |
| 4 | **Database & shared memory** | Open LevelDB/PebbleDB/memdb and atomic memory |
| 5 | **Message codec** | `message.Creator` shared by network/engines |
| 6 | **Validators & resources** | Validator manager, CPU/disk targeters, resource manager |
| 7 | **Networking** | Listener, NAT/port mapping, throttlers, IP updater |
| 8 | **Health & aliases** | Health API, default VM/API/chain aliases |
| 9 | **Chain manager & VM registry** | Chain manager, register PlatformVM/AVM/EVM + plugins |
| 10 | **Indexer & profiler** | Optional index API and continuous profiler |
| 11 | **Chains** | Start PlatformVM, then other chains/bootstrap |

## The Node Struct

The `Node` struct is the central coordinator.
Here are its key responsibilities:

### VM Management

```go
// Register built-in VMs
n.VMManager.RegisterFactory(ctx, constants.PlatformVMID, &platformvm.Factory{})
n.VMManager.RegisterFactory(ctx, constants.AVMID, &avm.Factory{})
n.VMManager.RegisterFactory(ctx, constants.EVMID, &coreth.Factory{})
```

### Chain Creation

When a new chain needs to be created (e.g., during P-Chain bootstrap):

```go
type ChainParameters struct {
	ID          ids.ID   // Chain ID
	SubnetID    ids.ID   // Subnet that validates this chain
	GenesisData []byte   // Genesis state
	VMID        ids.ID   // Which VM to run
	FxIDs       []ids.ID // Feature extensions

	CustomBeacons validators.Manager // Optional: bootstrap peers for P-Chain
}
```

### API Registration

Each VM can register its own API endpoints:

```go
// VM implements CreateHandlers
func (vm *VM) CreateHandlers(ctx context.Context) (map[string]http.Handler, error) {
	return map[string]http.Handler{
		"/rpc": vm.rpcHandler,
		"/ws":  vm.wsHandler,
	}, nil
}
```

## Chain Manager

The Chain Manager ([`chains/manager.go`](https://github.com/ava-labs/avalanchego/blob/master/chains/manager.go)) is responsible for:

1. **Creating chains** when requested by the P-Chain
2. **Managing chain lifecycle** (start, stop, restart)
3. **Handling bootstrapping** and state sync
4.
**Routing messages** between chains and the network

```go title="chains/manager.go"
type Manager interface {
	// Queue a chain to be created after P-Chain bootstraps
	QueueChainCreation(ChainParameters)

	// Check if a chain has finished bootstrapping
	IsBootstrapped(ids.ID) bool

	// Resolve chain aliases
	Lookup(string) (ids.ID, error)

	// Start the chain creation process
	StartChainCreator(platformChain ChainParameters) error
}
```

### Chain Bootstrapping

When a chain starts, it progresses through several states to catch up with the network:

```
[*] --> Initializing
Initializing --> StateSyncing : State sync enabled
Initializing --> Bootstrapping : State sync disabled
StateSyncing --> Bootstrapping : State summaries applied
Bootstrapping --> NormalOp : Bootstrap complete
NormalOp --> [*]
```

## Database Layer

AvalancheGo supports multiple database backends:

### Database Backends

| Backend | Description |
|---------|-------------|
| **LevelDB** | Default, widely tested |
| **PebbleDB** | Modern alternative, better performance |
| **memdb** | In-memory (non-persistent), useful for fast testing |

### Database Organization

Data is organized using prefix databases:

```go
// Each component gets its own namespace
vmDB := prefixdb.New(VMDBPrefix, db)
chainDB := prefixdb.New(chainID[:], vmDB)
```

This allows:

- **Isolation**: Each VM and chain has isolated storage
- **Metrics**: Per-database metrics via `meterdb`
- **Cleanup**: Easy removal of chain data

## Message Flow

Here's how a transaction flows through the system:

```
Client ->> API: Submit transaction
API ->> VM: IssueTx(tx)
VM ->> VM: Validate tx
VM ->> Engine: Notify pending txs
Engine ->> VM: BuildBlock()
VM -->> Engine: New block
Engine ->> Network: Broadcast block
Network ->> Engine: Receive votes
Engine ->> VM: Accept/Reject block
VM -->> API: Confirmation
API -->> Client: Response
```

## Health Checks

AvalancheGo exposes health checks at `/ext/health`:

```go
type Checker interface {
	// HealthCheck returns nil if healthy
	HealthCheck(context.Context) (interface{},
error)
}
```

Components that implement health checks:

- **Network**: Peer connectivity
- **Chains**: Bootstrap status
- **Database**: I/O health
- **Consensus**: Liveness

## Metrics

Prometheus metrics are exposed at `/ext/metrics`:

```go
// Example metrics namespaces
const (
	networkNamespace = "avalanche_network"
	dbNamespace      = "avalanche_db"
	consensusNS      = "avalanche_snowman"
)
```

Key metrics include:

- `avalanche_network_peers`: Connected peer count
- `avalanche_db_*`: Database operations
- `avalanche_snowman_*`: Consensus metrics
- `avalanche_api_*`: API request metrics

## Next Steps

- Learn how Snowman and Avalanche consensus work
- Understand VM architecture and interfaces

# AvalancheGo Architecture (/docs/nodes/architecture)

---
title: AvalancheGo Architecture
description: Understand the internal architecture and components of AvalancheGo, the official Avalanche node implementation.
---

AvalancheGo is the official Go implementation of an Avalanche node. It powers the Primary Network (P/C/X) and any Avalanche L1s you launch, delivering high throughput and sub-second probabilistic finality.

**Source Code**: [github.com/ava-labs/avalanchego](https://github.com/ava-labs/avalanchego)

## What is AvalancheGo?

AvalancheGo is a full-node implementation that:

- **Validates transactions** across the Primary Network (P-Chain, C-Chain, X-Chain)
- **Participates in consensus** using Avalanche's Snow* family of protocols
- **Serves API requests** for wallets, dApps, and other clients
- **Supports Avalanche L1s** (blockchains validated by their own subnets) for custom networks

AvalancheGo is written in Go and is designed to be modular, allowing developers to build custom Virtual Machines (VMs) that define their own blockchain logic.

## Architecture at a Glance

- **Networking**: Custom P2P stack with mutual TLS (staking certs), throttling, peer scoring, and subnet-aware gossip.
- **Consensus engines**: Snowman/Snowman++ for all Primary Network chains (P/C/X post-Cortina).
The legacy Avalanche DAG engine exists but is unused.
- **VMs**: PlatformVM (P-Chain), Coreth (C-Chain), AVM (X-Chain), plus pluggable/`rpcchainvm` VMs for custom L1s.
- **Chain manager**: Boots P/C/X, creates new chains on request, routes consensus messages.
- **APIs**: HTTP/WS via `/ext/*`, with health/metrics, admin/info, and per-chain RPCs.
- **Storage**: LevelDB (default), PebbleDB, or [Firewood](/docs/nodes/architecture/execution/firewood) (experimental), shared atomic UTXO memory for cross-chain transfers.
- **Execution**: [Streaming Asynchronous Execution](/docs/nodes/architecture/execution/streaming-async-execution) (experimental) decouples consensus from execution for higher throughput.

## Core Components

| Component | Description |
|-----------|-------------|
| **Network Layer** | P2P networking for peer discovery, message routing, and validator communication |
| **Chain Manager** | Orchestrates blockchain lifecycle, bootstrapping, and state synchronization |
| **Consensus Engines** | Snowman/Snowman++ for all Primary Network chains and most L1s |
| **Virtual Machines** | PlatformVM, Coreth, AVM, and custom VMs (native Go or `rpcchainvm`) |
| **API Server** | HTTP/HTTPS endpoints for interacting with the node |
| **Database** | Persistent storage using LevelDB (default) or PebbleDB; shared atomic memory |

## Primary Network Chains

AvalancheGo validates three chains on the Primary Network:

- **P-Chain**: Manages validators, staking, subnets, and chain creation. Uses PlatformVM (Snowman++).
- **C-Chain**: EVM-compatible chain for smart contracts. Uses Coreth (grafted go-ethereum) with Snowman++.
- **X-Chain**: High-throughput asset transfers using UTXO model. Uses AVM with Snowman (linearized in Cortina).
## Key Design Principles ### Modularity AvalancheGo separates concerns into distinct layers: - **Consensus** is decoupled from application logic - **VMs** are pluggable and can be developed independently - **Networking** is abstracted from chain-specific operations ### Extensibility - Custom VMs can be loaded as plugins (native) or via `rpcchainvm` (any language) - Avalanche L1s can run any VM that implements the required interface - Chain configurations and upgrades can be customized per-network/chain ### Performance - Sub-second finality through probabilistic consensus - Parallel transaction processing across independent chains - State sync and Snowman++ proposer windows to reduce contention and bootstrap faster ## Next Steps Deep dive into AvalancheGo's package structure and component interactions Learn how Snowman and Avalanche consensus work under the hood Understand how VMs define blockchain behavior and how to build custom VMs Explore the P2P protocol and peer management system Explore streaming async execution and the Firewood database # Networking Layer (/docs/nodes/architecture/networking) --- title: Networking Layer description: Understanding AvalancheGo's P2P networking, peer management, and message protocols. --- AvalancheGo uses a custom peer-to-peer (P2P) networking layer designed for high-throughput consensus messaging. This page covers the network architecture, peer management, and message protocols. 
## Network Overview

At a high level, the peer manager and network manager exchange peer state, the network manager hands inbound messages to the router, and the router dispatches them to the P-Chain, C-Chain, and X-Chain handlers.

## Network Interface

The core network interface ([`network/network.go`](https://github.com/ava-labs/avalanchego/blob/master/network/network.go)) handles all P2P operations:

```go title="network/network.go"
type Network interface {
	// Message sending (from consensus)
	sender.ExternalSender

	// Health monitoring
	health.Checker

	// Peer management
	peer.Network

	// Lifecycle
	StartClose()
	Dispatch() error

	// Manual peer tracking
	ManuallyTrack(nodeID ids.NodeID, ip netip.AddrPort)

	// Peer information
	PeerInfo(nodeIDs []ids.NodeID) []peer.Info

	// Uptime tracking
	NodeUptime() (UptimeResult, error)
}
```

## Peer Discovery

AvalancheGo discovers peers through multiple mechanisms:

### Bootstrap Nodes

Initial connections to known bootstrap nodes are configured per-network (sampled from genesis) and can be overridden with `--bootstrap-ips`/`--bootstrap-ids`. See `config/config.go:getBootstrapConfig`.

### Peer Exchange

Nodes share known peers with each other: after node A connects to node B, A sends a `PeerList` request and B replies with a list of other known nodes (C, D, ...), which A can then dial.
### IP Tracking

The network maintains IP information for reconnection:

```go title="network/ip_tracker.go"
type ipTracker struct {
	// Known peer IPs
	mostRecentTrackedIPs map[ids.NodeID]*ips.ClaimedIPPort

	// Bloom filter for efficient gossip
	bloom *bloom.ReadFilter
}
```

## Connection Lifecycle

### Establishing Connections

A connection is established in stages: a TCP connect, a mutual TLS upgrade using the staking certificate, a handshake exchange (network ID, proof of possession, tracked subnets) answered with a handshake plus `PeerList`, and an optional `PeerList` pull. The connection is then established.

### TLS Authentication

All connections use mutual TLS with staking certificates:

```go
// Each node has a staking keypair
type Node struct {
	StakingTLSSigner crypto.Signer
	StakingTLSCert   *staking.Certificate
	ID               ids.NodeID // Derived from certificate
}
```

**Node ID Derivation:**

```go
// NodeID is derived from the staking certificate
nodeID := ids.NodeIDFromCert(stakingCert)
```

### Peer States

A peer moves from `Connecting` (dialing) to `Upgrading` (TCP connected) to `Handshaking` (TLS complete) to `Connected` (handshake and `PeerList` exchanged). On error or timeout it becomes `Disconnected`, from which it either reconnects or is manually removed.

## Message Protocol ### Message Types Messages are defined using Protocol Buffers ([`proto/p2p/p2p.proto`](https://github.com/ava-labs/avalanchego/blob/master/proto/p2p/p2p.proto)): ```protobuf title="proto/p2p/p2p.proto" message Message { reserved 1; oneof message { // Optional compression for supported message types bytes compressed_zstd = 2; // Handshake & peering Ping ping = 11; Pong pong = 12; Handshake handshake = 13; GetPeerList get_peer_list = 35; PeerList peer_list = 14; // State sync GetStateSummaryFrontier get_state_summary_frontier = 15; StateSummaryFrontier state_summary_frontier = 16; GetAcceptedStateSummary get_accepted_state_summary = 17; AcceptedStateSummary accepted_state_summary = 18; // Bootstrapping GetAcceptedFrontier get_accepted_frontier
= 19; AcceptedFrontier accepted_frontier = 20; GetAccepted get_accepted = 21; Accepted accepted = 22; GetAncestors get_ancestors = 23; Ancestors ancestors = 24; // Consensus Get get = 25; Put put = 26; PushQuery push_query = 27; PullQuery pull_query = 28; Chits chits = 29; // Application-level AppRequest app_request = 30; AppResponse app_response = 31; AppGossip app_gossip = 32; AppError app_error = 34; // Streaming Simplex simplex = 36; } } ``` ### Consensus Messages | Message | Purpose | |---------|---------| | `PushQuery` | Send block and request vote | | `PullQuery` | Request vote without sending block | | `Chits` | Vote response with preferences | | `Get` | Request a specific block | | `Put` | Send a requested block | | `GetAcceptedFrontier` / `AcceptedFrontier` | Bootstrap frontier exchange | | `GetAccepted` / `Accepted` | Request/return accepted containers for heights | | `GetAncestors` / `Ancestors` | Fetch a container and its ancestors | | `GetStateSummaryFrontier` / `StateSummaryFrontier` | State sync frontier | | `GetAcceptedStateSummary` / `AcceptedStateSummary` | State summaries at heights | ### Application Messages VMs send custom messages through the `common.AppSender` provided at initialization: ```go type AppSender interface { SendAppRequest(ctx context.Context, nodeIDs set.Set[ids.NodeID], requestID uint32, appRequestBytes []byte) error SendAppResponse(ctx context.Context, nodeID ids.NodeID, requestID uint32, appResponseBytes []byte) error SendAppError(ctx context.Context, nodeID ids.NodeID, requestID uint32, errorCode int32, errorMessage string) error SendAppGossip(ctx context.Context, config common.SendConfig, appGossipBytes []byte) error } ``` Inbound app traffic is delivered to the VM's `AppRequest`/`AppResponse`/`AppGossip` handlers via `common.AppHandler`. 
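Outstanding `AppRequest`s and their `AppResponse`s are matched up by node ID and request ID. A toy sketch of that bookkeeping (illustrative types, not the engine's actual implementation):

```go
package main

import (
	"errors"
	"fmt"
)

type reqKey struct {
	nodeID    string
	requestID uint32
}

// tracker matches inbound AppResponse messages to outstanding requests.
type tracker struct {
	pending map[reqKey][]byte
}

func (t *tracker) sendRequest(nodeID string, requestID uint32) {
	// Record the outstanding request before sending it.
	t.pending[reqKey{nodeID, requestID}] = nil
}

func (t *tracker) onResponse(nodeID string, requestID uint32, body []byte) error {
	k := reqKey{nodeID, requestID}
	if _, ok := t.pending[k]; !ok {
		return errors.New("unexpected response") // no matching request
	}
	delete(t.pending, k) // each request is answered at most once
	return nil
}

func main() {
	t := &tracker{pending: map[reqKey][]byte{}}
	t.sendRequest("NodeID-A", 7)
	fmt.Println(t.onResponse("NodeID-A", 7, []byte("ok"))) // <nil>
	fmt.Println(t.onResponse("NodeID-A", 7, []byte("ok"))) // unexpected response
}
```

Timeouts and `AppError` handling sit on top of the same keying scheme.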
## Message Routing The router ([`snow/networking/router`](https://github.com/ava-labs/avalanchego/tree/master/snow/networking/router)) directs messages to appropriate handlers: ```go title="snow/networking/router/router.go" type Router interface { // Route messages to chains HandleInbound(ctx context.Context, msg message.InboundMessage) // Register chain handlers AddChain(ctx context.Context, chain handler.Handler) error // Health checking health.Checker } ``` ### Chain Message Handler ```go title="snow/networking/handler/handler.go" type Handler interface { // Consensus messages HandleMsg(ctx context.Context, msg Message) error // Lifecycle Start(ctx context.Context, recoverPanic bool) Stop(ctx context.Context) } ``` ## Throttling & Rate Limiting ### Inbound Throttling Protects nodes from message floods ([`network/throttling`](https://github.com/ava-labs/avalanchego/tree/master/network/throttling)): ```go title="network/throttling/inbound_msg_throttler.go" // ReleaseFunc releases the resources reserved for a message type ReleaseFunc func() type InboundMsgThrottler interface { // Blocks until the message may be processed; call the returned // ReleaseFunc to release the acquired resources Acquire(msg message.InboundMessage, nodeID ids.NodeID) ReleaseFunc } ``` **Throttling Dimensions:** - **Bandwidth**: Bytes per second per peer - **Message count**: Messages per second - **CPU time**: Processing time limits ### Outbound Throttling Prevents overwhelming peers: ```go title="network/throttling/outbound_msg_throttler.go" type OutboundMsgThrottler interface { // Acquire permission to send Acquire(msg message.OutboundMessage, nodeID ids.NodeID) ReleaseFunc } ``` ### Connection Throttling Limits connection attempts: ```go type InboundConnUpgradeThrottler interface { // Check if connection upgrade should proceed ShouldUpgrade(ip netip.AddrPort) bool } ``` ## Peer Scoring Nodes track peer behavior for connection prioritization: ```go type PeerInfo struct { IP netip.AddrPort PublicIP netip.AddrPort ID ids.NodeID Version string LastSent time.Time LastReceived time.Time ObservedUptime
uint32 TrackedSubnets []ids.ID } ``` ### Benchlisting Misbehaving peers are temporarily blacklisted ([`snow/networking/benchlist`](https://github.com/ava-labs/avalanchego/tree/master/snow/networking/benchlist)): ```go title="snow/networking/benchlist/benchlist.go" type Benchlist interface { // Add peer to benchlist RegisterFailure(validatorID ids.NodeID) // Check if peer is benched IsBenched(validatorID ids.NodeID) bool } ``` **Benchlist Triggers:** - Repeated query timeouts - Invalid message responses - Resource exhaustion ## Health Monitoring Network health is exposed via the health API: ```go type UptimeResult struct { // Percent of stake that sees us as meeting uptime requirement RewardingStakePercentage float64 // Weighted average of observed uptimes WeightedAveragePercentage float64 } ``` ### Health Metrics ```go const ( ConnectedPeersKey = "connectedPeers" TimeSinceLastMsgReceivedKey = "timeSinceLastMsgReceived" TimeSinceLastMsgSentKey = "timeSinceLastMsgSent" SendFailRateKey = "sendFailRate" ) ``` **Example Health Response:** ```json { "healthy": true, "checks": { "network": { "message": { "connectedPeers": 45, "timeSinceLastMsgReceived": "1.2s", "timeSinceLastMsgSent": "0.8s", "sendFailRate": 0.001 } } } } ``` ## Network Configuration Key configuration options: ```go title="config/config.go" type NetworkConfig struct { // Connection limits MaxInboundConnections int MaxOutboundConnections int // Timeouts PingFrequency time.Duration PongTimeout time.Duration ReadHandshake time.Duration // Throttling InboundThrottlerAtLargeAllocSize uint64 InboundThrottlerVdrAllocSize uint64 OutboundThrottlerAtLargeAllocSize uint64 // Peer management PeerListGossipFrequency time.Duration PeerListPullGossipFreq time.Duration } ``` ### Command Line Flags ```bash # Connection settings --network-max-reconnect-delay=1m --network-initial-reconnect-delay=1s --network-peer-list-gossip-frequency=1m # Throttling --network-inbound-throttler-at-large-alloc-size=6291456 
--network-outbound-throttler-at-large-alloc-size=6291456 ``` ## Subnet Networking Validators can participate in multiple subnets: ```go type SubnetTracker interface { // Track additional subnet TrackedSubnets() set.Set[ids.ID] } ``` **Subnet-Specific Peers:** - Handshake carries tracked subnet IDs (and `all_subnets` flag) - Nodes connect preferentially to same-subnet validators - Gossip is scoped to relevant subnets - Cross-subnet communication uses explicit routing ## Debugging Network Issues ### Common Issues | Symptom | Possible Cause | Solution | |---------|---------------|----------| | No peers | Firewall blocking 9651 | Open port 9651 | | High latency | Geographic distance | Add regional bootstrappers | | Disconnections | Rate limiting | Increase throttle limits | | Benchlisted peers | Peer misbehavior | Check peer logs | ### Useful Endpoints ```bash # Get connected peers curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"info.peers" }' -H 'content-type:application/json;' localhost:9650/ext/info # Get network health curl localhost:9650/ext/health ``` ## Next Steps Set up and configure your own AvalancheGo node Full reference for network configuration options # Virtual Machines (/docs/nodes/architecture/virtual-machines) --- title: Virtual Machines description: Understand how Virtual Machines (VMs) define blockchain behavior in AvalancheGo, including the VM interface and built-in VMs. --- A Virtual Machine (VM) defines the application-level logic of a blockchain. In AvalancheGo, VMs are decoupled from consensus, allowing developers to create custom blockchain behavior while reusing the battle-tested consensus layer. ## What is a Virtual Machine?
Think of a VM as the "personality" of a blockchain: | Aspect | What the VM Defines | |--------|---------------------| | **State** | What data the blockchain stores | | **Transactions** | Valid operations and their effects | | **Blocks** | How transactions are packaged | | **APIs** | How users interact with the chain | | **Validation** | Rules for accepting blocks | VMs are reusable. Multiple blockchains can run the same VM, each with independent state. This is similar to how a class can have multiple instances in object-oriented programming. ## VM Architecture The consensus engine drives the VM interface (BuildBlock, Verify, Accept). Behind that interface, the VM wires together a block manager, a transaction pool, and API handlers: the block manager applies blocks to state persisted in the database, while users reach the chain over RPC and submit transactions into the pool. ## Core VM Interface Every VM must implement the [`ChainVM`](https://github.com/ava-labs/avalanchego/blob/master/snow/engine/snowman/block/vm.go) interface for linear chain consensus: ```go title="snow/engine/snowman/block/vm.go" type ChainVM interface { common.VM // Block building BuildBlock(ctx context.Context) (snowman.Block, error) // Block retrieval GetBlock(ctx context.Context, blkID ids.ID) (snowman.Block, error) ParseBlock(ctx context.Context, blockBytes []byte) (snowman.Block, error) // Consensus integration SetPreference(ctx context.Context, blkID ids.ID) error LastAccepted(ctx context.Context) (ids.ID, error) // Optional: height-indexed access GetBlockIDAtHeight(ctx context.Context, height uint64) (ids.ID, error) } ``` ### Base VM Interface The foundation interface that all VMs implement ([source](https://github.com/ava-labs/avalanchego/blob/master/snow/engine/common/vm.go)): ```go title="snow/engine/common/vm.go" type VM interface { common.AppHandler // AppRequest/AppResponse/AppGossip hooks health.Checker // Exposed via /ext/health validators.Connector // Called on peer connect/disconnect // Lifecycle Initialize(ctx context.Context, chainCtx *snow.Context, db database.Database, genesisBytes []byte, upgradeBytes []byte, configBytes []byte, fxs []*common.Fx,
appSender common.AppSender) error SetState(ctx context.Context, state snow.State) error Shutdown(ctx context.Context) error // Info Version(ctx context.Context) (string, error) // APIs CreateHandlers(ctx context.Context) (map[string]http.Handler, error) NewHTTPHandler(ctx context.Context) (http.Handler, error) // Engine notifications (PendingTxs, etc.) WaitForEvent(ctx context.Context) (common.Message, error) } ``` ## Block Interface Blocks are the fundamental unit of consensus ([source](https://github.com/ava-labs/avalanchego/blob/master/snow/consensus/snowman/block.go)): ```go title="snow/consensus/snowman/block.go" type Block interface { // ID(), Accept(), Reject(), Status() come from snow.Decidable snow.Decidable // Identity Parent() ids.ID Height() uint64 Timestamp() time.Time Bytes() []byte // Validation Verify(context.Context) error } ``` ### Block Lifecycle A parsed block starts as **Received**, waits as **Pending** until its ancestors arrive, and becomes **Processing** once those ancestors are verified. `Verify` then moves it to **Verified** or marks it **Invalid** (discarded); consensus finally marks a verified block **Accepted** (committed to state) or **Rejected** (discarded). ### Block Status ```go type Status int const ( Unknown Status = iota Processing Rejected Accepted ) ``` ## Built-in Virtual Machines ### Platform VM (P-Chain) The Platform VM ([`vms/platformvm`](https://github.com/ava-labs/avalanchego/tree/master/vms/platformvm)) manages the Avalanche network itself: ```go title="vms/platformvm/vm.go" type VM struct { // State management state state.State atomicUTXOs atomic.SharedMemory // Block building Builder blockbuilder.Builder Network network.Network // Validators Validators validators.Manager } ``` **Responsibilities:** - **Validator Management**: Add/remove validators, track stake - **Subnet Creation**: Create new validator sets - **Chain Creation**: Launch new blockchains - **Staking**: Manage delegation and rewards - **Warp
Messaging**: Sign cross-chain messages **Key Transaction Types:** | Transaction | Purpose | Era | |-------------|---------|-----| | `AddValidatorTx` | Add a Primary Network validator | Apricot | | `AddDelegatorTx` | Delegate stake to a validator | Apricot | | `CreateSubnetTx` | Create a new subnet | Apricot | | `CreateChainTx` | Launch a blockchain on a subnet | Apricot | | `ImportTx` / `ExportTx` | Cross-chain asset transfers | Apricot | | `AddPermissionlessValidatorTx` | Add validator to permissionless subnet | Banff | | `AddPermissionlessDelegatorTx` | Delegate to permissionless validator | Banff | | `TransformSubnetTx` | Convert subnet to permissionless | Banff | | `TransferSubnetOwnershipTx` | Transfer subnet ownership | Durango | | `ConvertSubnetToL1Tx` | Convert subnet to Avalanche L1 | Etna | | `RegisterL1ValidatorTx` | Register validator on L1 | Etna | | `SetL1ValidatorWeightTx` | Set L1 validator weight | Etna | | `IncreaseL1ValidatorBalanceTx` | Add balance to L1 validator | Etna | | `DisableL1ValidatorTx` | Disable an L1 validator | Etna | The **Etna upgrade** introduced 5 new transaction types for managing Avalanche L1s. These enable converting subnets to sovereign L1s with their own validator management. ### AVM (X-Chain) The AVM ([`vms/avm`](https://github.com/ava-labs/avalanchego/tree/master/vms/avm)) handles asset creation and transfers using the UTXO model: The X-Chain was linearized in the **Cortina upgrade** (April 2023). It now uses Snowman consensus like the P-Chain and C-Chain, rather than the legacy Avalanche DAG consensus. The AVM implements the `LinearizableVMWithEngine` interface to support this transition. 
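The UTXO model the AVM uses can be illustrated with a toy sketch (illustrative types, not the AVM's actual implementation): a transaction consumes whole inputs and creates new outputs, and the inputs must cover the outputs:

```go
package main

import (
	"errors"
	"fmt"
)

type utxo struct {
	id     string
	amount uint64
}

// apply consumes the named inputs and adds the outputs, rejecting the
// transaction if an input is missing/already spent or if the outputs
// exceed the inputs. The set is only mutated when all checks pass.
func apply(set map[string]utxo, inputIDs []string, outputs []utxo) error {
	var in, out uint64
	for _, id := range inputIDs {
		u, ok := set[id]
		if !ok {
			return errors.New("missing or already-spent input: " + id)
		}
		in += u.amount
	}
	for _, o := range outputs {
		out += o.amount
	}
	if out > in {
		return errors.New("outputs exceed inputs")
	}
	for _, id := range inputIDs {
		delete(set, id) // inputs are consumed whole
	}
	for _, o := range outputs {
		set[o.id] = o
	}
	return nil
}

func main() {
	set := map[string]utxo{"u1": {"u1", 100}}
	err := apply(set, []string{"u1"}, []utxo{{"u2", 60}, {"u3", 40}})
	fmt.Println(err, len(set)) // <nil> 2
}
```

In the real AVM, inputs and outputs additionally carry asset IDs and signature requirements supplied by the feature extensions.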
```go title="vms/avm/vm.go" type VM struct { // Asset management fxs []*Fx state state.State // UTXO handling utxoSet atomic.SharedMemory } ``` **Features:** - **Multi-Asset Support**: Native AVAX and custom assets - **UTXO Model**: Bitcoin-style transaction inputs/outputs - **Snowman Consensus**: Linear chain consensus (linearized from DAG in Cortina upgrade) - **Feature Extensions (Fxs)**: Pluggable transaction types - `secp256k1fx`: Standard signatures - `nftfx`: Non-fungible tokens - `propertyfx`: Property ownership **Transaction Types:** | Transaction | Purpose | |-------------|---------| | `CreateAssetTx` | Create a new asset | | `OperationTx` | Mint/burn assets | | `BaseTx` | Transfer assets | | `ImportTx` | Import from other chains | | `ExportTx` | Export to other chains | ### Coreth (C-Chain) Coreth is the EVM implementation for the C-Chain: Coreth is **grafted** into the AvalancheGo repository at `graft/coreth/`. The standalone [`ava-labs/coreth`](https://github.com/ava-labs/coreth) repository has been archived and is now read-only. All active development occurs within the AvalancheGo monorepo. **Features:** - Full Ethereum Virtual Machine compatibility - Supports Solidity smart contracts - Web3 JSON-RPC API (`eth`, `personal`, `txpool`, `debug` namespaces) - EIP-1559 dynamic fees - Atomic transactions with other Avalanche chains (via shared memory) - Support for Ethereum upgrades through Cancun ### ProposerVM (Snowman++) The ProposerVM ([`vms/proposervm`](https://github.com/ava-labs/avalanchego/tree/master/vms/proposervm)) wraps other VMs to add Snowman++ proposer windows: ```go title="vms/proposervm/vm.go" type VM struct { // Wrapped VM ChainVM block.ChainVM // Proposer selection windower proposer.Windower // Block timing MinBlockDelay time.Duration } ``` **How it Works:** 1. Stake-weighted proposers are sampled for each height. 2. Each proposer has a 5s slot (up to 6 slots) counted from the parent timestamp. 3. 
Only the active proposer can build during its slot; after the final slot any validator may build. 4. The wrapper enforces proposer signatures/timestamps before issuing to consensus. Snowman++ is enabled on all Primary Network chains (P-Chain, C-Chain, X-Chain) to pace block production without sacrificing liveness. ## Custom VMs You can build custom VMs that run on Avalanche L1s. There are two approaches: ### 1. Native Go VM Implement the `ChainVM` interface directly in Go: ```go type MyVM struct { db database.Database state *MyState builder *BlockBuilder pending <-chan struct{} } func (vm *MyVM) Initialize(ctx context.Context, ...) error { // Set up database and state vm.db = db vm.state = NewState(db) vm.pending = vm.builder.Subscribe() // emits when there are pending txs return nil } func (vm *MyVM) WaitForEvent(ctx context.Context) (common.Message, error) { select { case <-ctx.Done(): return 0, ctx.Err() case <-vm.pending: return common.PendingTxs, nil } } func (vm *MyVM) BuildBlock(ctx context.Context) (snowman.Block, error) { // Collect pending transactions txs := vm.builder.GetPendingTxs() // Create new block return NewBlock(vm.state.LastAccepted(), txs), nil } ``` ### 2. RPC VM (Any Language) Use the [`rpcchainvm`](https://github.com/ava-labs/avalanchego/tree/master/vms/rpcchainvm) interface to build VMs in any language: AvalancheGo launches the VM as a separate process and talks to it over gRPC; the plugin runs a gRPC server that wraps the chain logic. **Benefits:** - Write VMs in Rust, TypeScript, etc. - Process isolation - Independent deployment **Protocol:** ```protobuf // gRPC service definition service VM { rpc Initialize(InitializeRequest) returns (InitializeResponse); rpc BuildBlock(BuildBlockRequest) returns (BuildBlockResponse); rpc ParseBlock(ParseBlockRequest) returns (ParseBlockResponse); rpc GetBlock(GetBlockRequest) returns (GetBlockResponse); // ...
more methods } ``` ## VM Registration VMs are registered with the node at startup: ```go title="node/node.go" func (n *Node) initVMs() error { // Built-in VMs n.VMManager.RegisterFactory(ctx, constants.PlatformVMID, &platformvm.Factory{}) n.VMManager.RegisterFactory(ctx, constants.AVMID, &avm.Factory{}) n.VMManager.RegisterFactory(ctx, constants.EVMID, &coreth.Factory{}) // Plugin VMs are discovered from the plugins directory // (default: ~/.avalanchego/plugins/) } ``` **Plugin Discovery:** 1. VMs are placed in `~/.avalanchego/plugins/` 2. Filename is the VM ID (e.g., `srEXiWaHuhNyGwPUi444Tu47ZEDwxTWrbQiuD7FmgSAQ6X7Dy`) 3. Node discovers and loads plugins at startup ## VM Configuration Each chain can have custom VM configuration: ```json title="~/.avalanchego/configs/chains/{chainID}/config.json" { "pruning-enabled": true, "state-sync-enabled": true, "eth-apis": ["eth", "eth-filter", "net", "web3"] } ``` **Chain-Specific Configs:** - Stored in `~/.avalanchego/configs/chains/{chainID}/` - `config.json`: VM configuration - `upgrade.json`: Upgrade coordination ## Best Practices ### State Management ```go // Use versioned database for atomic commits func (vm *VM) Accept(ctx context.Context, blk *Block) error { batch := vm.db.NewBatch() defer batch.Reset() // Apply state changes for _, tx := range blk.Txs { if err := vm.state.Apply(batch, tx); err != nil { return err } } // Commit atomically return batch.Write() } ``` ### Block Building ```go // Drive block production: return PendingTxs when mempool has work func (vm *VM) WaitForEvent(ctx context.Context) (common.Message, error) { select { case <-ctx.Done(): return 0, ctx.Err() case <-vm.pending: return common.PendingTxs, nil } } ``` ### API Design ```go func (vm *VM) CreateHandlers(ctx context.Context) (map[string]http.Handler, error) { return map[string]http.Handler{ "/rpc": vm.newJSONRPCHandler(), "/ws": vm.newWebSocketHandler(), "/health": vm.newHealthHandler(), }, nil } ``` ## Next Steps Learn how VMs communicate 
over the network Start building your own Virtual Machine # Avalanche L1 Configs (/docs/nodes/configure/avalanche-l1-configs) --- title: "Avalanche L1 Configs" description: "This page describes the configuration options available for Avalanche L1s." edit_url: https://github.com/ava-labs/avalanchego/edit/master/subnets/config.md --- # Subnet Configs It is possible to provide parameters for a Subnet. Parameters here apply to all chains in the specified Subnet. AvalancheGo looks for files named `{subnetID}.json` under `--subnet-config-dir` as documented [here](https://build.avax.network/docs/nodes/configure/configs-flags#subnet-configs). Here is an example of a Subnet config file: ```json { "validatorOnly": false, "snowParameters": { "k": 25, "alpha": 18 } } ``` ## Parameters ### Private Subnet #### `validatorOnly` (bool) If `true`, this node does not expose Subnet blockchain contents to non-validators via P2P messages. Defaults to `false`. Avalanche Subnets are public by default: every node can sync and listen to a Subnet's ongoing transactions/blocks, even if it is not validating that Subnet. Subnet validators can use this configuration to choose not to publish the contents of their blockchains. If a node sets `validatorOnly` to `true`, it exchanges messages only with this Subnet's validators; other peers will not be able to learn the contents of this Subnet from this node. This is a node-specific configuration. Every validator of this Subnet has to use this configuration in order to create a fully private Subnet. #### `allowedNodes` (string list) If `validatorOnly=true`, the explicitly specified NodeIDs are allowed to sync the Subnet regardless of validator status. Defaults to empty. This is a node-specific configuration. Every validator of this Subnet has to use this configuration in order to properly allow a node into the private Subnet.
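The `validatorOnly`/`allowedNodes` rules above reduce to a simple gate on whether Subnet contents may be shared with a given peer. A minimal sketch of that decision (illustrative code, not AvalancheGo's actual implementation):

```go
package main

import "fmt"

// shouldShare reports whether this node may share Subnet contents with
// the given peer under validatorOnly/allowedNodes semantics.
func shouldShare(validatorOnly bool, peerIsValidator bool, allowedNodes map[string]bool, peerID string) bool {
	if !validatorOnly {
		return true // public Subnet: anyone may sync
	}
	// private Subnet: only validators and explicitly allowed nodes
	return peerIsValidator || allowedNodes[peerID]
}

func main() {
	allowed := map[string]bool{"NodeID-7Xhw2mDxuDS44j42TCB6U5579esbSt3Lg": true}
	fmt.Println(shouldShare(true, false, allowed, "NodeID-7Xhw2mDxuDS44j42TCB6U5579esbSt3Lg")) // true
	fmt.Println(shouldShare(true, false, allowed, "NodeID-other"))                             // false
}
```

Because the gate is evaluated per node, every validator must apply the same setting for the Subnet to be private in practice, as the text above notes.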
### Consensus Config Subnet configs support loading custom consensus parameters and even alternative consensus engines (Snowman or Simplex). JSON keys differ from their matching `CLI` keys. The snow parameters of a Subnet default to the same values used for the Primary Network, which are given in [CLI Snow Parameters](https://build.avax.network/docs/nodes/configure/configs-flags#snow-parameters). | CLI Key | JSON Key | | :-------------------------------- | :----------------------------------------- | | --snow-sample-size | `snowParameters.k` | | --snow-quorum-size | `snowParameters.alpha` | | --snow-commit-threshold | `snowParameters.beta` | | --snow-concurrent-repolls | `snowParameters.concurrentRepolls` | | --snow-optimal-processing | `snowParameters.optimalProcessing` | | --snow-max-processing | `snowParameters.maxOutstandingItems` | | --snow-max-time-processing | `snowParameters.maxItemProcessingTime` | | --snow-avalanche-batch-size | `snowParameters.batchSize` | | --snow-avalanche-num-parents | `snowParameters.parentSize` | | --simplex-max-network-delay | `simplexParameters.maxNetworkDelay` | | --simplex-max-rebroadcast-wait | `simplexParameters.maxRebroadcastWait` | ### Gossip Configs It's possible to define different Gossip configurations for each Subnet without changing the values used for the Primary Network. JSON keys of these parameters differ from their matching `CLI` keys. These parameters default to the same values used for the Primary Network. For more information see [CLI Gossip Configs](https://build.avax.network/docs/nodes/configure/configs-flags#gossiping).
| CLI Key | JSON Key | | :------------------------------------------------------ | :------------------------------------- | | --consensus-accepted-frontier-gossip-validator-size | gossipAcceptedFrontierValidatorSize | | --consensus-accepted-frontier-gossip-non-validator-size | gossipAcceptedFrontierNonValidatorSize | | --consensus-accepted-frontier-gossip-peer-size | gossipAcceptedFrontierPeerSize | | --consensus-on-accept-gossip-validator-size | gossipOnAcceptValidatorSize | | --consensus-on-accept-gossip-non-validator-size | gossipOnAcceptNonValidatorSize | | --consensus-on-accept-gossip-peer-size | gossipOnAcceptPeerSize | # AvalancheGo Config Flags (/docs/nodes/configure/configs-flags) --- title: "AvalancheGo Config Flags" description: "This page lists all available configuration options for AvalancheGo nodes." edit_url: https://github.com/ava-labs/avalanchego/edit/master/config/config.md --- # AvalancheGo Configs and Flags This document lists all available configuration options for AvalancheGo nodes. You can configure your node using either command-line flags or environment variables. > **Note:** For comparison with the previous documentation format (using individual flag headings), see the [archived version](https://gist.github.com/navillanueva/cdb9c49c411bd89a9480f05a7afbab37). ## Environment Variable Naming Convention All environment variables follow the pattern: `AVAGO_` + flag name where the flag name is converted to uppercase with hyphens replaced by underscores. 
For example: - Flag: `--api-admin-enabled` - Environment Variable: `AVAGO_API_ADMIN_ENABLED` ## Example Usage ### Using Command-Line Flags ```bash avalanchego --network-id=fuji --http-host=0.0.0.0 --log-level=debug ``` ### Using Environment Variables ```bash export AVAGO_NETWORK_ID=fuji export AVAGO_HTTP_HOST=0.0.0.0 export AVAGO_LOG_LEVEL=debug avalanchego ``` ### Using Config File Create a JSON config file: ```json { "network-id": "fuji", "http-host": "0.0.0.0", "log-level": "debug" } ``` Run with: ```bash avalanchego --config-file=/path/to/config.json ``` ## Configuration Precedence Configuration sources are applied in the following order (highest to lowest precedence): 1. Command-line flags 2. Environment variables 3. Config file 4. Default values # Configuration Options
### APIs Configuration for various APIs exposed by the node. | Flag | Env Var | Type | Default | Description | |--------|--------|------|----|--------------------| | `--api-admin-enabled` | `AVAGO_API_ADMIN_ENABLED` | bool | `false` | If set to `true`, this node will expose the Admin API. See [here](https://build.avax.network/docs/api-reference/admin-api) for more information. | | `--api-health-enabled` | `AVAGO_API_HEALTH_ENABLED` | bool | `true` | If set to `false`, this node will not expose the Health API. See [here](https://build.avax.network/docs/api-reference/health-api) for more information. | | `--index-enabled` | `AVAGO_INDEX_ENABLED` | bool | `false` | If set to `true`, this node will enable the indexer and the Index API will be available. See [here](https://build.avax.network/docs/api-reference/index-api) for more information. | | `--api-info-enabled` | `AVAGO_API_INFO_ENABLED` | bool | `true` | If set to `false`, this node will not expose the Info API. See [here](https://build.avax.network/docs/api-reference/info-api) for more information. | | `--api-metrics-enabled` | `AVAGO_API_METRICS_ENABLED` | bool | `true` | If set to `false`, this node will not expose the Metrics API. See [here](https://build.avax.network/docs/api-reference/metrics-api) for more information. | ### Avalanche Community Proposals Support for [Avalanche Community Proposals](https://github.com/avalanche-foundation/ACPs). | Flag | Env Var | Type | Default | Description | |--------|--------|------|----|--------------------| | `--acp-support` | `AVAGO_ACP_SUPPORT` | []int | `[]` | The `--acp-support` flag allows an AvalancheGo node to indicate support for a set of [Avalanche Community Proposals](https://github.com/avalanche-foundation/ACPs). | | `--acp-object` | `AVAGO_ACP_OBJECT` | []int | `[]` | The `--acp-object` flag allows an AvalancheGo node to indicate objection for a set of [Avalanche Community Proposals](https://github.com/avalanche-foundation/ACPs). 
| ### Bootstrapping Configuration for the node bootstrapping process. | Flag | Env Var | Type | Default | Description | |--------|--------|------|----|--------------------| | `--bootstrap-ancestors-max-containers-sent` | `AVAGO_BOOTSTRAP_ANCESTORS_MAX_CONTAINERS_SENT` | uint | `2000` | Max number of containers in an `Ancestors` message sent by this node. | | `--bootstrap-ancestors-max-containers-received` | `AVAGO_BOOTSTRAP_ANCESTORS_MAX_CONTAINERS_RECEIVED` | uint | `2000` | This node reads at most this many containers from an incoming `Ancestors` message. | | `--bootstrap-beacon-connection-timeout` | `AVAGO_BOOTSTRAP_BEACON_CONNECTION_TIMEOUT` | duration | `1m` | Timeout when attempting to connect to bootstrapping beacons. | | `--bootstrap-ids` | `AVAGO_BOOTSTRAP_IDS` | string | network dependent | A comma-separated list of validator IDs. These IDs will be used to authenticate bootstrapping peers. An example setting of this field would be `--bootstrap-ids="NodeID-7Xhw2mDxuDS44j42TCB6U5579esbSt3Lg,NodeID-MFrZFVCXPv5iCn6M9K6XduxGTYp891xXZ"`. The number of IDs given here must match the number of entries given in `--bootstrap-ips`. The default value depends on the network ID. | | `--bootstrap-ips` | `AVAGO_BOOTSTRAP_IPS` | string | network dependent | A comma-separated list of IP:port pairs. These IP addresses will be used to bootstrap the current Avalanche state. An example setting of this field would be `--bootstrap-ips="127.0.0.1:12345,1.2.3.4:5678"`. The number of IPs given here must match the number of entries given in `--bootstrap-ids`. The default value depends on the network ID. | | `--bootstrap-max-time-get-ancestors` | `AVAGO_BOOTSTRAP_MAX_TIME_GET_ANCESTORS` | duration | `50ms` | Max time to spend fetching a container and its ancestors when responding to a `GetAncestors` message. | | `--bootstrap-retry-enabled` | `AVAGO_BOOTSTRAP_RETRY_ENABLED` | bool | `true` | If set to `false`, the node will not retry bootstrapping if it fails.
| | `--bootstrap-retry-warn-frequency` | `AVAGO_BOOTSTRAP_RETRY_WARN_FREQUENCY` | uint | `50` | Specifies how many times bootstrap should be retried before warning the operator. | ### Chain Configuration Some blockchains allow the node operator to provide custom configurations for individual blockchains. These custom configurations are broken down into two categories: network upgrades and optional chain configurations. AvalancheGo reads in these configurations from the chain configuration directory and passes them into the VM on initialization. | Flag | Env Var | Type | Default | Description | |--------|--------|------|----|--------------------| | `--chain-config-dir` | `AVAGO_CHAIN_CONFIG_DIR` | string | `$HOME/.avalanchego/configs/chains` | Specifies the directory that contains chain configs, as described [here](https://build.avax.network/docs/nodes/chain-configs). If this flag is not provided and the default directory does not exist, AvalancheGo will not exit since custom configs are optional. However, if the flag is set, the specified folder must exist, or AvalancheGo will exit with an error. This flag is ignored if `--chain-config-content` is specified. Network upgrades are passed in from the location: `chain-config-dir`/`blockchainID`/`upgrade.*`. The chain configs are passed in from the location `chain-config-dir`/`blockchainID`/`config.*`. See [here](https://build.avax.network/docs/nodes/chain-configs) for more information. | | `--chain-config-content` | `AVAGO_CHAIN_CONFIG_CONTENT` | string | - | As an alternative to `--chain-config-dir`, chains custom configurations can be loaded altogether from command line via `--chain-config-content` flag. Content must be base64 encoded. Example: First, encode the chain config: `echo -n '{"log-level":"trace"}' \| base64`. This will output something like `eyJsb2ctbGV2ZWwiOiJ0cmFjZSJ9`. Then create the full config JSON and encode it: `echo -n '{"C":{"Config":"eyJsb2ctbGV2ZWwiOiJ0cmFjZSJ9","Upgrade":null}}' \| base64`. 
Finally run: `avalanchego --chain-config-content "eyJDIjp7IkNvbmZpZyI6ImV5SnNiMmN0YkdWMlpXd2lPaUowY21GalpTSjkiLCJVcGdyYWRlIjpudWxsfX0="` | | `--chain-aliases-file` | `AVAGO_CHAIN_ALIASES_FILE` | string | `~/.avalanchego/configs/chains/aliases.json` | Path to JSON file that defines aliases for Blockchain IDs. This flag is ignored if `--chain-aliases-file-content` is specified. Example content: `{"q2aTwKuyzgs8pynF7UXBZCU7DejbZbZ6EUyHr3JQzYgwNPUPi": ["DFK"]}`. The above example aliases the Blockchain whose ID is `"q2aTwKuyzgs8pynF7UXBZCU7DejbZbZ6EUyHr3JQzYgwNPUPi"` to `"DFK"`. Chain aliases are added after adding primary network aliases and before any changes to the aliases via the admin API. This means that the first alias included for a Blockchain on a Subnet will be treated as the `"Primary Alias"` instead of the full blockchainID. The Primary Alias is used in all metrics and logs. | | `--chain-aliases-file-content` | `AVAGO_CHAIN_ALIASES_FILE_CONTENT` | string | - | As an alternative to `--chain-aliases-file`, it allows specifying base64 encoded aliases for Blockchains. | | `--chain-data-dir` | `AVAGO_CHAIN_DATA_DIR` | string | `$HOME/.avalanchego/chainData` | Chain specific data directory. | ### Config File | Flag | Env Var | Type | Default | Description | |--------|--------|------|----|--------------------| |`--config-file` | `AVAGO_CONFIG_FILE` | string | - | Path to a JSON file that specifies this node's configuration. Command line arguments will override arguments set in the config file. This flag is ignored if `--config-file-content` is specified. Example JSON config file: `{"log-level": "debug"}`. [Install Script](https://build.avax.network/docs/tooling/avalanche-go-installer) creates the node config file at `~/.avalanchego/configs/node.json`. No default file is created if [AvalancheGo is built from source](https://build.avax.network/docs/nodes/run-a-node/from-source), you would need to create it manually if needed. 
| | `--config-file-content` | `AVAGO_CONFIG_FILE_CONTENT` | string | - | As an alternative to `--config-file`, it allows specifying base64 encoded config content. | | `--config-file-content-type` | `AVAGO_CONFIG_FILE_CONTENT_TYPE` | string | `JSON` | Specifies the format of the base64 encoded config content. JSON, TOML, YAML are among currently supported file format (see [here](https://github.com/spf13/viper#reading-config-files) for full list). | ### Data Directory | Flag | Env Var | Type | Default | Description | |--------|--------|------|----|--------------------| | `--data-dir` | `AVAGO_DATA_DIR` | string | `$HOME/.avalanchego` | Sets the base data directory where default sub-directories will be placed unless otherwise specified. | ### Database | Flag | Env Var | Type | Default | Description | |--------|--------|------|----|--------------------| | `--db-dir` | `AVAGO_DB_DIR` | string | `$HOME/.avalanchego/db` | Specifies the directory to which the database is persisted. | | `--db-type` | `AVAGO_DB_TYPE` | string | `leveldb` | Specifies the type of database to use. Must be one of `leveldb`, `memdb`, or `pebbledb`. `memdb` is an in-memory, non-persisted database. Note: `memdb` stores everything in memory. So if you have a 900 GiB LevelDB instance, then using `memdb` you'd need 900 GiB of RAM. `memdb` is useful for fast one-off testing, not for running an actual node (on Fuji or Mainnet). Also note that `memdb` doesn't persist after restart. So any time you restart the node it would start syncing from scratch. | #### Database Config | Flag | Env Var | Type | Default | Description | |--------|--------|------|----|--------------------| | `--db-config-file` | `AVAGO_DB_CONFIG_FILE` | string | - | Path to the database config file. Ignored if `--db-config-file-content` is specified. | | `--db-config-file-content` | `AVAGO_DB_CONFIG_FILE_CONTENT` | string | - | As an alternative to `--db-config-file`, it allows specifying base64 encoded database config content. 
| A LevelDB config file must be JSON and may have these keys. Any keys not given will receive the default value. See [here](https://pkg.go.dev/github.com/syndtr/goleveldb/leveldb/opt#Options) for more information. ### File Descriptor Limit | Flag | Env Var | Type | Default | Description | |--------|--------|------|----|--------------------| | `--fd-limit` | `AVAGO_FD_LIMIT` | int | `32768` | Attempts to raise the process file descriptor limit to at least this value and error if the value is above the system max. | ### Genesis | Flag | Env Var | Type | Default | Description | |--------|--------|------|----|--------------------| | `--genesis-file` | `AVAGO_GENESIS_FILE` | string | - | Path to a JSON file containing the genesis data to use. Ignored when running standard networks (Mainnet, Fuji Testnet), or when `--genesis-file-content` is specified. If not given, uses default genesis data. See the documentation for the genesis JSON format [here](https://github.com/ava-labs/avalanchego/blob/master/genesis/README.md) and an example for a local network [here](https://github.com/ava-labs/avalanchego/blob/master/genesis/genesis_local.json). | | `--genesis-file-content` | `AVAGO_GENESIS_FILE_CONTENT` | string | - | As an alternative to `--genesis-file`, it allows specifying base64 encoded genesis data to use. | ### HTTP Server | Flag | Env Var | Type | Default | Description | |--------|--------|------|----|--------------------| | `--http-allowed-hosts` | `AVAGO_HTTP_ALLOWED_HOSTS` | string | `localhost` | List of acceptable host names in API requests. Provide the wildcard (`'*'`) to accept requests from all hosts. API requests where the `Host` field is empty or an IP address will always be accepted. An API call whose HTTP `Host` field isn't acceptable will receive a 403 error code. | | `--http-allowed-origins` | `AVAGO_HTTP_ALLOWED_ORIGINS` | string | `*` | Origins to allow on the HTTP port. 
Example: `"https://*.avax.network https://*.avax-test.network"` | | `--http-host` | `AVAGO_HTTP_HOST` | string | `127.0.0.1` | The address that HTTP APIs listen on. This means that by default, your node can only handle API calls made from the same machine. To allow API calls from other machines, use `--http-host=`. You can also enter domain names as parameter. | | `--http-port` | `AVAGO_HTTP_PORT` | int | `9650` | Each node runs an HTTP server that provides the APIs for interacting with the node and the Avalanche network. This argument specifies the port that the HTTP server will listen on. | | `--http-idle-timeout` | `AVAGO_HTTP_IDLE_TIMEOUT` | duration | `120s` | Maximum duration to wait for the next request when keep-alives are enabled. If `--http-idle-timeout` is zero, the value of `--http-read-timeout` is used. If both are zero, there is no timeout. | | `--http-read-timeout` | `AVAGO_HTTP_READ_TIMEOUT` | duration | `30s` | Maximum duration for reading the entire request, including the body. A zero or negative value means there will be no timeout. | | `--http-read-header-timeout` | `AVAGO_HTTP_READ_HEADER_TIMEOUT` | duration | `30s` | Maximum duration to read request headers. The connection's read deadline is reset after reading the headers. If `--http-read-header-timeout` is zero, the value of `--http-read-timeout` is used. If both are zero, there is no timeout. | | `--http-write-timeout` | `AVAGO_HTTP_WRITE_TIMEOUT` | duration | `30s` | Maximum duration before timing out writes of the response. It is reset whenever a new request's header is read. A zero or negative value means there will be no timeout. | | `--http-shutdown-timeout` | `AVAGO_HTTP_SHUTDOWN_TIMEOUT` | duration | `10s` | Maximum duration to wait for existing connections to complete during node shutdown. | | `--http-shutdown-wait` | `AVAGO_HTTP_SHUTDOWN_WAIT` | duration | `0s` | Duration to wait after receiving SIGTERM or SIGINT before initiating shutdown. 
The `/health` endpoint will return unhealthy during this duration (if the Health API is enabled.) | | `--http-tls-enabled` | `AVAGO_HTTP_TLS_ENABLED` | boolean | `false` | If set to `true`, this flag will attempt to upgrade the server to use HTTPS. | | `--http-tls-cert-file` | `AVAGO_HTTP_TLS_CERT_FILE` | string | - | This argument specifies the location of the TLS certificate used by the node for the HTTPS server. This must be specified when `--http-tls-enabled=true`. There is no default value. This flag is ignored if `--http-tls-cert-file-content` is specified. | | `--http-tls-cert-file-content` | `AVAGO_HTTP_TLS_CERT_FILE_CONTENT` | string | - | As an alternative to `--http-tls-cert-file`, it allows specifying base64 encoded content of the TLS certificate used by the node for the HTTPS server. Note that full certificate content, with the leading and trailing header, must be base64 encoded. This must be specified when `--http-tls-enabled=true`. | | `--http-tls-key-file` | `AVAGO_HTTP_TLS_KEY_FILE` | string | - | This argument specifies the location of the TLS private key used by the node for the HTTPS server. This must be specified when `--http-tls-enabled=true`. There is no default value. This flag is ignored if `--http-tls-key-file-content` is specified. | | `--http-tls-key-file-content` | `AVAGO_HTTP_TLS_KEY_FILE_CONTENT` | string | - | As an alternative to `--http-tls-key-file`, it allows specifying base64 encoded content of the TLS private key used by the node for the HTTPS server. Note that full private key content, with the leading and trailing header, must be base64 encoded. This must be specified when `--http-tls-enabled=true`. | ### Logging | Flag | Env Var | Type | Default | Description | |--------|--------|------|----|--------------------| | `--log-level=off` | `AVAGO_LOG_LEVEL` | string | `info` | No logs. | | `--log-level=fatal` | `AVAGO_LOG_LEVEL` | string | `info` | Fatal errors that are not recoverable. 
| | `--log-level=error` | `AVAGO_LOG_LEVEL` | string | `info` | Errors that the node encounters, these errors were able to be recovered. | | `--log-level=warn` | `AVAGO_LOG_LEVEL` | string | `info` | Warnings that might be indicative of a spurious byzantine node, or potential future error. | | `--log-level=info` | `AVAGO_LOG_LEVEL` | string | `info` | Useful descriptions of node status updates. | | `--log-level=trace` | `AVAGO_LOG_LEVEL` | string | `info` | Traces container job results, useful for tracing container IDs and their outcomes. | | `--log-level=debug` | `AVAGO_LOG_LEVEL` | string | `info` | Useful when attempting to understand possible bugs in the code. | | `--log-level=verbo` | `AVAGO_LOG_LEVEL` | string | `info` | Tracks extensive amounts of information the node is processing, including message contents and binary dumps of data for extremely low level protocol analysis. | | `--log-display-level` | `AVAGO_LOG_DISPLAY_LEVEL` | string | value of `--log-level` | The log level determines which events to display to stdout. If left blank, will default to the value provided to `--log-level`. | | `--log-format=auto` | `AVAGO_LOG_FORMAT` | string | `auto` | Formats terminal-like logs when the output is a terminal. | | `--log-format=plain` | `AVAGO_LOG_FORMAT` | string | `auto` | Plain text log format. | | `--log-format=colors` | `AVAGO_LOG_FORMAT` | string | `auto` | Colored log format. | | `--log-format=json` | `AVAGO_LOG_FORMAT` | string | `auto` | JSON log format. | | `--log-dir` | `AVAGO_LOG_DIR` | string | `$HOME/.avalanchego/logs` | Specifies the directory in which system logs are kept. If you are running the node as a system service (ex. using the installer script) logs will also be stored in `$HOME/var/log/syslog`. | | `--log-disable-display-plugin-logs` | `AVAGO_LOG_DISABLE_DISPLAY_PLUGIN_LOGS` | boolean | `false` | Disables displaying plugin logs in stdout. 
| | `--log-rotater-max-size` | `AVAGO_LOG_ROTATER_MAX_SIZE` | uint | `8` | The maximum file size in megabytes of the log file before it gets rotated. | | `--log-rotater-max-files` | `AVAGO_LOG_ROTATER_MAX_FILES` | uint | `7` | The maximum number of old log files to retain. 0 means retain all old log files. | | `--log-rotater-max-age` | `AVAGO_LOG_ROTATER_MAX_AGE` | uint | `0` | The maximum number of days to retain old log files based on the timestamp encoded in their filename. 0 means retain all old log files. | | `--log-rotater-compress-enabled` | `AVAGO_LOG_ROTATER_COMPRESS_ENABLED` | boolean | `false` | Enables the compression of rotated log files through gzip. | ### Continuous Profiling You can configure your node to continuously run memory/CPU profiles and save the most recent ones. Continuous memory/CPU profiling is enabled if `--profile-continuous-enabled` is set. | Flag | Env Var | Type | Default | Description | |--------|--------|------|----|--------------------| | `--profile-continuous-enabled` | `AVAGO_PROFILE_CONTINUOUS_ENABLED` | boolean | `false` | Whether the app should continuously produce performance profiles. | | `--profile-dir` | `AVAGO_PROFILE_DIR` | string | `$HOME/.avalanchego/profiles/` | If profiling is enabled, node continuously runs memory/CPU profiles and puts them at this directory. | | `--profile-continuous-freq` | `AVAGO_PROFILE_CONTINUOUS_FREQ` | duration | `15m` | How often a new CPU/memory profile is created. | | `--profile-continuous-max-files` | `AVAGO_PROFILE_CONTINUOUS_MAX_FILES` | int | `5` | Maximum number of CPU/memory profile files to keep. | ### Network | Flag | Env Var | Type | Default | Description | |--------|--------|------|----|--------------------| | `--network-id=mainnet` | `AVAGO_NETWORK_ID` | string | `mainnet` | Connect to Mainnet (default). | | `--network-id=fuji` | `AVAGO_NETWORK_ID` | string | `mainnet` | Connect to the Fuji test-network. 
| | `--network-id=testnet` | `AVAGO_NETWORK_ID` | string | `mainnet` | Connect to the current test-network (currently Fuji). | | `--network-id=local` | `AVAGO_NETWORK_ID` | string | `mainnet` | Connect to a local test-network. | | `--network-id=network-[id]` | `AVAGO_NETWORK_ID` | string | `mainnet` | Connect to the network with the given ID. `id` must be in the range \[0, 2^32\). | ### OpenTelemetry AvalancheGo supports collecting and exporting [OpenTelemetry](https://opentelemetry.io/) traces. This might be useful for debugging, performance analysis, or monitoring. | Flag | Env Var | Type | Default | Description | |--------|--------|------|----|--------------------| | `--tracing-endpoint` | `AVAGO_TRACING_ENDPOINT` | string | `localhost:4317` (gRPC) or `localhost:4318` (HTTP) | The endpoint to export trace data to. Default depends on `--tracing-exporter-type`. | | `--tracing-exporter-type` | `AVAGO_TRACING_EXPORTER_TYPE` | string | `disabled` | Type of exporter to use for tracing. Options are \`disabled\`, \`grpc\`, \`http\`. | | `--tracing-insecure` | `AVAGO_TRACING_INSECURE` | boolean | `true` | If true, don't use TLS when exporting trace data. | | `--tracing-sample-rate` | `AVAGO_TRACING_SAMPLE_RATE` | float | `0.1` | The fraction of traces to sample. If \>= 1, always sample. If \<= 0, never sample. | ### Partial Sync Primary Network | Flag | Env Var | Type | Default | Description | |--------|--------|------|----|--------------------| | `--partial-sync-primary-network` | `AVAGO_PARTIAL_SYNC_PRIMARY_NETWORK` | boolean | `false` | Partial sync enables nodes that are not primary network validators to optionally sync only the P-chain on the primary network. Nodes that use this option can still track Subnets. After the Etna upgrade, nodes that use this option can also validate L1s. | ### Public IP Validators must know one of their public facing IP addresses so they can enable other nodes to connect to them. 
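Most flags in this reference can be supplied in three equivalent ways: on the command line, as an `AVAGO_`-prefixed environment variable, or as a key (the flag name without the leading `--`) in the JSON file passed to `--config-file`. A minimal sketch with hypothetical values; the `avalanchego` invocations in the comments assume the binary is on your `PATH`:

```shell
# Write a minimal node config file to a scratch directory (hypothetical values).
cfg_dir=$(mktemp -d)
cat > "$cfg_dir/node.json" <<'EOF'
{
  "network-id": "fuji",
  "http-port": 9650,
  "log-level": "debug"
}
EOF

# All three of the following would be equivalent ways to start the node:
#   avalanchego --network-id=fuji --http-port=9650 --log-level=debug
#   AVAGO_NETWORK_ID=fuji AVAGO_HTTP_PORT=9650 AVAGO_LOG_LEVEL=debug avalanchego
#   avalanchego --config-file="$cfg_dir/node.json"
cat "$cfg_dir/node.json"
```

Remember that command line arguments override values set in the config file.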
By default, the node will attempt to perform NAT traversal to get the node's IP according to its router. | Flag | Env Var | Type | Default | Description | |--------|--------|------|----|--------------------| | `--public-ip` | `AVAGO_PUBLIC_IP` | string | - | If this argument is provided, the node assumes this is its public IP. When running a local network it may be easiest to set this value to `127.0.0.1`. | | `--public-ip-resolution-frequency` | `AVAGO_PUBLIC_IP_RESOLUTION_FREQUENCY` | duration | `5m` | Frequency at which this node resolves/updates its public IP and renew NAT mappings, if applicable. | | `--public-ip-resolution-service` | `AVAGO_PUBLIC_IP_RESOLUTION_SERVICE` | string | - | When provided, the node will use that service to periodically resolve/update its public IP. Only acceptable values are `ifconfigCo`, `opendns` or `ifconfigMe`. | ### State Syncing | Flag | Env Var | Type | Default | Description | |--------|--------|------|----|--------------------| | `--state-sync-ids` | `AVAGO_STATE_SYNC_IDS` | string | - | State sync IDs is a comma-separated list of validator IDs. The specified validators will be contacted to get and authenticate the starting point (state summary) for state sync. An example setting of this field would be `--state-sync-ids="NodeID-7Xhw2mDxuDS44j42TCB6U5579esbSt3Lg,NodeID-MFrZFVCXPv5iCn6M9K6XduxGTYp891xXZ"`. The number of given IDs here must be same with number of given `--state-sync-ips`. The default value is empty, which results in all validators being sampled. | | `--state-sync-ips` | `AVAGO_STATE_SYNC_IPS` | string | - | State sync IPs is a comma-separated list of IP:port pairs. These IP Addresses will be contacted to get and authenticate the starting point (state summary) for state sync. An example setting of this field would be `--state-sync-ips="127.0.0.1:12345,1.2.3.4:5678"`. The number of given IPs here must be the same with the number of given `--state-sync-ids`. 
| ### Staking | Flag | Env Var | Type | Default | Description | |--------|--------|------|----|--------------------| | `--staking-port` | `AVAGO_STAKING_PORT` | int | `9651` | The port through which the network peers will connect to this node externally. Having this port accessible from the internet is required for correct node operation. | | `--staking-tls-cert-file` | `AVAGO_STAKING_TLS_CERT_FILE` | string | `$HOME/.avalanchego/staking/staker.crt` | Avalanche uses two-way authenticated TLS connections to securely connect nodes. This argument specifies the location of the TLS certificate used by the node. This flag is ignored if `--staking-tls-cert-file-content` is specified. | | `--staking-tls-cert-file-content` | `AVAGO_STAKING_TLS_CERT_FILE_CONTENT` | string | - | As an alternative to `--staking-tls-cert-file`, it allows specifying base64 encoded content of the TLS certificate used by the node. Note that full certificate content, with the leading and trailing header, must be base64 encoded. | | `--staking-tls-key-file` | `AVAGO_STAKING_TLS_KEY_FILE` | string | `$HOME/.avalanchego/staking/staker.key` | Avalanche uses two-way authenticated TLS connections to securely connect nodes. This argument specifies the location of the TLS private key used by the node. This flag is ignored if `--staking-tls-key-file-content` is specified. | | `--staking-tls-key-file-content` | `AVAGO_STAKING_TLS_KEY_FILE_CONTENT` | string | - | As an alternative to `--staking-tls-key-file`, it allows specifying base64 encoded content of the TLS private key used by the node. Note that full private key content, with the leading and trailing header, must be base64 encoded. | ### Subnets #### Subnet Tracking | Flag | Env Var | Type | Default | Description | |--------|--------|------|----|--------------------| | `--track-subnets` | `AVAGO_TRACK_SUBNETS` | string | - | Comma separated list of Subnet IDs that this node would track if added to. 
Defaults to empty (will only validate the Primary Network). | #### Subnet Configs It is possible to provide parameters for Subnets. Parameters here apply to all chains in the specified Subnets. Parameters must be specified with a `[subnetID].json` config file under `--subnet-config-dir`. AvalancheGo loads configs for Subnets specified in `--track-subnets` parameter. Full reference for all configuration options for a Subnet can be found in a separate [Subnet Configs](https://build.avax.network/docs/nodes/configure/avalanche-l1-configs) document. | Flag | Env Var | Type | Default | Description | |--------|--------|------|----|--------------------| | `--subnet-config-dir` | `AVAGO_SUBNET_CONFIG_DIR` | string | `$HOME/.avalanchego/configs/subnets` | Specifies the directory that contains Subnet configs, as described above. If the flag is set explicitly, the specified folder must exist, or AvalancheGo will exit with an error. This flag is ignored if `--subnet-config-content` is specified. Example: Let's say we have a Subnet with ID `p4jUwqZsA2LuSftroCd3zb4ytH8W99oXKuKVZdsty7eQ3rXD6`. We can create a config file under the default `subnet-config-dir` at `$HOME/.avalanchego/configs/subnets/p4jUwqZsA2LuSftroCd3zb4ytH8W99oXKuKVZdsty7eQ3rXD6.json`. An example config file is: `{"validatorOnly": false, "snowParameters":{"k":25,"alpha":18}}`. By default, none of these directories and/or files exist. You would need to create them manually if needed. | | `--subnet-config-content` | `AVAGO_SUBNET_CONFIG_CONTENT` | string | - | As an alternative to `--subnet-config-dir`, it allows specifying base64 encoded parameters for a Subnet. | ### Version | Flag | Env Var | Type | Default | Description | |--------|--------|------|----|--------------------| | `--version` | `AVAGO_VERSION` | boolean | `false` | If this is `true`, print the version and quit. | # Advanced Configuration Options ⚠️ **Warning**: The following options may affect the correctness of a node. 
Only power users should change these. ### Gossiping Consensus gossiping parameters. | Flag | Env Var | Type | Default | Description | |--------|--------|------|----|--------------------| | `--consensus-accepted-frontier-gossip-validator-size` | `AVAGO_CONSENSUS_ACCEPTED_FRONTIER_GOSSIP_VALIDATOR_SIZE` | uint | `0` | Number of validators to gossip to when gossiping accepted frontier. | | `--consensus-accepted-frontier-gossip-non-validator-size` | `AVAGO_CONSENSUS_ACCEPTED_FRONTIER_GOSSIP_NON_VALIDATOR_SIZE` | uint | `0` | Number of non-validators to gossip to when gossiping accepted frontier. | | `--consensus-accepted-frontier-gossip-peer-size` | `AVAGO_CONSENSUS_ACCEPTED_FRONTIER_GOSSIP_PEER_SIZE` | uint | `15` | Number of peers to gossip to when gossiping accepted frontier. | | `--consensus-accepted-frontier-gossip-frequency` | `AVAGO_CONSENSUS_ACCEPTED_FRONTIER_GOSSIP_FREQUENCY` | duration | `10s` | Time between gossiping accepted frontiers. | | `--consensus-on-accept-gossip-validator-size` | `AVAGO_CONSENSUS_ON_ACCEPT_GOSSIP_VALIDATOR_SIZE` | uint | `0` | Number of validators to gossip to each accepted container to. | | `--consensus-on-accept-gossip-non-validator-size` | `AVAGO_CONSENSUS_ON_ACCEPT_GOSSIP_NON_VALIDATOR_SIZE` | uint | `0` | Number of non-validators to gossip to each accepted container to. | | `--consensus-on-accept-gossip-peer-size` | `AVAGO_CONSENSUS_ON_ACCEPT_GOSSIP_PEER_SIZE` | uint | `10` | Number of peers to gossip to each accepted container to. | ### Sybil Protection Sybil protection configuration. These settings affect how the node participates in consensus. | Flag | Env Var | Type | Default | Description | |--------|--------|------|----|--------------------| | `--sybil-protection-enabled` | `AVAGO_SYBIL_PROTECTION_ENABLED` | boolean | `true` | Avalanche uses Proof of Stake (PoS) as sybil resistance to make it prohibitively expensive to attack the network. 
If false, sybil resistance is disabled and all peers will be sampled during consensus. Note that this can not be disabled on public networks (`Fuji` and `Mainnet`). Setting this flag to `false` **does not** mean "this node is not a validator." It means that this node will sample all nodes, not just validators. **You should not set this flag to false unless you understand what you are doing.** | | `--sybil-protection-disabled-weight` | `AVAGO_SYBIL_PROTECTION_DISABLED_WEIGHT` | uint | `100` | Weight to provide to each peer when staking is disabled. | ### Benchlist Peer benchlisting configuration using an EWMA (Exponentially Weighted Moving Average) failure probability model. | Flag | Env Var | Type | Default | Description | |--------|--------|------|----|--------------------| | `--benchlist-halflife` | `AVAGO_BENCHLIST_HALFLIFE` | duration | `1m` | Halflife of the EWMA averager used for benchlisting. | | `--benchlist-unbench-probability` | `AVAGO_BENCHLIST_UNBENCH_PROBABILITY` | float | `0.2` | EWMA failure probability below which a node is unbenched. | | `--benchlist-bench-probability` | `AVAGO_BENCHLIST_BENCH_PROBABILITY` | float | `0.5` | EWMA failure probability above which a node is benched. | | `--benchlist-duration` | `AVAGO_BENCHLIST_DURATION` | duration | `5m` | Max amount of time a peer is benchlisted. | ### Consensus Parameters Some of these parameters can only be set on a local or private network, not on Fuji Testnet or Mainnet | Flag | Env Var | Type | Default | Description | |--------|--------|------|----|--------------------| | `--consensus-shutdown-timeout` | `AVAGO_CONSENSUS_SHUTDOWN_TIMEOUT` | duration | `5s` | Timeout before killing an unresponsive chain. | | `--create-asset-tx-fee` | `AVAGO_CREATE_ASSET_TX_FEE` | int | `10000000` | Transaction fee, in nAVAX, for transactions that create new assets. This can only be changed on a local network. 
| | `--tx-fee` | `AVAGO_TX_FEE` | int | `1000000` | The required amount of nAVAX to be burned for a transaction to be valid on the X-Chain, and for import/export transactions on the P-Chain. This parameter requires network agreement in its current form. Changing this value from the default should only be done on private networks or local network. | | `--uptime-requirement` | `AVAGO_UPTIME_REQUIREMENT` | float | `0.8` | Fraction of time a validator must be online to receive rewards. This can only be changed on a local network. | | `--uptime-metric-freq` | `AVAGO_UPTIME_METRIC_FREQ` | duration | `30s` | Frequency of renewing this node's average uptime metric. | ### Staking Parameters Staking economics configuration. | Flag | Env Var | Type | Default | Description | |--------|--------|------|----|--------------------| | `--min-validator-stake` | `AVAGO_MIN_VALIDATOR_STAKE` | int | network dependent | The minimum stake, in nAVAX, required to validate the Primary Network. This can only be changed on a local network. Defaults to `2000000000000` (2,000 AVAX) on Mainnet. Defaults to `5000000` (.005 AVAX) on Test Net. | | `--max-validator-stake` | `AVAGO_MAX_VALIDATOR_STAKE` | int | network dependent | The maximum stake, in nAVAX, that can be placed on a validator on the primary network. This includes stake provided by both the validator and by delegators to the validator. This can only be changed on a local network. | | `--min-delegator-stake` | `AVAGO_MIN_DELEGATOR_STAKE` | int | network dependent | The minimum stake, in nAVAX, that can be delegated to a validator of the Primary Network. Defaults to `25000000000` (25 AVAX) on Mainnet. Defaults to `5000000` (.005 AVAX) on Test Net. This can only be changed on a local network. | | `--min-delegation-fee` | `AVAGO_MIN_DELEGATION_FEE` | int | `20000` | The minimum delegation fee that can be charged for delegation on the Primary Network, multiplied by \`10,000\`. Must be in the range \[0, 1000000\]. 
This can only be changed on a local network. | | `--min-stake-duration` | `AVAGO_MIN_STAKE_DURATION` | duration | `336h` | Minimum staking duration. This can only be changed on a local network. This applies to both delegation and validation periods. | | `--max-stake-duration` | `AVAGO_MAX_STAKE_DURATION` | duration | `8760h` | The maximum staking duration, in hours. This can only be changed on a local network. | | `--stake-minting-period` | `AVAGO_STAKE_MINTING_PERIOD` | duration | `8760h` | Consumption period of the staking function, in hours. This can only be changed on a local network. | | `--stake-max-consumption-rate` | `AVAGO_STAKE_MAX_CONSUMPTION_RATE` | uint | `120000` | The maximum percentage of the consumption rate for the remaining token supply in the minting period, which is 1 year on Mainnet. This can only be changed on a local network. | | `--stake-min-consumption-rate` | `AVAGO_STAKE_MIN_CONSUMPTION_RATE` | uint | `100000` | The minimum percentage of the consumption rate for the remaining token supply in the minting period, which is 1 year on Mainnet. This can only be changed on a local network. | | `--stake-supply-cap` | `AVAGO_STAKE_SUPPLY_CAP` | uint | `720000000000000000` | The maximum stake supply, in nAVAX, that can be placed on a validator. This can only be changed on a local network. | ### Snow Consensus Snow consensus protocol parameters. | Flag | Env Var | Type | Default | Description | |--------|--------|------|----|--------------------| | `--snow-concurrent-repolls` | `AVAGO_SNOW_CONCURRENT_REPOLLS` | int | `4` | Snow consensus requires repolling transactions that are issued during low time of network usage. This parameter lets one define how aggressive the client will be in finalizing these pending transactions. This should only be changed after careful consideration of the tradeoffs of Snow consensus. The value must be at least `1` and at most `--snow-commit-threshold`. 
| | `--snow-sample-size` | `AVAGO_SNOW_SAMPLE_SIZE` | int | `20` | Snow consensus defines `k` as the number of validators that are sampled during each network poll. This parameter lets one define the `k` value used for consensus. This should only be changed after careful consideration of the tradeoffs of Snow consensus. The value must be at least `1`. | | `--snow-quorum-size` | `AVAGO_SNOW_QUORUM_SIZE` | int | `15` | Snow consensus defines `alpha` as the number of validators that must prefer a transaction during each network poll to increase the confidence in the transaction. This parameter lets us define the `alpha` value used for consensus. This should only be changed after careful consideration of the tradeoffs of Snow consensus. The value must be at greater than `k/2`. | | `--snow-commit-threshold` | `AVAGO_SNOW_COMMIT_THRESHOLD` | int | `20` | Snow consensus defines `beta` as the number of consecutive polls that a container must increase its confidence for it to be accepted. This parameter lets us define the `beta` value used for consensus. This should only be changed after careful consideration of the tradeoffs of Snow consensus. The value must be at least `1`. | | `--snow-optimal-processing` | `AVAGO_SNOW_OPTIMAL_PROCESSING` | int | `50` | Optimal number of processing items in consensus. The value must be at least `1`. | | `--snow-max-processing` | `AVAGO_SNOW_MAX_PROCESSING` | int | `1024` | Maximum number of processing items to be considered healthy. Reports unhealthy if more than this number of items are outstanding. The value must be at least `1`. | | `--snow-max-time-processing` | `AVAGO_SNOW_MAX_TIME_PROCESSING` | duration | `2m` | Maximum amount of time an item should be processing and still be healthy. Reports unhealthy if there is an item processing for longer than this duration. The value must be greater than `0`. | ### ProposerVM ProposerVM configuration. 
| Flag | Env Var | Type | Default | Description | |--------|--------|------|----|--------------------| | `--proposervm-use-current-height` | `AVAGO_PROPOSERVM_USE_CURRENT_HEIGHT` | boolean | `false` | Have the ProposerVM always report the last accepted P-chain block height. | | `--proposervm-min-block-delay` | `AVAGO_PROPOSERVM_MIN_BLOCK_DELAY` | duration | `1s` | The minimum delay to enforce when building a snowman++ block for the primary network chains and the default minimum delay for subnets. A non-default value is only suggested for non-production nodes. | ### Health Checks Health monitoring configuration. | Flag | Env Var | Type | Default | Description | |--------|--------|------|----|--------------------| | `--health-check-frequency` | `AVAGO_HEALTH_CHECK_FREQUENCY` | duration | `30s` | Health check runs with this frequency. | | `--health-check-averager-halflife` | `AVAGO_HEALTH_CHECK_AVERAGER_HALFLIFE` | duration | `10s` | Half life of averagers used in health checks (to measure the rate of message failures, for example). Larger value -> less volatile calculation of averages. | ### Network Configuration Advanced network settings. | Flag | Env Var | Type | Default | Description | |--------|--------|------|----|--------------------| | `--network-allow-private-ips` | `AVAGO_NETWORK_ALLOW_PRIVATE_IPS` | boolean | `true` | Allows the node to connect to peers with private IPs. | | `--network-compression-type` | `AVAGO_NETWORK_COMPRESSION_TYPE` | string | `gzip` | The type of compression to use when sending messages to peers. Must be one of `gzip`, `zstd`, `none`. Nodes can handle inbound `gzip` compressed messages but by default send `zstd` compressed messages. | | `--network-initial-timeout` | `AVAGO_NETWORK_INITIAL_TIMEOUT` | duration | `5s` | Initial timeout value of the adaptive timeout manager. 
| | `--network-initial-reconnect-delay` | `AVAGO_NETWORK_INITIAL_RECONNECT_DELAY` | duration | `1s` | Initial delay to wait before attempting to reconnect to a peer. | | `--network-max-reconnect-delay` | `AVAGO_NETWORK_MAX_RECONNECT_DELAY` | duration | `1h` | Maximum delay to wait before attempting to reconnect to a peer. | | `--network-minimum-timeout` | `AVAGO_NETWORK_MINIMUM_TIMEOUT` | duration | `2s` | Minimum timeout value of the adaptive timeout manager. | | `--network-maximum-timeout` | `AVAGO_NETWORK_MAXIMUM_TIMEOUT` | duration | `10s` | Maximum timeout value of the adaptive timeout manager. | | `--network-maximum-inbound-timeout` | `AVAGO_NETWORK_MAXIMUM_INBOUND_TIMEOUT` | duration | `10s` | Maximum timeout value of an inbound message. Defines the duration within which an incoming message must be fulfilled. Incoming messages containing a deadline higher than this value will be overridden with this value. | | `--network-timeout-halflife` | `AVAGO_NETWORK_TIMEOUT_HALFLIFE` | duration | `5m` | Half life used when calculating average network latency. Larger value -> less volatile network latency calculation. | | `--network-timeout-coefficient` | `AVAGO_NETWORK_TIMEOUT_COEFFICIENT` | float | `2` | Requests to peers will time out after \[network-timeout-coefficient\] \* \[average request latency\]. | | `--network-read-handshake-timeout` | `AVAGO_NETWORK_READ_HANDSHAKE_TIMEOUT` | duration | `15s` | Timeout value for reading handshake messages. | | `--network-ping-timeout` | `AVAGO_NETWORK_PING_TIMEOUT` | duration | `30s` | Timeout value for Ping-Pong with a peer. | | `--network-ping-frequency` | `AVAGO_NETWORK_PING_FREQUENCY` | duration | `22.5s` | Frequency of pinging other peers. | | `--network-health-min-conn-peers` | `AVAGO_NETWORK_HEALTH_MIN_CONN_PEERS` | uint | `1` | Node will report unhealthy if connected to fewer than this many peers. 
| | `--network-health-max-time-since-msg-received` | `AVAGO_NETWORK_HEALTH_MAX_TIME_SINCE_MSG_RECEIVED` | duration | `1m` | Node will report unhealthy if it hasn't received a message for this amount of time. | | `--network-health-max-time-since-msg-sent` | `AVAGO_NETWORK_HEALTH_MAX_TIME_SINCE_MSG_SENT` | duration | `1m` | Network layer reports unhealthy if it hasn't sent a message for at least this much time. | | `--network-health-max-portion-send-queue-full` | `AVAGO_NETWORK_HEALTH_MAX_PORTION_SEND_QUEUE_FULL` | float | `0.9` | Node will report unhealthy if its send queue is more than this portion full. Must be in \[0,1\]. | | `--network-health-max-send-fail-rate` | `AVAGO_NETWORK_HEALTH_MAX_SEND_FAIL_RATE` | float | `0.25` | Node will report unhealthy if more than this portion of message sends fail. Must be in \[0,1\]. | | `--network-health-max-outstanding-request-duration` | `AVAGO_NETWORK_HEALTH_MAX_OUTSTANDING_REQUEST_DURATION` | duration | `5m` | Node reports unhealthy if there has been a request outstanding for this duration. | | `--network-max-clock-difference` | `AVAGO_NETWORK_MAX_CLOCK_DIFFERENCE` | duration | `1m` | Max allowed clock difference value between this node and peers. | | `--network-require-validator-to-connect` | `AVAGO_NETWORK_REQUIRE_VALIDATOR_TO_CONNECT` | boolean | `false` | If true, this node will only maintain a connection with another node if this node is a validator, the other node is a validator, or the other node is a beacon. | | `--network-tcp-proxy-enabled` | `AVAGO_NETWORK_TCP_PROXY_ENABLED` | boolean | `false` | Require all P2P connections to be initiated with a TCP proxy header. | | `--network-tcp-proxy-read-timeout` | `AVAGO_NETWORK_TCP_PROXY_READ_TIMEOUT` | duration | `3s` | Maximum duration to wait for a TCP proxy header. | | `--network-outbound-connection-timeout` | `AVAGO_NETWORK_OUTBOUND_CONNECTION_TIMEOUT` | duration | `30s` | Timeout while dialing a peer. 
| ### Message Rate-Limiting These flags govern rate-limiting of inbound and outbound messages. For more information on rate-limiting and the flags below, see package `throttling` in AvalancheGo. #### CPU Based Rate-Limiting Rate-limiting based on how much CPU usage a peer causes. | Flag | Env Var | Type | Default | Description | |--------|--------|------|----|--------------------| | `--throttler-inbound-cpu-validator-alloc` | `AVAGO_THROTTLER_INBOUND_CPU_VALIDATOR_ALLOC` | float | half of CPUs | Number of CPUs allocated for use by validators. Value should be in range \(0, total core count\]. | | `--throttler-inbound-cpu-max-recheck-delay` | `AVAGO_THROTTLER_INBOUND_CPU_MAX_RECHECK_DELAY` | duration | `5s` | In the CPU rate-limiter, check at least this often whether the node's CPU usage has fallen to an acceptable level. | | `--throttler-inbound-disk-max-recheck-delay` | `AVAGO_THROTTLER_INBOUND_DISK_MAX_RECHECK_DELAY` | duration | `5s` | In the disk-based network throttler, check at least this often whether the node's disk usage has fallen to an acceptable level. | | `--throttler-inbound-cpu-max-non-validator-usage` | `AVAGO_THROTTLER_INBOUND_CPU_MAX_NON_VALIDATOR_USAGE` | float | 80% of CPUs | Number of CPUs that, if fully utilized, will rate limit all non-validators. Value should be in range \[0, total core count\]. | | `--throttler-inbound-cpu-max-non-validator-node-usage` | `AVAGO_THROTTLER_INBOUND_CPU_MAX_NON_VALIDATOR_NODE_USAGE` | float | CPUs / 8 | Maximum number of CPUs that a non-validator can utilize. Value should be in range \[0, total core count\]. | | `--throttler-inbound-disk-validator-alloc` | `AVAGO_THROTTLER_INBOUND_DISK_VALIDATOR_ALLOC` | float | `1000 GiB/s` | Maximum number of disk reads/writes per second to allocate for use by validators. Must be > 0. 
| | `--throttler-inbound-disk-max-non-validator-usage` | `AVAGO_THROTTLER_INBOUND_DISK_MAX_NON_VALIDATOR_USAGE` | float | `1000 GiB/s` | Number of disk reads/writes per second that, if fully utilized, will rate limit all non-validators. Must be \>= 0. | | `--throttler-inbound-disk-max-non-validator-node-usage` | `AVAGO_THROTTLER_INBOUND_DISK_MAX_NON_VALIDATOR_NODE_USAGE` | float | `1000 GiB/s` | Maximum number of disk reads/writes per second that a non-validator can utilize. Must be \>= 0. | #### Bandwidth Based Rate-Limiting Rate-limiting based on the bandwidth a peer uses. | Flag | Env Var | Type | Default | Description | |--------|--------|------|----|--------------------| | `--throttler-inbound-bandwidth-refill-rate` | `AVAGO_THROTTLER_INBOUND_BANDWIDTH_REFILL_RATE` | uint | `512` | Max average inbound bandwidth usage of a peer, in bytes per second. See interface `throttling.BandwidthThrottler`. | | `--throttler-inbound-bandwidth-max-burst-size` | `AVAGO_THROTTLER_INBOUND_BANDWIDTH_MAX_BURST_SIZE` | uint | `2 MiB` | Max inbound bandwidth a node can use at once. See interface `throttling.BandwidthThrottler`. | #### Message Size Based Rate-Limiting Rate-limiting based on the total size, in bytes, of unprocessed messages. | Flag | Env Var | Type | Default | Description | |--------|--------|------|----|--------------------| | `--throttler-inbound-at-large-alloc-size` | `AVAGO_THROTTLER_INBOUND_AT_LARGE_ALLOC_SIZE` | uint | `6 MiB` | Size, in bytes, of at-large allocation in the inbound message throttler. | | `--throttler-inbound-validator-alloc-size` | `AVAGO_THROTTLER_INBOUND_VALIDATOR_ALLOC_SIZE` | uint | `32 MiB` | Size, in bytes, of validator allocation in the inbound message throttler. | | `--throttler-inbound-node-max-at-large-bytes` | `AVAGO_THROTTLER_INBOUND_NODE_MAX_AT_LARGE_BYTES` | uint | `2 MiB` | Maximum number of bytes a node can take from the at-large allocation of the inbound message throttler. 
| #### Message Based Rate-Limiting Rate-limiting based on the number of unprocessed messages. | Flag | Env Var | Type | Default | Description | |--------|--------|------|----|--------------------| | `--throttler-inbound-node-max-processing-msgs` | `AVAGO_THROTTLER_INBOUND_NODE_MAX_PROCESSING_MSGS` | uint | `1024` | Node will stop reading messages from a peer when it is processing this many messages from the peer. Will resume reading messages from the peer when it is processing less than this many messages. | #### Outbound Rate-Limiting Rate-limiting for outbound messages. | Flag | Env Var | Type | Default | Description | |--------|--------|------|----|--------------------| | `--throttler-outbound-at-large-alloc-size` | `AVAGO_THROTTLER_OUTBOUND_AT_LARGE_ALLOC_SIZE` | uint | `32 MiB` | Size, in bytes, of at-large allocation in the outbound message throttler. | | `--throttler-outbound-validator-alloc-size` | `AVAGO_THROTTLER_OUTBOUND_VALIDATOR_ALLOC_SIZE` | uint | `32 MiB` | Size, in bytes, of validator allocation in the outbound message throttler. | | `--throttler-outbound-node-max-at-large-bytes` | `AVAGO_THROTTLER_OUTBOUND_NODE_MAX_AT_LARGE_BYTES` | uint | `2 MiB` | Maximum number of bytes a node can take from the at-large allocation of the outbound message throttler. | ### Connection Rate-Limiting | Flag | Env Var | Type | Default | Description | |--------|--------|------|----|--------------------| | `--network-inbound-connection-throttling-cooldown` | `AVAGO_NETWORK_INBOUND_CONNECTION_THROTTLING_COOLDOWN` | duration | `10s` | Node will upgrade an inbound connection from a given IP at most once within this duration. If 0 or negative, will not consider recency of last upgrade when deciding whether to upgrade. | | `--network-inbound-connection-throttling-max-conns-per-sec` | `AVAGO_NETWORK_INBOUND_CONNECTION_THROTTLING_MAX_CONNS_PER_SEC` | uint | `512` | Node will accept at most this many inbound connections per second. 
| | `--network-outbound-connection-throttling-rps` | `AVAGO_NETWORK_OUTBOUND_CONNECTION_THROTTLING_RPS` | uint | `50` | Node makes at most this many outgoing peer connection attempts per second. | ### Peer List Gossiping Nodes gossip peers to each other so that each node can have an up-to-date peer list. A node gossips `--network-peer-list-num-validator-ips` validator IPs to `--network-peer-list-validator-gossip-size` validators, `--network-peer-list-non-validator-gossip-size` non-validators and `--network-peer-list-peers-gossip-size` peers every `--network-peer-list-gossip-frequency`. | Flag | Env Var | Type | Default | Description | |--------|--------|------|----|--------------------| | `--network-peer-list-num-validator-ips` | `AVAGO_NETWORK_PEER_LIST_NUM_VALIDATOR_IPS` | int | `15` | Number of validator IPs to gossip to other nodes. | | `--network-peer-list-validator-gossip-size` | `AVAGO_NETWORK_PEER_LIST_VALIDATOR_GOSSIP_SIZE` | int | `20` | Number of validators that the node will gossip the peer list to. | | `--network-peer-list-non-validator-gossip-size` | `AVAGO_NETWORK_PEER_LIST_NON_VALIDATOR_GOSSIP_SIZE` | int | `0` | Number of non-validators that the node will gossip the peer list to. | | `--network-peer-list-peers-gossip-size` | `AVAGO_NETWORK_PEER_LIST_PEERS_GOSSIP_SIZE` | int | `0` | Total number of peers (both validators and non-validators) that the node will gossip the peer list to. | | `--network-peer-list-gossip-frequency` | `AVAGO_NETWORK_PEER_LIST_GOSSIP_FREQUENCY` | duration | `1m` | Frequency to gossip peers to other nodes. | | `--network-peer-read-buffer-size` | `AVAGO_NETWORK_PEER_READ_BUFFER_SIZE` | int | `8 KiB` | Size of the buffer that peer messages are read into (there is one buffer per peer). | | `--network-peer-write-buffer-size` | `AVAGO_NETWORK_PEER_WRITE_BUFFER_SIZE` | int | `8 KiB` | Size of the buffer that peer messages are written into (there is one buffer per peer). 
| ### Resource Usage Tracking | Flag | Env Var | Type | Default | Description | |--------|--------|------|----|--------------------| | `--meter-vms-enabled` | `AVAGO_METER_VMS_ENABLED` | boolean | `true` | Enable Meter VMs to track VM performance with more granularity. | | `--system-tracker-frequency` | `AVAGO_SYSTEM_TRACKER_FREQUENCY` | duration | `500ms` | Frequency to check the real system usage of tracked processes. More frequent checks -> usage metrics are more accurate, but more expensive to track. | | `--system-tracker-processing-halflife` | `AVAGO_SYSTEM_TRACKER_PROCESSING_HALFLIFE` | duration | `15s` | Half life to use for the processing requests tracker. Larger half life -> usage metrics change more slowly. | | `--system-tracker-cpu-halflife` | `AVAGO_SYSTEM_TRACKER_CPU_HALFLIFE` | duration | `15s` | Half life to use for the CPU tracker. Larger half life -> CPU usage metrics change more slowly. | | `--system-tracker-disk-halflife` | `AVAGO_SYSTEM_TRACKER_DISK_HALFLIFE` | duration | `1m` | Half life to use for the disk tracker. Larger half life -> disk usage metrics change more slowly. | | `--system-tracker-disk-required-available-space` | `AVAGO_SYSTEM_TRACKER_DISK_REQUIRED_AVAILABLE_SPACE` | uint | `536870912` | Minimum number of available bytes on disk, under which the node will shut down. | | `--system-tracker-disk-warning-threshold-available-space` | `AVAGO_SYSTEM_TRACKER_DISK_WARNING_THRESHOLD_AVAILABLE_SPACE` | uint | `1073741824` | Warning threshold for the number of available bytes on disk, under which the node will be considered unhealthy. Must be \>= `--system-tracker-disk-required-available-space`. | ### Plugins | Flag | Env Var | Type | Default | Description | |--------|--------|------|----|--------------------| | `--plugin-dir` | `AVAGO_PLUGIN_DIR` | string | `$HOME/.avalanchego/plugins` | Sets the directory for [VM plugins](https://build.avax.network/docs/virtual-machines). 
| ### Virtual Machine (VM) Configs | Flag | Env Var | Type | Default | Description | |--------|--------|------|----|--------------------| | `--vm-aliases-file` | `AVAGO_VM_ALIASES_FILE` | string | `~/.avalanchego/configs/vms/aliases.json` | Path to JSON file that defines aliases for Virtual Machine IDs. This flag is ignored if `--vm-aliases-file-content` is specified. Example content: `{"tGas3T58KzdjLHhBDMnH2TvrddhqTji5iZAMZ3RXs2NLpSnhH": ["timestampvm", "timerpc"]}`. The above example aliases the VM whose ID is `"tGas3T58KzdjLHhBDMnH2TvrddhqTji5iZAMZ3RXs2NLpSnhH"` to `"timestampvm"` and `"timerpc"`. | | `--vm-aliases-file-content` | `AVAGO_VM_ALIASES_FILE_CONTENT` | string | - | As an alternative to `--vm-aliases-file`, it allows specifying base64 encoded aliases for Virtual Machine IDs. | ### Indexing | Flag | Env Var | Type | Default | Description | |--------|--------|------|----|--------------------| | `--index-allow-incomplete` | `AVAGO_INDEX_ALLOW_INCOMPLETE` | boolean | `false` | If true, allow running the node in such a way that could cause an index to miss transactions. Ignored if index is disabled. | ### Router | Flag | Env Var | Type | Default | Description | |--------|--------|------|----|--------------------| | `--router-health-max-drop-rate` | `AVAGO_ROUTER_HEALTH_MAX_DROP_RATE` | float | `1` | Node reports unhealthy if the router drops more than this portion of messages. | | `--router-health-max-outstanding-requests` | `AVAGO_ROUTER_HEALTH_MAX_OUTSTANDING_REQUESTS` | uint | `1024` | Node reports unhealthy if there are more than this many outstanding consensus requests (Get, PullQuery, etc.) over all chains. | ## Additional Resources - [Full documentation](https://build.avax.network/docs/quick-start) - [Example configurations](https://github.com/ava-labs/avalanchego/tree/master/config) - [Network upgrade schedules](https://build.avax.network/docs/quick-start/primary-network)
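Every flag in the tables above can equivalently be supplied via the matching `AVAGO_`-prefixed environment variable shown next to it. A minimal sketch of the two styles (the values here are illustrative, not recommendations):

```bash
# Two equivalent ways to set a flag from the tables above.

# 1. As a command-line flag:
avalanchego --router-health-max-outstanding-requests=2048

# 2. As the matching AVAGO_-prefixed environment variable:
AVAGO_ROUTER_HEALTH_MAX_OUTSTANDING_REQUESTS=2048 avalanchego
```

If both are set, consult the node's startup log to confirm which value won out on your AvalancheGo version.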
# Backup and Restore (/docs/nodes/maintain/backup-restore) --- title: Backup and Restore --- Once you have your node up and running, it's time to prepare for disaster recovery. Should your machine ever suffer a catastrophic failure due to hardware or software issues, or even a natural disaster, it's best to be prepared by having a backup. When running, a complete node installation along with the database can grow to be multiple gigabytes in size. Having to back up and restore such a large volume of data can be expensive, complicated and time-consuming. Luckily, there is a better way. Instead of having to back up and restore everything, we need to back up only what is essential, that is, those files that cannot be reconstructed because they are unique to your node. For an AvalancheGo node, the unique files are those that identify your node on the network; in other words, the files that define your NodeID. Even if your node is a validator on the network and has multiple delegations on it, you don't need to worry about backing up anything else, because the validation and delegation transactions are also stored on the blockchain and will be restored during bootstrapping, along with the rest of the blockchain data. The installation itself can be easily recreated by installing the node on a new machine, and all the remaining gigabytes of blockchain data can be easily recreated by the process of bootstrapping, which copies the data over from other network peers. However, if you would like to speed up the process, see the [Database Backup and Restore section](#database). NodeID[​](#nodeid "Direct link to heading") ------------------------------------------- If more than one running node shares the same NodeID, communications from other nodes in the Avalanche network to this NodeID will be routed randomly to one of these nodes. 
If this NodeID is of a validator, it will dramatically impact the validator's uptime calculation, which will very likely disqualify the validator from receiving staking rewards. Please make sure that only one node with a given NodeID runs at any one time. NodeID is a unique identifier that differentiates your node from all the other peers on the network. It's a string formatted like `NodeID-5mb46qkSBj81k9g9e4VFjGGSbaaSLFRzD`. You can look up the technical background of how the NodeID is constructed [here](/docs/rpcs/other/standards/cryptographic-primitives#tls-addresses). In essence, NodeID is defined by two files: - `staker.crt` - `staker.key` NodePOP is this node's BLS key and proof of possession. Nodes must register a BLS key to act as a validator on the Primary Network. Your node's POP is logged on startup and is accessible via the `info.getNodeID` API endpoint. - `publicKey` is the 48 byte hex representation of the BLS key. - `proofOfPossession` is the 96 byte hex representation of the BLS signature. NodePOP is defined by the `signer.key` file. For enhanced security, you can use [CubeSigner remote signing](/docs/nodes/maintain/cube-signer-sidecar) instead of storing BLS keys locally. CubeSigner stores keys in hardware-backed enclaves and eliminates the need to back up `signer.key` files. In the default installation, they can be found in the working directory, specifically in `~/.avalanchego/staking/`. All we need to do to recreate the node on another machine is to run a new installation with those same three files. If `staker.key` and `staker.crt` are removed from a node and the node is then restarted, they will be recreated and a new NodeID will be assigned. If the `signer.key` is regenerated, the node will lose its previous BLS identity, which includes its public key and proof of possession. This change means that the node's former identity on the network will no longer be recognized, affecting its ability to participate in the consensus mechanism as before. 
Consequently, the node may lose its established reputation and any associated staking rewards. If you have users defined in the keystore of your node, then you need to back up and restore those as well. [Keystore API](/docs/rpcs/other) has methods that can be used to export and import user keys. Note that the Keystore API is used by developers only and is not intended for use in production nodes. If you don't know what a keystore API is and have not used it, you don't need to worry about it. ### Backup[​](#backup "Direct link to heading") To back up your node, we need to store the `staker.crt`, `staker.key` and `signer.key` files somewhere safe and private, preferably on a different computer. If someone gets a hold of your staker files, they still cannot get to your funds, as those are controlled by the wallet private keys, not by the node. But they could re-create your node somewhere else and, depending on the circumstances, make you lose the staking rewards. So make sure your staker files are secure. If someone gains access to your `signer.key`, they could potentially sign transactions on behalf of your node, which might disrupt the operations and integrity of your node on the network. Let's get the files off the machine running the node. #### From Local Node[​](#from-local-node "Direct link to heading") If you're running the node locally, on your desktop computer, just navigate to where the files are and copy them somewhere safe. On a default Linux installation, the path to them will be `/home/USERNAME/.avalanchego/staking/`, where `USERNAME` needs to be replaced with the actual username running the node. Select and copy the files from there to a backup location. You don't need to stop the node to do that. 
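The local copy step can also be scripted. The following is a minimal sketch assuming the default Linux staking path; the `backup_staking` helper name and the backup location are hypothetical:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Copy the three identity files from the staking directory to a backup
# directory, creating it if needed. The node can keep running meanwhile.
backup_staking() {
  local staking_dir="$1" backup_dir="$2" f
  mkdir -p "$backup_dir"
  for f in staker.crt staker.key signer.key; do
    cp "$staking_dir/$f" "$backup_dir/$f"
  done
  # Keep the private key material readable only by the current user.
  chmod 600 "$backup_dir/staker.key" "$backup_dir/signer.key"
}

# Example (on the machine running the node):
# backup_staking "$HOME/.avalanchego/staking" "$HOME/avalanche_backup"
```

Remember that the backup directory should itself live on a different machine or encrypted medium, per the advice above.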
#### From Remote Node Using `scp`[​](#from-remote-node-using-scp "Direct link to heading") `scp` is a 'secure copy' command line program, available built-in on Linux and macOS computers. There is also a Windows version, `pscp`, as part of the [PuTTY](https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html) package. If using `pscp`, in the following commands replace each usage of `scp` with `pscp -scp`. To copy the files from the node, you will need to be able to remotely log into the machine. You can use an account password, but the secure and recommended way is to use SSH keys. The procedure for acquiring and setting up SSH keys is highly dependent on your cloud provider and machine configuration. You can refer to our [Amazon Web Services](/docs/nodes/run-a-node/on-third-party-services/amazon-web-services) and [Microsoft Azure](/docs/nodes/run-a-node/on-third-party-services/microsoft-azure) setup guides for those providers. Other providers will have similar procedures. When you have a means of remote login into the machine, you can copy the files over with the following command: ```bash scp -r ubuntu@PUBLICIP:/home/ubuntu/.avalanchego/staking ~/avalanche_backup ``` This assumes the username on the machine is `ubuntu`; replace it with the correct username in both places if it is different. Also, replace `PUBLICIP` with the actual public IP of the machine. If `scp` doesn't automatically use your downloaded SSH key, you can point to it manually: ```bash scp -i /path/to/the/key.pem -r ubuntu@PUBLICIP:/home/ubuntu/.avalanchego/staking ~/avalanche_backup ``` Once executed, this command will create the `avalanche_backup` directory and place those three files in it. You need to store them somewhere safe. ### Restore[​](#restore "Direct link to heading") To restore your node from a backup, we need to do the reverse: restore `staker.key`, `staker.crt` and `signer.key` from the backup to the working directory of the new node. 
First, we need to do the usual [installation](/docs/nodes/run-a-node/using-install-script/installing-avalanche-go) of the node. This will create a new NodeID, a new BLS key and a new BLS signature, which we need to replace. When the node is installed correctly, log into the machine where the node is running and stop it: ```bash sudo systemctl stop avalanchego ``` We're ready to restore the node. #### To Local Node[​](#to-local-node "Direct link to heading") If you're running the node locally, just copy the `staker.key`, `staker.crt` and `signer.key` files from the backup location into the working directory, which on the default Linux installation will be `/home/USERNAME/.avalanchego/staking/`. Replace `USERNAME` with the actual username used to run the node. #### To Remote Node Using `scp`[​](#to-remote-node-using-scp "Direct link to heading") Again, the process is just the reverse operation. Using `scp` we need to copy the `staker.key`, `staker.crt` and `signer.key` files from the backup location into the remote working directory. Assuming the backed up files are located in the directory where the above backup procedure placed them: ```bash scp ~/avalanche_backup/{staker.*,signer.key} ubuntu@PUBLICIP:/home/ubuntu/.avalanchego/staking ``` Or if you need to specify the path to the SSH key: ```bash scp -i /path/to/the/key.pem ~/avalanche_backup/{staker.*,signer.key} ubuntu@PUBLICIP:/home/ubuntu/.avalanchego/staking ``` And again, replace `ubuntu` with correct username if different, and `PUBLICIP` with the actual public IP of the machine running the node, as well as the path to the SSH key if used. 
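Before restarting the node, it can be worth confirming that the restored files are byte-identical to the backup. A minimal sketch, where `verify_restore` is a hypothetical helper and the paths match the defaults used above:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Compare each identity file in the backup against its restored copy in the
# staking directory; report the first mismatch, if any.
verify_restore() {
  local backup_dir="$1" staking_dir="$2" f
  for f in staker.crt staker.key signer.key; do
    if ! cmp -s "$backup_dir/$f" "$staking_dir/$f"; then
      echo "mismatch: $f"
      return 1
    fi
  done
  echo "all identity files match"
}

# Example (on the machine running the node):
# verify_restore "$HOME/avalanche_backup" "$HOME/.avalanchego/staking"
```

A mismatch usually means a copy step was interrupted or targeted the wrong directory; re-copy the affected file before starting the node.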
#### Restart the Node and Verify[​](#restart-the-node-and-verify "Direct link to heading") Once the files have been replaced, log into the machine and start the node using: ```bash sudo systemctl start avalanchego ``` You can now check that the node is restored with the correct NodeID and NodePOP by issuing the [getNodeID](/docs/rpcs/other/info-rpc#infogetnodeid) API call in the same console you ran the previous command: ```bash curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"info.getNodeID" }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info ``` You should see your original NodeID and NodePOP (BLS key and BLS signature). The restore process is done. Database[​](#database "Direct link to heading") ----------------------------------------------- Normally, when starting a new node, you can just bootstrap from scratch. However, there are situations when you may prefer to reuse an existing database (ex: preserve keystore records, reduce sync time). This tutorial will walk you through compressing your node's DB and moving it to another computer using `zip` and `scp`. ### Database Backup[​](#database-backup "Direct link to heading") First, stop AvalancheGo by running: ```bash sudo systemctl stop avalanchego ``` You must stop the Avalanche node before you back up the database, otherwise data could become corrupted. Once the node is stopped, you can `zip` the database directory to reduce the size of the backup and speed up the transfer using `scp`: ```bash zip -r avalanche_db_backup.zip .avalanchego/db ``` _Note: It may take > 30 minutes to zip the node's DB._ Next, you can transfer the backup to another machine: ```bash scp -r ubuntu@PUBLICIP:/home/ubuntu/avalanche_db_backup.zip ~/avalanche_db_backup.zip ``` This assumes the username on the machine is `ubuntu`; replace it with the correct username in both places if it is different. Also, replace `PUBLICIP` with the actual public IP of the machine. 
If `scp` doesn't automatically use your downloaded SSH key, you can point to it manually: ```bash scp -i /path/to/the/key.pem -r ubuntu@PUBLICIP:/home/ubuntu/avalanche_db_backup.zip ~/avalanche_db_backup.zip ``` Once executed, this command will place the `avalanche_db_backup.zip` file in your home directory. ### Database Restore[​](#database-restore "Direct link to heading") _This tutorial assumes you have already completed "Database Backup" and have a backup at ~/avalanche\_db\_backup.zip._ First, we need to do the usual [installation](/docs/nodes/run-a-node/using-install-script/installing-avalanche-go) of the node. When the node is installed correctly, log into the machine where the node is running and stop it: ```bash sudo systemctl stop avalanchego ``` You must stop the Avalanche node before you restore the database, otherwise data could become corrupted. We're ready to restore the database. First, let's move the DB on the existing node (you can remove this old DB later if the restore was successful): ```bash mv .avalanchego/db .avalanchego/db-old ``` Next, we'll unzip the backup we moved from another node (this will place the unzipped files in `~/.avalanchego/db` when the command is run in the home directory): ```bash unzip avalanche_db_backup.zip ``` After the database has been restored on a new node, use this command to start the node: ```bash sudo systemctl start avalanchego ``` The node should now be running from the database on the new instance. To check that everything is in order and that the node is not bootstrapping from scratch (which would indicate a problem), use: ```bash sudo journalctl -u avalanchego -f ``` The node should be catching up to the network and fetching a small number of blocks before resuming normal operation (all the ones produced from the time when the node was stopped before the backup). 
Once the backup has been restored and is working as expected, the zip can be deleted: ```bash rm avalanche_db_backup.zip ``` ### Database Direct Copy[​](#database-direct-copy "Direct link to heading") You may be in a situation where you don't have enough disk space to create the archive containing the whole database, so you cannot complete the backup process as described previously. In that case, you can still migrate your database to a new computer by using a different approach: `direct copy`. Instead of creating the archive, moving the archive and unpacking it, we can do all of that on the fly. To do so, you will need `ssh` access from the destination machine (where you want the database to end up) to the source machine (where the database currently is). Setting up `ssh` is the same as explained for `scp` earlier in the document. Same as shown previously, you need to stop the node (on both machines): ```bash sudo systemctl stop avalanchego ``` You must stop the Avalanche node before you back up the database, otherwise data could become corrupted. Then, on the destination machine, change to the directory where you would like to put the database files and enter the following command: ```bash ssh -i /path/to/the/key.pem ubuntu@PUBLICIP 'tar czf - .avalanchego/db' | tar xvzf - -C . ``` Make sure to use the correct path to the key and the correct IP of the source machine. This will compress the database, but instead of writing it to a file it will pipe it over `ssh` directly to the destination machine, where it will be decompressed and written to disk. The process can take a long time; make sure it completes before continuing. After copying is done, all you need to do now is move the database to the correct location on the destination machine. 
Assuming a default AvalancheGo node installation, we remove the old database and replace it with the new one:

```bash
rm -rf ~/.avalanchego/db
mv db ~/.avalanchego/db
```

You can now start the node on the destination machine:

```bash
sudo systemctl start avalanchego
```

The node should now be running from the copied database. To check that everything is in order and that the node is not bootstrapping from scratch (which would indicate a problem), use:

```bash
sudo journalctl -u avalanchego -f
```

The node should catch up to the network, fetching only the small number of blocks produced since it was stopped, before resuming normal operation.

Summary[​](#summary "Direct link to heading")
---------------------------------------------

An essential part of securing your node is a backup that enables full and painless restoration. Following this tutorial, you can rest easy knowing that should you ever find yourself in a situation where you need to restore your node from scratch, you can do so easily and quickly.

If you have any problems following this tutorial, comments you want to share with us, or just want to chat, you can reach us on our [Discord](https://chat.avalabs.org/) server.

# CubeSigner Remote BLS Signing (/docs/nodes/maintain/cube-signer-sidecar)

---
title: CubeSigner Remote BLS Signing
description: Learn how to use CubeSigner for secure hardware-backed BLS key management with AvalancheGo validators.
---

The CubeSigner sidecar enables AvalancheGo validators to use hardware-backed remote signing for BLS keys instead of storing them locally. This guide walks you through setting up and configuring the CubeSigner sidecar for enhanced security.

## Introduction

By default, AvalancheGo nodes store their BLS signing keys locally in a `signer.key` file.
While functional, this approach has security limitations: - Keys stored on disk are vulnerable to theft or compromise - Lost or corrupted keys mean permanent loss of validator identity and staking rewards - No protection against unauthorized signing operations The CubeSigner sidecar solves these problems by delegating all BLS signing operations to [CubeSigner](https://cubist.dev/), a hardware-backed key management platform. Your BLS keys remain in secure AWS Nitro Enclaves and never touch local storage. ### Benefits - **Hardware Security**: Keys stored in AWS Nitro Enclaves, never exposed in memory - **Anti-Slashing Protection**: Built-in safeguards prevent double signing - **High Availability**: 99.99% uptime with millisecond latency - **Policy Enforcement**: Control what operations can be signed at the platform level - **Disaster Recovery**: Keys remain safe even if validator node is compromised ## Prerequisites Before you begin, ensure you have: - **AvalancheGo v1.13.4 or later**: The `--staking-rpc-signer-endpoint` flag was added in the Fortuna.4 release - **Cubist Account**: Sign up at [cubist.dev](https://cubist.dev/) for CubeSigner access - **CubeSigner CLI**: Install the `cs` command-line tool ([installation guide](https://docs.cubist.dev/)) - **Shell Access**: Ability to configure and restart your AvalancheGo node The CubeSigner sidecar is an advanced configuration for production validators. Make sure you understand the setup process before implementing on mainnet. ## Architecture Overview The CubeSigner sidecar acts as a gRPC proxy between AvalancheGo and the CubeSigner API: ``` AvalancheGo Node ↓ gRPC Request (localhost:50051) ↓ CubeSigner Sidecar ↓ HTTPS Request ↓ CubeSigner API (AWS Nitro Enclaves) ↓ BLS Signature ↓ Returns to AvalancheGo ``` The sidecar implements AvalancheGo's `signer.proto` gRPC interface, translating node signing requests into CubeSigner API calls. All cryptographic operations happen inside CubeSigner's secure enclaves. 
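Before wiring AvalancheGo to the sidecar, it can be useful to confirm that something is actually listening on the sidecar's gRPC port. This sketch uses bash's built-in `/dev/tcp` redirection (assuming the default port 50051); any TCP client such as `nc -z` works just as well:

```shell
# Succeeds if a TCP connection to host:port can be opened (bash /dev/tcp builtin).
check_port() {
  (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null
}

# check_port 127.0.0.1 50051 && echo "sidecar reachable" || echo "sidecar not reachable"
```

This only proves the port is open, not that the gRPC service is healthy, but it quickly rules out the most common misconfiguration (sidecar not running or bound to the wrong interface).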
## Step 1: Set Up CubeSigner

### Create a Role

First, create a CubeSigner role for your BLS signing operations:

```bash
cs role create --role-name avalanche-bls-signer
```

This command returns a role ID. Save this ID, as you'll need it in subsequent steps.

### Generate a BLS Key

Create a new BLS key for Avalanche ICM (Interchain Messaging):

```bash
cs keys create --key-type=bls-ava-icm
```

CubeSigner uses the key type `bls-ava-icm` specifically for Avalanche BLS signing operations. This ensures the correct signing algorithm is used.

The command outputs a key ID in the format `Key#BlsAvaIcm_0x...`. Copy this key ID.

### Configure Signing Policy

Set the policy to allow raw BLS blob signing:

```bash
cs key set-policy --key-id <KEY_ID> --policy '"AllowRawBlobSigning"'
```

Replace `<KEY_ID>` with the key ID from the previous step. The `AllowRawBlobSigning` policy is required for AvalancheGo to sign messages. Without this policy, signing requests will be rejected.

### Associate Key with Role

Link your BLS key to the role you created:

```bash
cs role add-key --role-id <ROLE_ID> --key-id <KEY_ID>
```

### Generate Authentication Token

Create a token file that the sidecar will use to authenticate with CubeSigner:

```bash
cs token create --role-id <ROLE_ID> > token.json
```

This creates a JSON file containing authentication credentials. Keep this file secure.

The `token.json` file grants access to your BLS signing key. Store it securely with restricted file permissions (`chmod 600 token.json`) and never commit it to version control. The sidecar refreshes this file automatically, so it must remain writable by the process.

## Step 2: Run the Sidecar

You can run the CubeSigner sidecar using Docker or as a standalone binary.

### Using Docker

Pull and run the official Docker image:

```bash
docker run -d \
  --name cube-signer-sidecar \
  -p 50051:50051 \
  -v $(pwd)/token.json:/token.json \
  -e SIGNER_ENDPOINT=https://gamma.signer.cubist.dev \
  -e KEY_ID=Key#BlsAvaIcm_0x... \
  -e TOKEN_FILE_PATH=/token.json \
  avaplatform/cube-signer-sidecar:0.0.0-rc9 start
```

Replace the `KEY_ID` value with your actual key ID from Step 1. Check [Docker Hub](https://hub.docker.com/r/avaplatform/cube-signer-sidecar/tags) for the latest available image tag. The `:latest` tag will be available once a stable release is published.

Do not mount `token.json` as read-only; the sidecar writes refreshed session data back to this file. The default bind mount is read/write, which is required.

The default CubeSigner endpoint for production is `https://gamma.signer.cubist.dev`. For testnet or development, CubeSigner may provide alternative endpoints.

### Running Locally

If you prefer to build from source:

```bash
# Clone the repository
git clone https://github.com/ava-labs/cube-signer-sidecar.git
cd cube-signer-sidecar

# Build the binary
go build -o cube-signer-sidecar main/main.go

# Run the sidecar
export SIGNER_ENDPOINT=https://gamma.signer.cubist.dev
export KEY_ID=Key#BlsAvaIcm_0x...
export TOKEN_FILE_PATH=./token.json
./cube-signer-sidecar start
```

### Configuration Options

The sidecar supports configuration via command-line flags, environment variables, or a JSON config file:

| Option | Environment Variable | Required | Default | Description |
|--------|---------------------|----------|---------|-------------|
| `--token-file-path` | `TOKEN_FILE_PATH` | Yes | - | Path to the token JSON file |
| `--signer-endpoint` | `SIGNER_ENDPOINT` | Yes | - | CubeSigner API endpoint URL |
| `--key-id` | `KEY_ID` | Yes | - | BLS key identifier |
| `--port` | `PORT` | No | 50051 | gRPC server listening port |
| `--config-file` | `CONFIG_FILE` | No | - | Path to JSON configuration file |

**Example JSON Configuration:**

```json
{
  "token-file-path": "/path/to/token.json",
  "signer-endpoint": "https://gamma.signer.cubist.dev",
  "key-id": "Key#BlsAvaIcm_0x...",
  "port": 50051
}
```

Use with:

```bash
./cube-signer-sidecar start --config-file config.json
```

## Step 3: Configure AvalancheGo

Once the sidecar is running, configure AvalancheGo to use it for BLS signing.

### Add the Signer Endpoint Flag

Update your AvalancheGo startup command to include the `--staking-rpc-signer-endpoint` flag:

```bash
avalanchego \
  --staking-rpc-signer-endpoint=127.0.0.1:50051 \
  [other flags...]
```

If your sidecar is running on a different machine, replace `127.0.0.1` with the appropriate IP address. Ensure network connectivity and firewall rules allow gRPC traffic on port 50051.

### Using a Configuration File

Alternatively, add the setting to your AvalancheGo configuration JSON:

```json
{
  "staking-rpc-signer-endpoint": "127.0.0.1:50051"
}
```

### Using Systemd

If you run AvalancheGo as a systemd service, edit the service file:

```bash
sudo systemctl edit avalanchego
```

Add the flag to the `ExecStart` line or add an environment variable:

```ini
[Service]
Environment="AVALANCHEGO_STAKING_RPC_SIGNER_ENDPOINT=127.0.0.1:50051"
```

Then restart the service:

```bash
sudo systemctl daemon-reload
sudo systemctl restart avalanchego
```

## Verifying the Setup

After starting both the sidecar and AvalancheGo, verify the configuration is working correctly.

### Check Sidecar Logs

If running via Docker:

```bash
docker logs cube-signer-sidecar
```

You should see log messages indicating the gRPC server is running and receiving requests from AvalancheGo.

### Verify Node BLS Key

Call the AvalancheGo Info API to confirm your node is using the CubeSigner BLS key:

```bash
curl -X POST --data '{
    "jsonrpc":"2.0",
    "id":1,
    "method":"info.getNodeID"
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info
```

The response should include your NodeID and NodePOP (BLS public key and proof of possession):

```json
{
  "jsonrpc": "2.0",
  "result": {
    "nodeID": "NodeID-...",
    "nodePOP": {
      "publicKey": "0x...",
      "proofOfPossession": "0x..."
    }
  },
  "id": 1
}
```

The `publicKey` value should match the BLS key you created in CubeSigner.
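Comparing long hex keys by eye is error-prone, so here is a small sketch that pulls the `publicKey` field out of the Info API response with `sed`, so you can diff it against the key CubeSigner reported at creation time. The helper name and the commented `RESPONSE` variable are illustrative:

```shell
# Illustrative helper: extract nodePOP.publicKey from an info.getNodeID response.
extract_bls_pubkey() {
  echo "$1" | sed -n 's/.*"publicKey"[[:space:]]*:[[:space:]]*"\(0x[0-9a-fA-F]*\)".*/\1/p'
}

# RESPONSE=$(curl -s -X POST --data '{"jsonrpc":"2.0","id":1,"method":"info.getNodeID"}' \
#   -H 'content-type:application/json' 127.0.0.1:9650/ext/info)
# extract_bls_pubkey "$RESPONSE"
```

The printed value should be identical to the public key of the `bls-ava-icm` key you created in Step 1.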
### Monitor Node Logs

Check AvalancheGo logs to ensure there are no signing errors:

```bash
sudo journalctl -u avalanchego -f
```

Look for successful connection messages to the RPC signer endpoint. Any signing failures will appear as errors in these logs.

## Security Considerations

When using the CubeSigner sidecar, follow these security best practices:

### Token Management

- **Restrict File Permissions**: Limit `token.json` so that only the user running the sidecar can read and write it (the file must stay writable so the sidecar can refresh it):

  ```bash
  chmod 600 token.json
  chown avalanchego:avalanchego token.json
  ```

- **Never Commit Tokens**: Add `token.json` to `.gitignore` to prevent accidental commits
- **Rotate Regularly**: Generate new tokens periodically and update your configuration
- **Monitor Usage**: Check CubeSigner logs for unauthorized signing attempts

### Network Security

- **Isolate the Sidecar**: Run the sidecar on the same machine as AvalancheGo or on a private network
- **Firewall Rules**: Restrict access to port 50051 to only the AvalancheGo process
- **TLS for Remote Connections**: The sidecar serves plaintext gRPC only; if you need TLS, place it behind a terminating reverse proxy or tunnel traffic over a private/secure network

### Key Management

- **One Key Per Validator**: Each validator node should have its own unique BLS key
- **Backup Policies**: Document your CubeSigner role and key IDs for disaster recovery
- **Test First**: Always test the configuration on a testnet validator before deploying to mainnet

If someone gains access to your `token.json` file, they can sign messages on behalf of your validator. Treat this file with the same security as you would a private key.

## Troubleshooting

### Connection Refused Errors

**Problem**: AvalancheGo logs show "connection refused" when trying to reach the sidecar.

**Solution**:

- Verify the sidecar is running: `docker ps` or check the process
- Confirm the sidecar is listening on the correct port: `netstat -tlnp | grep 50051`
- Check firewall rules allow connections on port 50051

### Invalid Token Errors

**Problem**: Sidecar logs show authentication failures or invalid token errors.

**Solution**:

- Verify `token.json` contains valid JSON
- Ensure the token hasn't expired (tokens have a limited lifetime)
- Regenerate the token with `cs token create` and restart the sidecar

### Key Not Found Errors

**Problem**: Sidecar reports the key ID doesn't exist or isn't accessible.

**Solution**:

- Double-check the `KEY_ID` matches exactly what `cs keys create` returned
- Verify the key is associated with the role: `cs role keys --role-id <ROLE_ID>`
- Ensure the key has the `AllowRawBlobSigning` policy set

### Signing Policy Errors

**Problem**: Signing requests are rejected with policy errors.

**Solution**:

- Confirm the key policy allows raw blob signing:

  ```bash
  cs key set-policy --key-id <KEY_ID> --policy '"AllowRawBlobSigning"'
  ```

- Restart the sidecar after policy changes

### AvalancheGo Won't Start

**Problem**: AvalancheGo fails to start after adding the `--staking-rpc-signer-endpoint` flag.

**Solution**:

- Verify you're running AvalancheGo v1.13.4 or later: `avalanchego --version`
- Remove any existing `signer.key` file (it conflicts with remote signing)
- Check the sidecar is reachable before starting AvalancheGo

## Migration from Local BLS Keys

If you're migrating an existing validator from local `signer.key` to CubeSigner, you have two options:

### Option 1: New BLS Key (Recommended for Testnet)

Generate a new BLS key in CubeSigner and update your validator registration. This is the cleanest approach but requires re-registering your validator.

### Option 2: Import Existing Key (Production Validators)

Importing existing BLS keys into CubeSigner requires coordination with the CubeSigner team.
This is typically only done for production validators with active stake. Contact [CubeSigner support](https://cubist.dev/contact) for assistance.

## Alternative: Local BLS Key Backup

If CubeSigner's remote signing doesn't fit your needs, consider traditional backup approaches for local BLS keys. See the [Backup and Restore](/docs/nodes/maintain/backup-restore) guide for instructions on backing up your `signer.key` file.

Traditional backups are simpler but lack the security benefits of hardware-backed signing.

## Next Steps

- [Monitor your node](/docs/nodes/maintain/monitoring) to ensure signing operations are working correctly
- [Upgrade AvalancheGo](/docs/nodes/maintain/upgrade) when new versions are released
- [Learn about Avalanche L1 validators](/docs/avalanche-l1s) if you're validating additional Avalanche L1s

## Resources

- [CubeSigner Documentation](https://docs.cubist.dev/)
- [CubeSigner for Validators](https://cubist.dev/cubesigner-hardware-backed-remote-signing-for-validator-infrastructure)
- [cube-signer-sidecar GitHub Repository](https://github.com/ava-labs/cube-signer-sidecar)
- [AvalancheGo Release Notes (v1.13.4)](https://github.com/ava-labs/avalanchego/releases/tag/v1.13.4)

# Enroll in Avalanche Notify (/docs/nodes/maintain/enroll-in-avalanche-notify)

---
title: Enroll in Avalanche Notify
---

To receive email alerts if a validator becomes unresponsive or out-of-date, sign up with the Avalanche Notify tool: [http://notify.avax.network](http://notify.avax.network/).

Avalanche Notify is an active monitoring system that checks a validator's responsiveness each minute. An email alert is sent if a validator is down for 5 consecutive checks and when a validator recovers (is responsive for 5 checks in a row).

> When signing up for email alerts, consider using a new, alias, or auto-forwarding email address to protect your privacy. Otherwise, it will be possible to link your NodeID to your email.
This tool is currently in BETA and validator alerts may erroneously be triggered, not triggered, or delayed. The best way to maximize the likelihood of earning staking rewards is to run redundant monitoring/alerting. # Monitoring (/docs/nodes/maintain/monitoring) --- title: Monitoring description: Learn how to monitor an AvalancheGo node. --- This tutorial demonstrates how to set up infrastructure to monitor an instance of [AvalancheGo](https://github.com/ava-labs/avalanchego). We will use: - [Prometheus](https://prometheus.io/) to gather and store data - [`node_exporter`](https://github.com/prometheus/node_exporter) to get information about the machine, - AvalancheGo's [Metrics API](/docs/api-reference/metrics-api) to get information about the node - [Grafana](https://grafana.com/) to visualize data on a dashboard. - A set of pre-made [Avalanche dashboards](https://github.com/ava-labs/avalanche-monitoring/tree/main/grafana/dashboards) ## Prerequisites: - A running AvalancheGo node - Shell access to the machine running the node - Administrator privileges on the machine This tutorial assumes you have Ubuntu 20.04 or later running on your node. Other Linux flavors that use `systemd` for running services and `apt-get` for package management might work but have not been tested. Community members have reported it works on Debian 10 and later versions. ### Caveat: Security The system as described here **should not** be opened to the public internet. Neither Prometheus nor Grafana as shown here is hardened against unauthorized access. Make sure that both of them are accessible only over a secured proxy, local network, or VPN. Setting that up is beyond the scope of this tutorial, but exercise caution. Bad security practices could lead to attackers gaining control over your node! It is your responsibility to follow proper security practices. 
Monitoring Installer Script[​](#monitoring-installer-script "Direct link to heading")
-------------------------------------------------------------------------------------

In order to make node monitoring easier to install, we have made a script that does most of the work for you. To download and run the script, log into the machine the node runs on with a user that has administrator privileges and enter the following command:

```bash
wget -nd -m https://raw.githubusercontent.com/ava-labs/avalanche-monitoring/main/grafana/monitoring-installer.sh ;\
chmod 755 monitoring-installer.sh;
```

This will download the script and make it executable. The script itself is run multiple times with different arguments, each installing a different tool or part of the environment. To make sure it downloaded and set up correctly, begin by running:

```bash
./monitoring-installer.sh --help
```

It should display:

```bash
Usage: ./monitoring-installer.sh [--1|--2|--3|--4|--5|--help]

Options:
   --help   Shows this message
   --1      Step 1: Installs Prometheus
   --2      Step 2: Installs Grafana
   --3      Step 3: Installs node_exporter
   --4      Step 4: Installs AvalancheGo Grafana dashboards
   --5      Step 5: (Optional) Installs additional dashboards

Run without any options, script will download and install latest version of AvalancheGo dashboards.
```

Let's get to it.

Step 1: Set up Prometheus [​](#step-1-set-up-prometheus- "Direct link to heading")
----------------------------------------------------------------------------------

Run the script to execute the first step:

```bash
./monitoring-installer.sh --1
```

It should produce output something like this:

```bash
AvalancheGo monitoring installer
--------------------------------
STEP 1: Installing Prometheus

Checking environment...
Found arm64 architecture...
Prometheus install archive found: https://github.com/prometheus/prometheus/releases/download/v3.x.x/prometheus-3.x.x.linux-arm64.tar.gz
Attempting to download...
prometheus.tar.gz   100%[=========================>]  70.2M   120MB/s    in 0.6s
...
```

The script automatically downloads the latest Prometheus release for your architecture. You may be prompted to confirm additional package installs; do so if asked.

The script run should end with instructions on how to check that Prometheus installed correctly. Let's do that; run:

```bash
sudo systemctl status prometheus
```

It should output something like:

```bash
● prometheus.service - Prometheus
     Loaded: loaded (/etc/systemd/system/prometheus.service; enabled; vendor preset: enabled)
     Active: active (running) since Fri 2021-11-12 11:38:32 UTC; 17min ago
       Docs: https://prometheus.io/docs/introduction/overview/
   Main PID: 548 (prometheus)
      Tasks: 10 (limit: 9300)
     Memory: 95.6M
     CGroup: /system.slice/prometheus.service
             └─548 /usr/local/bin/prometheus --config.file=/etc/prometheus/prometheus.yml --storage.tsdb.path=/var/lib/prometheus --web.console.templates=/etc/prometheus/con>

Nov 12 11:38:33 ip-172-31-36-200 prometheus[548]: ts=2021-11-12T11:38:33.644Z caller=head.go:590 level=info component=tsdb msg="WAL segment loaded" segment=81 maxSegment=84
Nov 12 11:38:33 ip-172-31-36-200 prometheus[548]: ts=2021-11-12T11:38:33.773Z caller=head.go:590 level=info component=tsdb msg="WAL segment loaded" segment=82 maxSegment=84
```

Note the `active (running)` status (press `q` to exit). You can also check the Prometheus web interface, available at `http://your-node-host-ip:9090/`.

You may need to do `sudo ufw allow 9090/tcp` if the firewall is on, and/or adjust the security settings to allow connections to port 9090 if the node is running on a cloud instance. For AWS, you can look it up [here](/docs/nodes/run-a-node/on-third-party-services/amazon-web-services#create-a-security-group). If on the public internet, make sure to only allow your IP to connect!

If everything is OK, let's move on.
Step 2: Install Grafana [​](#step-2-install-grafana- "Direct link to heading")
------------------------------------------------------------------------------

Run the script to execute the second step:

```bash
./monitoring-installer.sh --2
```

It should produce output something like this:

```bash
AvalancheGo monitoring installer
--------------------------------
STEP 2: Installing Grafana

OK
deb https://packages.grafana.com/oss/deb stable main
Hit:1 http://us-east-2.ec2.ports.ubuntu.com/ubuntu-ports focal InRelease
Get:2 http://us-east-2.ec2.ports.ubuntu.com/ubuntu-ports focal-updates InRelease [114 kB]
Get:3 http://us-east-2.ec2.ports.ubuntu.com/ubuntu-ports focal-backports InRelease [101 kB]
Hit:4 http://ppa.launchpad.net/longsleep/golang-backports/ubuntu focal InRelease
Get:5 http://ports.ubuntu.com/ubuntu-ports focal-security InRelease [114 kB]
Get:6 https://packages.grafana.com/oss/deb stable InRelease [12.1 kB]
...
```

To make sure it's running properly:

```bash
sudo systemctl status grafana-server
```

which should again show Grafana as `active`.

Grafana should now be available at `http://your-node-host-ip:3000/` from your browser. Log in with username: admin, password: admin, and you will be prompted to set up a new, secure password. Do that.

You may need to do `sudo ufw allow 3000/tcp` if the firewall is on, and/or adjust the cloud instance settings to allow connections to port 3000. If on the public internet, make sure to only allow your IP to connect!

Prometheus and Grafana are now installed; we're ready for the next step.

Step 3: Set up `node_exporter` [​](#step-3-set-up-node_exporter- "Direct link to heading")
------------------------------------------------------------------------------------------

In addition to metrics from AvalancheGo, let's set up monitoring of the machine itself, so we can check CPU, memory, network and disk usage and be aware of any anomalies. For that, we will use `node_exporter`, a Prometheus plugin.
Run the script to execute the third step: ```bash ./monitoring-installer.sh --3 ``` The output should look something like this: ```bash AvalancheGo monitoring installer -------------------------------- STEP 3: Installing node_exporter Checking environment... Found arm64 architecture... Downloading archive... https://github.com/prometheus/node_exporter/releases/download/v1.x.x/node_exporter-1.x.x.linux-arm64.tar.gz node_exporter.tar.gz 100%[=========================>] 10.2M --.-KB/s in 0.1s ... ``` The script automatically downloads the latest node_exporter release for your architecture. Again, we check that the service is running correctly: ```bash sudo systemctl status node_exporter ``` If the service is running, Prometheus, Grafana and `node_exporter` should all work together now. To check, in your browser visit Prometheus web interface on `http://your-node-host-ip:9090/targets`. You should see three targets enabled: - Prometheus - AvalancheGo - `avalanchego-machine` Make sure that all of them have `State` as `UP`. If you run your AvalancheGo node with TLS enabled on your API port, you will need to manually edit the `/etc/prometheus/prometheus.yml` file and change the `avalanchego` job to look like this: ```yml - job_name: "avalanchego" metrics_path: "/ext/metrics" scheme: "https" tls_config: insecure_skip_verify: true static_configs: - targets: ["localhost:9650"] ``` Mind the spacing (leading spaces too)! You will need admin privileges to do that (use `sudo`). Restart Prometheus service afterwards with `sudo systemctl restart prometheus`. All that's left to do now is to provision the data source and install the actual dashboards that will show us the data. 
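Besides checking the targets page in the browser, the same health information is exposed by Prometheus' HTTP API at `/api/v1/targets`. The sketch below assumes the default port 9090; the `count_up_targets` helper is illustrative and uses plain `grep`, so it works without extra tools like `jq`:

```shell
# Illustrative helper: count targets reported healthy in a /api/v1/targets response.
count_up_targets() {
  echo "$1" | grep -o '"health":"up"' | wc -l
}

# TARGETS=$(curl -s http://127.0.0.1:9090/api/v1/targets)
# count_up_targets "$TARGETS"   # 3 once Prometheus, AvalancheGo, and node_exporter are all UP
```

A count lower than the number of configured jobs means at least one scrape target is down and worth investigating.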
Step 4: Dashboards [​](#step-4-dashboards- "Direct link to heading") -------------------------------------------------------------------- Run the script to install the dashboards: ```bash ./monitoring-installer.sh --4 ``` It will produce output showing download progress for each dashboard: ```bash AvalancheGo monitoring installer -------------------------------- Downloading... c_chain.json 100%[=========================>] ... database.json 100%[=========================>] ... machine.json 100%[=========================>] ... main.json 100%[=========================>] ... network.json 100%[=========================>] ... p_chain.json 100%[=========================>] ... x_chain.json 100%[=========================>] ... ... ``` The script downloads the following core dashboards: - **Avalanche Main Dashboard** - Overview of key node metrics - **C-Chain** - C-Chain specific metrics and performance - **Database** - Database operations and performance metrics - **Machine Metrics** - System metrics (CPU, memory, disk, network) - **Network** - Network connectivity and peer metrics - **P-Chain** - P-Chain specific metrics - **X-Chain** - X-Chain specific metrics This will download the latest versions of the dashboards from GitHub and provision Grafana to load them, as well as defining Prometheus as a data source. It may take up to 30 seconds for the dashboards to show up. In your browser, go to: `http://your-node-host-ip:3000/dashboards`. You should see 7 Avalanche dashboards: ![Imported dashboards](/images/monitoring1.png) After completing Step 5 (optional), you will have 8 dashboards including the Avalanche L1s dashboard. Select 'Avalanche Main Dashboard' by clicking its title. It should load, and look similar to this: ![Main Dashboard](/images/monitoring2.png) Some graphs may take some time to populate fully, as they need a series of data points in order to render correctly. 
You can bookmark the main dashboard as it shows the most important information about the node at a glance. Every dashboard has a link to all the others as the first row, so you can move between them easily.

Step 5: Additional Dashboards (Optional)[​](#step-5-additional-dashboards-optional "Direct link to heading")
------------------------------------------------------------------------------------------------------------

Step 4 installs the basic set of dashboards that make sense to have on any node. Step 5 is for installing additional dashboards that may not be useful for every installation.

Currently, there is only one additional dashboard: Avalanche L1s. If your node is running any Avalanche L1s, you may want to add this one as well. Run:

```bash
./monitoring-installer.sh --5
```

This will add the Avalanche L1s dashboard. It allows you to monitor operational data for any Avalanche L1 that is synced on the node, with a switcher for moving between different Avalanche L1s. As there are many Avalanche L1s and not every node will have all of them, by default it comes populated only with the Spaces and WAGMI Avalanche L1s that exist on the Fuji testnet:

![Avalanche L1s switcher](/images/monitoring3.png)

To configure the dashboard and add any Avalanche L1s that your node is syncing, you will need to edit the dashboard. Select the `dashboard settings` icon (image of a cog) in the upper right corner of the dashboard display, switch to the `Variables` section, and select the `subnet` variable. It should look something like this:

![Variables screen](/images/monitoring4.png)

The variable format is:

```bash
L1 name: <blockchainID>
```

and the separator between entries is a comma.
Entries for Spaces and WAGMI look like:

```bash
Spaces (Fuji) : 2ebCneCbwthjQ1rYT41nhd7M76Hc6YmosMAQrTFhBq8qeqh6tt,
WAGMI (Fuji) : 2AM3vsuLoJdGBGqX2ibE8RGEq4Lg7g4bot6BT1Z7B9dH5corUD
```

The dashboard variable is still named `subnet` for backward compatibility, but it represents Avalanche L1 blockchains.

After editing the values, press `Update`, then click the `Save dashboard` button and confirm. Press the back arrow in the upper left corner to return to the dashboard. New values should now be selectable from the dropdown, and data for the selected Avalanche L1 will be shown in the panels.

Updating[​](#updating "Direct link to heading")
-----------------------------------------------

Available node metrics are updated constantly; new ones are added and obsolete ones removed. It is therefore good practice to update the dashboards from time to time, especially if you notice any missing data in panels. Updating the dashboards is easy: just run the script with no arguments, and it will refresh the dashboards with the latest available versions. Allow up to 30 seconds for the dashboards to update in Grafana.

```bash
./monitoring-installer.sh
```

If you added the optional extra dashboards (step 5), they will be updated as well.

If you're experiencing broken or missing metrics in your dashboards, running an update is the recommended first step. The dashboards are regularly updated with new metrics and fixes. Recent updates include MeterVM metrics for C-Chain, improved trie operation metrics, and consolidated chain dashboards.
Advanced Dashboards[​](#advanced-dashboards "Direct link to heading") --------------------------------------------------------------------- The [avalanche-monitoring repository](https://github.com/ava-labs/avalanche-monitoring/tree/main/grafana/dashboards) contains additional dashboards that are not installed by the script but can be manually imported into Grafana: ### C-Chain Load Dashboard The `c_chain_load.json` dashboard is designed for load testing and monitoring C-Chain transaction throughput. It tracks: - Issued transactions - Confirmed transactions - Failed transactions - In-flight transactions (pending confirmation) To install, download the dashboard JSON from the repository and import it via Grafana's dashboard import feature (Dashboards → Import → Upload JSON file). ### Logs Dashboard (Requires Loki) The `logs.json` dashboard provides log aggregation and visualization using [Loki](https://grafana.com/oss/loki/). This dashboard requires a separate Loki installation and configuration. Features include: - Real-time log search - Timeline visualization of log volume - Ad-hoc filtering capabilities To use this dashboard, you must: 1. Install and configure Loki to collect AvalancheGo logs 2. Add Loki as a data source in Grafana 3. Import the `logs.json` dashboard from the repository Summary[​](#summary "Direct link to heading") --------------------------------------------- Using the script to install node monitoring is easy, and it gives you insight into how your node is behaving and what's going on under the hood. Also, pretty graphs! If you have feedback on this tutorial, problems with the script or following the steps, send us a message on [Discord](https://chat.avalabs.org/). 
# Run Avalanche Node in Background (/docs/nodes/maintain/run-as-background-service)

---
title: Run Avalanche Node in Background
---

This page demonstrates how to set up an `avalanchego.service` file to enable a manually deployed validator node to run in the background on a server instead of directly in the terminal. Make sure that AvalancheGo is already installed on your machine.

Steps[​](#steps "Direct link to heading")
-----------------------------------------

### Fuji Testnet Config[​](#fuji-testnet-config "Direct link to heading")

Run this command in your terminal to create the `avalanchego.service` file:

```bash
sudo nano /etc/systemd/system/avalanchego.service
```

Paste the following configuration into the `avalanchego.service` file.

Remember to modify the values of:

- _**User=**_
- _**Group=**_
- _**WorkingDirectory=**_
- _**ExecStart=**_

to match the configuration on your server:

```ini
[Unit]
Description=Avalanche Node service
After=network.target

[Service]
User=YourUserHere
Group=YourUserHere
Restart=always
PrivateTmp=true
TimeoutStopSec=60s
TimeoutStartSec=10s
StartLimitInterval=120s
StartLimitBurst=5
WorkingDirectory=/Your/Path/To/avalanchego
ExecStart=/Your/Path/To/avalanchego/avalanchego \
    --network-id=fuji \
    --api-metrics-enabled=true

[Install]
WantedBy=multi-user.target
```

Press **Ctrl + X** then **Y** then **Enter** to save and exit.
Now, run:

```bash
sudo systemctl daemon-reload
```

### Mainnet Config[​](#mainnet-config "Direct link to heading")

Run this command in your terminal to create the `avalanchego.service` file:

```bash
sudo nano /etc/systemd/system/avalanchego.service
```

Paste the following configuration into the `avalanchego.service` file:

```ini
[Unit]
Description=Avalanche Node service
After=network.target

[Service]
User=YourUserHere
Group=YourUserHere
Restart=always
PrivateTmp=true
TimeoutStopSec=60s
TimeoutStartSec=10s
StartLimitInterval=120s
StartLimitBurst=5
WorkingDirectory=/Your/Path/To/avalanchego
ExecStart=/Your/Path/To/avalanchego/avalanchego \
    --api-metrics-enabled=true

[Install]
WantedBy=multi-user.target
```

Press **Ctrl + X** then **Y** then **Enter** to save and exit.

Now, run:

```bash
sudo systemctl daemon-reload
```

Start the Node[​](#start-the-node "Direct link to heading")
-----------------------------------------------------------

To make your node start automatically after a reboot, run:

```bash
sudo systemctl enable avalanchego
```

To start the node, run:

```bash
sudo systemctl start avalanchego
sudo systemctl status avalanchego
```

Output:

```bash
socopower@avalanche-node-01:~$ sudo systemctl status avalanchego
● avalanchego.service - Avalanche Node service
     Loaded: loaded (/etc/systemd/system/avalanchego.service; enabled; vendor p>
     Active: active (running) since Tue 2023-08-29 23:14:45 UTC; 5h 46min ago
   Main PID: 2226 (avalanchego)
      Tasks: 27 (limit: 38489)
     Memory: 8.7G
        CPU: 5h 50min 31.165s
     CGroup: /system.slice/avalanchego.service
             └─2226 /usr/local/bin/avalanchego/./avalanchego --network-id=fuji

Aug 30 03:02:50 avalanche-node-01 avalanchego[2226]: INFO [08-30|03:02:50.685] >
Aug 30 03:02:51 avalanche-node-01 avalanchego[2226]: INFO [08-30|03:02:51.185] >
Aug 30 03:03:09 avalanche-node-01 avalanchego[2226]: [08-30|03:03:09.380] INFO >
Aug 30 03:03:23 avalanche-node-01 avalanchego[2226]: [08-30|03:03:23.983] INFO >
Aug 30 03:05:15
avalanche-node-01 avalanchego[2226]: [08-30|03:05:15.192] INFO >
Aug 30 03:05:15 avalanche-node-01 avalanchego[2226]: [08-30|03:05:15.237] INFO >
Aug 30 03:05:15 avalanche-node-01 avalanchego[2226]: [08-30|03:05:15.238] INFO >
Aug 30 03:05:19 avalanche-node-01 avalanchego[2226]: [08-30|03:05:19.809] INFO >
Aug 30 03:05:19 avalanche-node-01 avalanchego[2226]: [08-30|03:05:19.809] INFO >
Aug 30 05:00:47 avalanche-node-01 avalanchego[2226]: [08-30|05:00:47.001] INFO
```

To see the synchronization process, you can run the following command:

```bash
sudo journalctl -fu avalanchego
```

# Upgrade Your AvalancheGo Node (/docs/nodes/maintain/upgrade)
---
title: Upgrade Your AvalancheGo Node
---

Backup Your Node[​](#backup-your-node "Direct link to heading")
---------------------------------------------------------------

Before upgrading your node, it is recommended that you back up the staker files which are used to identify your node on the network. In the default installation, you can copy them by running the following commands:

```bash
cd
cp ~/.avalanchego/staking/staker.crt .
cp ~/.avalanchego/staking/staker.key .
```

Then download the `staker.crt` and `staker.key` files and keep them somewhere safe and private. If anything happens to your node or the machine the node runs on, these files can be used to fully recreate your node.

If you use your node for development purposes and have keystore users on your node, you should back up those too.

Node Installed Using the Installer Script[​](#node-installed-using-the-installer-script "Direct link to heading")
-----------------------------------------------------------------------------------------------------------------

If you installed your node using the [installer script](/docs/nodes/run-a-node/using-install-script/installing-avalanche-go), to upgrade your node, just run the installer script again.
```bash
./avalanchego-installer.sh
```

It will detect that you already have AvalancheGo installed:

```bash
AvalancheGo installer
---------------------
Preparing environment...
Found 64bit Intel/AMD architecture...
Found AvalancheGo systemd service already installed, switching to upgrade mode.
Stopping service...
```

It will then upgrade your node to the latest version, and after it's done, start the node back up and print out information about the latest version:

```bash
Node upgraded, starting service...
New node version: avalanche/1.1.1 [network=mainnet, database=v1.0.0, commit=f76f1fd5f99736cf468413bbac158d6626f712d2]
Done!
```

And that's it: your node is upgraded to the latest version. If you installed your node manually, proceed with the rest of the tutorial.

Stop the Old Node Version[​](#stop-the-old-node-version "Direct link to heading")
---------------------------------------------------------------------------------

After the backup is secured, you may start upgrading your node. Begin by stopping the currently running version.

### Node Running from Terminal[​](#node-running-from-terminal "Direct link to heading")

If your node is running in a terminal, stop it by pressing `ctrl+c`.

### Node Running as a Service[​](#node-running-as-a-service "Direct link to heading")

If your node is running as a service, stop it by entering: `sudo systemctl stop avalanchego.service` (your service may be named differently, `avalanche.service`, or similar).

### Node Running in Background[​](#node-running-in-background "Direct link to heading")

If your node is running in the background (by running with `nohup`, for example), then find the process running the node by running `ps aux | grep avalanche`. This will produce output like:

```bash
ubuntu  6834  0.0  0.0   2828   676 pts/1 S+  19:54 0:00 grep avalanche
ubuntu  2630 26.1  9.4 2459236 753316 ?   Sl  Dec02 1220:52 /home/ubuntu/build/avalanchego
```

In this example, the second line shows information about your node.
Note the process ID, in this case, `2630`. Stop the node by running `kill -2 2630`.

Now we are ready to download the new version of the node. You can either download the source code and then build the binary program, or you can download the pre-built binary. You don't need to do both.

Downloading the pre-built binary is easier and recommended if you're just looking to run your own node and stake on it. Building the node [from source](/docs/nodes/maintain/upgrade#build-from-source) is recommended if you're a developer looking to experiment and build on Avalanche.

Download Pre-Built Binary[​](#download-pre-built-binary "Direct link to heading")
---------------------------------------------------------------------------------

If you want to download a pre-built binary instead of building it yourself, go to our [releases page](https://github.com/ava-labs/avalanchego/releases), and select the release you want (probably the latest one).

If you have a node, you can subscribe to the [Avalanche Notify service](/docs/nodes/maintain/enroll-in-avalanche-notify) with your node ID to be notified about new releases. In addition, or if you don't have a node ID, you can get release notifications from GitHub. To do so, go to our [repository](https://github.com/ava-labs/avalanchego) and look in the top-right corner for the **Watch** option. After you click on it, select **Custom**, and then **Releases**. Press **Apply** and it is done.

Under `Assets`, select the appropriate file.

For MacOS:
Download: `avalanchego-macos-.zip`
Unzip: `unzip avalanchego-macos-.zip`
The resulting folder, `avalanchego-`, contains the binaries.

For Linux on PCs or cloud providers:
Download: `avalanchego-linux-amd64-.tar.gz`
Unzip: `tar -xvf avalanchego-linux-amd64-.tar.gz`
The resulting folder, `avalanchego--linux`, contains the binaries.
For Linux on Arm64-based computers: Download: `avalanchego-linux-arm64-.tar.gz` Unzip: `tar -xvf avalanchego-linux-arm64-.tar.gz` The resulting folder, `avalanchego--linux`, contains the binaries. You are now ready to run the new version of the node. ### Running the Node from Terminal[​](#running-the-node-from-terminal "Direct link to heading") If you are using the pre-built binaries on MacOS: ```bash ./avalanchego-/build/avalanchego ``` If you are using the pre-built binaries on Linux: ```bash ./avalanchego--linux/avalanchego ``` Add `nohup` at the start of the command if you want to run the node in the background. ### Running the Node as a Service[​](#running-the-node-as-a-service "Direct link to heading") If you're running the node as a service, you need to replace the old binaries with the new ones. ```bash cp -r avalanchego--linux/* ``` and then restart the service with: `sudo systemctl start avalanchego.service`. Build from Source[​](#build-from-source "Direct link to heading") ----------------------------------------------------------------- First clone our GitHub repo (you can skip this step if you've done this before): ```bash git clone https://github.com/ava-labs/avalanchego.git ``` The repository cloning method used is HTTPS, but SSH can be used too: `git clone git@github.com:ava-labs/avalanchego.git` You can find more about SSH and how to use it [here](https://docs.github.com/en/authentication/connecting-to-github-with-ssh/about-ssh). Then move to the AvalancheGo directory: ```bash cd avalanchego ``` Pull the latest code: ```bash git pull ``` If the master branch has not been updated with the latest release tag, you can get to it directly via first running `git fetch --all --tags` and then `git checkout --force tags/` (where `` is the latest release tag; for example `v1.3.2`) instead of `git pull`. 
Note that your local copy will be in a 'detached HEAD' state, which is not an issue if you do not make changes to the source that you want to push back to the repository (in which case you should check out a branch and use ordinary merges). Note also that the `--force` flag will disregard any local changes you might have.

Check that your local code is up to date. Do:

```bash
git rev-parse HEAD
```

and check that the first 7 characters printed match the Latest commit field on our [GitHub](https://github.com/ava-labs/avalanchego). If you used `git checkout tags/`, then these first 7 characters should match the commit hash of that tag.

Now build the binary:

```bash
./scripts/build.sh
```

This should print: `Build Successful`

You can check what version you're running by doing:

```bash
./build/avalanchego --version
```

You can run your node with:

```bash
./build/avalanchego
```

# Chain State Management (/docs/nodes/node-storage/chain-state-management)
---
title: Chain State Management
description: Understanding active state vs archival state in EVM chains, and node configuration options.
---

When running an EVM-based blockchain (C-Chain or Subnet-EVM L1s), your node stores blockchain state on disk. Understanding the difference between **active state** and **archival state** is crucial for choosing the right configuration.

## State Sync

State sync is a method of bootstrapping a node by syncing from a state sync snapshot instead of replaying all historical blocks. Instead of downloading and replaying all transactions in all blocks since genesis, the node downloads only the latest result of those transactions from other validators. This is a faster way to bootstrap a node and is recommended for new validator nodes that do not require archival state.

State sync is enabled by default for the C-Chain.
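In chain-config terms this is a single flag. For example, to opt out of state sync on the C-Chain (say, for a node that must full-sync for archival state), a sketch of `~/.avalanchego/configs/chains/C/config.json`:

```json
{
  "state-sync-enabled": false
}
```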
For Avalanche L1s, you can configure it per-chain:

- **C-Chain configuration**: See [C-Chain Config](/docs/nodes/chain-configs/primary-network/c-chain#state-sync-enabled)
- **Avalanche L1 configuration**: See [Subnet-EVM Config](/docs/nodes/chain-configs/avalanche-l1s/subnet-evm#state-sync-enabled)

To provide this feature, all Avalanche nodes need to store state sync snapshots every 4,000 blocks. This requires additional disk space.

## State Types

Your node's storage requirements depend on which type of state you're maintaining:

| Property | Active State | Active State with Snapshots | Archival State |
|----------|--------------|------------------------------|----------------|
| **Size (C-Chain)** | ~500 GB | ~750 GB - 1 TB | ~13 TB+ (and growing) |
| **Contents** | Current account balances, contract storage, code | Active state + state sync snapshots for serving peers | Complete state history at every block |
| **Required for** | Validating, sending transactions, reading current state | Same as Active State, helps other nodes bootstrap | Historical queries at any block height, block explorers, analytics |
| **Sync method** | State sync (fast, hours) | State sync, then grows over time | Full sync from genesis (slow, days) |
| **Maintenance** | Periodic state sync snapshot deletion or resync recommended | Periodic pruning or resync recommended | None needed (intentional full history) |

![State Growth Visualization](/images/State_growth_visualization.png)

### Archival State (gray line)

The **archival state** includes the complete history of all state changes since genesis. This allows querying historical state at any block height (e.g., "What was this account's balance at block 1,000,000?").

Archive nodes are typically only required for block explorers, indexers, and specialized analytics applications. Their disk usage grows fastest over time. Most validators and RPC nodes only need **active state**.
Archive nodes are specialized infrastructure for historical data access.

### Active State (black line)

The **active state** represents the current state of the blockchain: all account balances, contract storage, and code as of the latest block. This is what your node needs to validate new transactions and participate in consensus. When you bootstrap with state sync, you start with just the active state. Freshly state-synced nodes will only have the active state.

### Active State with State Sync Snapshots (red line)

Nodes configured with `pruning-enabled: true` do not accumulate the full historical state; they start with just the active state and then retain state sync snapshots over time. As blocks are processed, a state sync snapshot is kept every 4,000 blocks for serving other nodes that want to bootstrap via state sync. This causes disk usage to grow beyond the active state size. Most long-running validators operate in this state.

[Firewood](https://github.com/ava-labs/firewood) is an upcoming database upgrade that will address the issue of state growing too large. This next-generation storage layer is designed to efficiently manage state growth and reduce disk space requirements for node operators.

### Active State with periodic snapshot deletion (green line)

Nodes that perform some manual maintenance can reduce their storage requirements by removing state sync snapshots. This can be achieved by periodically deleting the state sync snapshots or by replacing the node with a freshly state-synced node.

## Monitoring Disk Usage

Track your node's disk usage over time to plan maintenance:

```bash
# Check database size
du -sh ~/.avalanchego/db

# Check available disk space
df -h /
```

Consider setting up alerts when disk usage exceeds 80% to give yourself time to plan maintenance.
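That alert can be as simple as a cron job. A minimal sketch, assuming a GNU `df` and the 80% threshold above (replace the mount point with the filesystem that holds `~/.avalanchego/db`):

```shell
# Warn when the filesystem holding the node database crosses the threshold.
threshold=80
mount_point=/   # replace with the filesystem holding ~/.avalanchego/db
usage=$(df --output=pcent "$mount_point" | tail -n 1 | tr -dc '0-9')
if [ "$usage" -gt "$threshold" ]; then
  echo "WARNING: ${mount_point} is ${usage}% full, plan pruning or a fresh state sync"
fi
```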
## State Growth Rates Even with the same configuration, different types of state grow at different rates: | Growth Type | Rate | Description | |-------------|------|-------------| | Archival state | ~500 GB/month | Complete history stored at every block | | Active state + snapshots | ~150-200 GB/month | Active state + Snapshots every 4000 blocks for serving peers | | Active state | Minimal or <10 GB/month | Current blockchain state only (with pruning enabled) | State sync snapshots are retained to help other nodes bootstrap. Even if you don't need archival state, these snapshots accumulate over time and increase disk usage. ## Node Configuration Matrix Your node's final state depends on two factors: **how you bootstrap** and **whether pruning is enabled**. | Bootstrap Method | Pruning Disabled | Pruning Enabled | |------------------|------------------|-----------------| | **State Sync** | Active + Snapshots (~1TB)
(to get full archival state, you must do a **full sync from genesis**) | Active State only (~500GB) |
| **Full Sync** | Full Archival (~13TB+) | N/A |

## Choosing Your Configuration

| Use Case | Bootstrap | Pruning | Result | Disk Size |
|----------|-----------|---------|--------|-----------|
| Validator | State Sync | Periodic | Active state, minimal disk | ~500 GB |
| Standard RPC | State Sync | Optional | Current state queries | ~500 GB - 1 TB |
| Archival RPC | State Sync | Disabled | Full state after sync point | ~750 GB - 1 TB |
| Block Explorer / Indexer | Full Sync | Disabled | Complete archival history | ~12.5 TB+ |

**Archival RPC vs Block Explorer**: An archival RPC started via state sync can answer queries from the sync point forward. For complete historical queries from genesis, you need a full sync.

# Periodic State Sync (/docs/nodes/node-storage/periodic-state-sync)
---
title: Periodic State Sync
description: Instructions for performing a periodic state sync.
---

By bootstrapping a new node via state sync and transferring your validator identity, you can reduce disk usage with zero downtime.

| Pros | Cons |
|------|------|
| No downtime for validator | Needs separate machine to bootstrap |
| Fresh, clean database | Network bandwidth for sync |
| No bloom filter disk overhead | More complex multi-step process |

If you don't have access to a separate machine, you can also do a [state sync snapshot deletion](/docs/nodes/node-storage/state-sync-snapshot-deletion) instead.

### How It Works

This works because your validator identity is determined by cryptographic keys in the staking directory, not the database. Your validator identity consists of three key files in `~/.avalanchego/staking/`:

- **staker.crt** - TLS certificate (determines your Node ID)
- **staker.key** - TLS private key (for encrypted P2P communication)
- **signer.key** - BLS signing key (for consensus signatures)

These files define your validator identity.
The Node ID shown on the P-Chain is cryptographically derived from `staker.crt`, so copying these files transfers your complete validator identity.

![Node Replacement Process](/images/State_sync_new_node.png)

The diagram shows the process: stop the old node, let a new node state sync, then transfer the staking keys to continue validating with a fresh database.

![Frequent Pruning Pattern](/images/State_sync_frequent_pruning.png)

### Step-by-Step Process

## Save the Node ID of the old validator

Note down the old validator's Node ID so you can later verify that the new node reports the same one:

```bash
# On old validator
curl -X POST --data '{
    "jsonrpc":"2.0",
    "id"     :1,
    "method" :"info.getNodeID"
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info
```

## Provision a new server with the same or better specs than your current validator

Don't copy the database at `~/.avalanchego/db/`; the new node will state sync a fresh, smaller database from the other nodes.

## Install and configure AvalancheGo

Follow the instructions to set up a new node. If you have custom configuration in `~/.avalanchego/configs/`, copy those files as well to maintain the same node behavior. Make sure that you are not manually disabling state sync in that config file.

## Start and monitor the node state sync

Start the node according to the instructions. State sync is enabled by default in the node configuration. You can monitor the sync progress by checking the `info.isBootstrapped` RPC endpoint:

```bash
# Monitor sync progress (wait until fully synced)
# This may take several hours
curl -X POST --data '{
    "jsonrpc":"2.0",
    "id"     :1,
    "method" :"info.isBootstrapped",
    "params": {
        "chain":"C"
    }
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info
```

## Stop both nodes

Once the state sync has completed, stop both nodes to prepare for the identity transfer. The entire stop → transfer → restart process typically takes 5-15 minutes.
Your validator will miss some blocks during this window, but won't be penalized as long as you're back online before your uptime drops below 80%.

## Back up the new server's auto-generated keys

Backing up the new server's auto-generated keys is optional but recommended:

```bash
# On new server
mv ~/.avalanchego/staking ~/.avalanchego/staking.backup
```

## Transfer the staking keys

Copy the staking directory from your old validator to the new server:

```bash
# From your old validator, copy to new server
scp -r ~/.avalanchego/staking/ user@new-server:~/.avalanchego/

# Or use rsync for better control:
rsync -avz ~/.avalanchego/staking/ user@new-server:~/.avalanchego/staking/
```

## Verify file permissions on the new server

```bash
# On new server
chmod 700 ~/.avalanchego/staking
chmod 400 ~/.avalanchego/staking/staker.key
chmod 400 ~/.avalanchego/staking/staker.crt
chmod 400 ~/.avalanchego/staking/signer.key
chown -R avalanche:avalanche ~/.avalanchego/staking  # If using avalanche user
```

## Start the new node with your validator identity

**Don't run both nodes simultaneously**: Running two nodes with the same staking keys at the same time can cause network issues and potential penalties. Always stop the old node before starting the new one.
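The verification steps that follow use raw `curl`. If you don't have `jq` installed, a small `sed` helper can pull the ID out of the response (a sketch; `extract_node_id` is illustrative and assumes the standard `info.getNodeID` response shape):

```shell
# Read an info.getNodeID JSON response on stdin and print just the Node ID.
extract_node_id() {
  sed -n 's/.*"nodeID" *: *"\([^"]*\)".*/\1/p'
}

# Usage:
#   curl -s -X POST --data '{"jsonrpc":"2.0","id":1,"method":"info.getNodeID"}' \
#     -H 'content-type:application/json;' 127.0.0.1:9650/ext/info | extract_node_id
```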
## Verify the Node ID matches ```bash # On new server - confirm this matches your registered validator Node ID curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"info.getNodeID" }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info ``` ## Monitor for successful validation ```bash # Check if you're validating curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"platform.getCurrentValidators", "params": { "subnetID": null } }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/P ``` # State Sync Snapshot Deletion (Offline Pruning) (/docs/nodes/node-storage/state-sync-snapshot-deletion) --- title: State Sync Snapshot Deletion (Offline Pruning) description: Options for reducing disk usage on non-archival nodes through offline pruning or fresh state sync. --- Removes accumulated state sync snapshots while keeping your database intact. | Pros | Cons | |------|------| | Only need a single node | Need to stop the node | | Preserves transaction index | Downtime required (duration varies) | | No network bandwidth required | Requires temporary disk space for bloom filter | The duration of offline pruning depends on how many state sync snapshots have accumulated since the last pruning. A node pruned regularly may complete quickly, while one never pruned could take significantly longer. If you don't prune regularly, consider doing a [fresh state sync](/docs/nodes/node-storage/periodic-state-sync) instead. ![State Growth Visualization](/images/State_growth_visualization.png) The green line shows a node performing periodic offline pruning. Each black vertical drop represents a pruning event: the node's state drops from "Active + Snapshots" back to just "Active State". Frequent pruning is recommended: it keeps disk usage low and each pruning operation completes faster since there are fewer snapshots to remove. 
### How Offline Pruning Works

Offline pruning is ported from `go-ethereum` to reduce the amount of disk space taken up by the TrieDB (storage for the Merkle Forest).

Offline pruning creates a bloom filter and adds all trie nodes in the active state to the bloom filter to mark the data as protected. This ensures that no part of the active state is removed during offline pruning.

After generating the bloom filter, offline pruning iterates over the database and searches for trie nodes that are safe to remove from disk.

A bloom filter is a probabilistic data structure that reports whether an item is definitely not in a set or possibly in a set. Therefore, for each key we iterate over, we check if it is in the bloom filter. If the key is definitely not in the bloom filter, then it is not in the active state and we can safely delete it. If the key is possibly in the set, then we skip over it to ensure we do not delete any active state.

During iteration, the underlying database (LevelDB) writes deletion markers, causing a temporary increase in disk usage.

After iterating over the database and deleting any old trie nodes that it can, offline pruning then runs compaction to minimize the DB size after the potentially large number of delete operations.

## Stopping the Node

In order to enable offline pruning, you need to stop the node.

## Finding the C-Chain Config File

In order to enable offline pruning, you need to update the C-Chain config file to include the parameters `offline-pruning-enabled` and `offline-pruning-data-directory`.

The default location of the C-Chain config file is `~/.avalanchego/configs/chains/C/config.json`. **Please note that by default, this file does not exist.
You would need to create it manually.** ## Configure Offline Pruning In order to enable offline pruning, update the C-Chain config file to include the following parameters: ```json { "offline-pruning-enabled": true, "offline-pruning-data-directory": "/home/ubuntu/offline-pruning" } ``` This will set `/home/ubuntu/offline-pruning` as the directory to be used by the offline pruner. Offline pruning will store the bloom filter in this location, so you must ensure that the path exists. ## Restart the Node Now that the C-Chain config file has been updated, you can restart your node. Once AvalancheGo starts the C-Chain, you can expect to see update logs from the offline pruner: ```bash INFO [02-09|00:20:15.625] Iterating state snapshot accounts=297,231 slots=6,669,708 elapsed=16.001s eta=1m29.03s INFO [02-09|00:20:23.626] Iterating state snapshot accounts=401,907 slots=10,698,094 elapsed=24.001s eta=1m32.522s INFO [02-09|00:20:31.626] Iterating state snapshot accounts=606,544 slots=13,891,948 elapsed=32.002s eta=1m10.927s ... INFO [02-09|00:21:47.342] Iterated snapshot accounts=1,950,875 slots=49,667,870 elapsed=1m47.718s INFO [02-09|00:21:47.351] Writing state bloom to disk name=/home/ubuntu/offline-pruning/statebloom.0xd6fca36db4b60b34330377040ef6566f6033ed8464731cbb06dc35c8401fa38e.bf.gz INFO [02-09|00:23:04.421] State bloom filter committed name=/home/ubuntu/offline-pruning/statebloom.0xd6fca36db4b60b34330377040ef6566f6033ed8464731cbb06dc35c8401fa38e.bf.gz ``` The bloom filter should be populated and committed to disk after about 5 minutes. At this point, if the node shuts down, it will resume the offline pruning session when it restarts (note: this operation cannot be cancelled). 
## Disable Offline Pruning In order to ensure that users do not mistakenly leave offline pruning enabled for the long term (which could result in an hour of downtime on each restart), we have added a manual protection which requires that after an offline pruning session, the node must be started with offline pruning disabled at least once before it will start with offline pruning enabled again. Therefore, once the bloom filter has been committed to disk, you should update the C-Chain config file to include the following parameters: ```json { "offline-pruning-enabled": false, "offline-pruning-data-directory": "/home/ubuntu/offline-pruning" } ``` It is important to keep the same data directory in the config file, so that the node knows where to look for the bloom filter on a restart if offline pruning has not finished. Now if your node restarts, it will be marked as having correctly disabled offline pruning after the run and be allowed to resume normal operation once offline pruning has finished running. ## Monitor Offline Pruning Progress You will see progress logs throughout the offline pruning run which will indicate the session's progress: ```bash INFO [02-09|00:31:51.920] Pruning state data nodes=40,116,759 size=10.08GiB elapsed=8m47.499s eta=12m50.961s INFO [02-09|00:31:59.921] Pruning state data nodes=41,659,059 size=10.47GiB elapsed=8m55.499s eta=12m13.822s ... INFO [02-09|00:42:45.359] Pruned state data nodes=98,744,430 size=24.82GiB elapsed=19m40.938s INFO [02-09|00:42:45.360] Compacting database range=0x00-0x10 elapsed="2.157µs" ... INFO [02-09|00:59:34.367] Database compaction finished elapsed=16m49.006s INFO [02-09|00:59:34.367] State pruning successful pruned=24.82GiB elapsed=39m34.749s INFO [02-09|00:59:34.367] Completed offline pruning. Re-initializing blockchain. ``` At this point, the node will go into bootstrapping and (once bootstrapping completes) resume consensus and operate as normal. 
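To confirm the C-Chain is serving again after the post-pruning bootstrap, you can poll `info.isBootstrapped`. The `parse_bootstrapped` helper below is an illustrative sketch (it assumes GNU `sed` and the usual response shape):

```shell
# Extract the "isBootstrapped" flag from an info.isBootstrapped response on stdin.
parse_bootstrapped() {
  sed -n 's/.*"isBootstrapped" *: *\(true\|false\).*/\1/p'
}

# Usage:
#   curl -s -X POST -H 'content-type:application/json;' --data \
#     '{"jsonrpc":"2.0","id":1,"method":"info.isBootstrapped","params":{"chain":"C"}}' \
#     127.0.0.1:9650/ext/info | parse_bootstrapped
```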
### Disk Space Considerations To ensure the node does not enter an inconsistent state, the bloom filter used for pruning is persisted to `offline-pruning-data-directory` for the duration of the operation. This directory should have `offline-pruning-bloom-filter-size` available in disk space (default 512 MB). The underlying database (LevelDB) uses deletion markers (tombstones) to identify newly deleted keys. These markers are temporarily persisted to disk until they are removed during a process known as compaction. This will lead to an increase in disk usage during pruning. If your node runs out of disk space during pruning, you may safely restart the pruning operation. This may succeed as restarting the node triggers compaction. If restarting the pruning operation does not succeed, additional disk space should be provisioned. # Subnet-EVM Configs (/docs/nodes/chain-configs/subnet-evm) --- title: "Subnet-EVM Configs" description: "This page describes the configuration options available for the Subnet-EVM." edit_url: https://github.com/ava-labs/avalanchego/edit/master/graft/subnet-evm/plugin/evm/config/config.md --- # Subnet-EVM Configuration > **Note**: These are the configuration options available in the Subnet-EVM codebase. To set these values, you need to create a configuration file at `~/.avalanchego/configs/chains//config.json`. > > For the AvalancheGo node configuration options, see the AvalancheGo Configuration page. This document describes all configuration options available for Subnet-EVM. ## Example Configuration ```json { "eth-apis": ["eth", "eth-filter", "net", "web3"], "pruning-enabled": true, "commit-interval": 4096, "trie-clean-cache": 512, "trie-dirty-cache": 512, "snapshot-cache": 256, "rpc-gas-cap": 50000000, "log-level": "info", "metrics-expensive-enabled": true, "continuous-profiler-dir": "./profiles", "state-sync-enabled": false, "accepted-cache-size": 32 } ``` ## Configuration Format Configuration is provided as a JSON object. 
All fields are optional unless otherwise specified. ## API Configuration ### Ethereum APIs | Option | Type | Description | Default | |--------|------|-------------|---------| | `eth-apis` | array of strings | List of Ethereum services that should be enabled | `["eth", "eth-filter", "net", "web3", "internal-eth", "internal-blockchain", "internal-transaction"]` | ### Subnet-EVM Specific APIs | Option | Type | Description | Default | |--------|------|-------------|---------| | `validators-api-enabled` | bool | Enable the validators API | `true` | | `admin-api-enabled` | bool | Enable the admin API for administrative operations | `false` | | `admin-api-dir` | string | Directory for admin API operations | - | | `warp-api-enabled` | bool | Enable the Warp API for cross-chain messaging | `false` | ### API Limits and Security | Option | Type | Description | Default | |--------|------|-------------|---------| | `rpc-gas-cap` | uint64 | Maximum gas limit for RPC calls | `50,000,000` | | `rpc-tx-fee-cap` | float64 | Maximum transaction fee cap in AVAX | `100` | | `api-max-duration` | duration | Maximum duration for API calls (0 = no limit) | `0` | | `api-max-blocks-per-request` | int64 | Maximum number of blocks per getLogs request (0 = no limit) | `0` | | `http-body-limit` | uint64 | Maximum size of HTTP request bodies | - | | `batch-request-limit` | uint64 | Maximum number of requests that can be batched in an RPC call. For no limit, set either this or `batch-response-max-size` to 0 | `1000` | | `batch-response-max-size` | uint64 | Maximum size (in bytes) of response that can be returned from a batched RPC call. For no limit, set either this or `batch-request-limit` to 0. 
| `25 MB` | ### WebSocket Settings | Option | Type | Description | Default | |--------|------|-------------|---------| | `ws-cpu-refill-rate` | duration | Rate at which WebSocket CPU usage quota is refilled (0 = no limit) | `0` | | `ws-cpu-max-stored` | duration | Maximum stored WebSocket CPU usage quota (0 = no limit) | `0` | ## Cache Configuration ### Trie Caches | Option | Type | Description | Default | |--------|------|-------------|---------| | `trie-clean-cache` | int | Size of the trie clean cache in MB | `512` | | `trie-dirty-cache` | int | Size of the trie dirty cache in MB | `512` | | `trie-dirty-commit-target` | int | Memory limit to target in the dirty cache before performing a commit in MB | `20` | | `trie-prefetcher-parallelism` | int | Maximum concurrent disk reads trie prefetcher should perform | `16` | ### Other Caches | Option | Type | Description | Default | |--------|------|-------------|---------| | `snapshot-cache` | int | Size of the snapshot disk layer clean cache in MB | `256` | | `accepted-cache-size` | int | Depth to keep in the accepted headers and logs cache (blocks) | `32` | | `state-sync-server-trie-cache` | int | Trie cache size for state sync server in MB | `64` | ## Ethereum Settings ### Transaction Processing | Option | Type | Description | Default | |--------|------|-------------|---------| | `preimages-enabled` | bool | Enable preimage recording | `false` | | `allow-unfinalized-queries` | bool | Allow queries for unfinalized blocks | `false` | | `allow-unprotected-txs` | bool | Allow unprotected transactions (without EIP-155) | `false` | | `allow-unprotected-tx-hashes` | array | List of specific transaction hashes allowed to be unprotected | EIP-1820 registry tx | | `local-txs-enabled` | bool | Enable treatment of transactions from local accounts as local | `false` | ### Snapshots | Option | Type | Description | Default | |--------|------|-------------|---------| | `snapshot-wait` | bool | Wait for snapshot
generation on startup | `false` | | `snapshot-verification-enabled` | bool | Enable snapshot verification | `false` | ## Pruning and State Management > **Note**: If a node is ever run with `pruning-enabled` as `false` (archival mode), setting `pruning-enabled` to `true` will result in a warning and the node will shut down. This is to protect against unintentional misconfigurations of an archival node. To override this and switch to pruning mode, in addition to `pruning-enabled: true`, `allow-missing-tries` should be set to `true` as well. ### Basic Pruning | Option | Type | Description | Default | |--------|------|-------------|---------| | `pruning-enabled` | bool | Enable state pruning to save disk space | `true` | | `commit-interval` | uint64 | Interval at which to persist EVM and atomic tries (blocks) | `4096` | | `accepted-queue-limit` | int | Maximum blocks to queue before blocking during acceptance | `64` | ### State Reconstruction | Option | Type | Description | Default | |--------|------|-------------|---------| | `allow-missing-tries` | bool | Suppress warnings about incomplete trie index | `false` | | `populate-missing-tries` | uint64 | Starting block for re-populating missing tries (null = disabled) | `null` | | `populate-missing-tries-parallelism` | int | Concurrent readers for re-populating missing tries | `1024` | ### Offline Pruning > **Note**: If offline pruning is enabled it will run on startup and block until it completes (approximately one hour on Mainnet). This will reduce the size of the database by deleting old trie nodes. **While performing offline pruning, your node will not be able to process blocks and will be considered offline.** While ongoing, the pruning process consumes a small amount of additional disk space (for deletion markers and the bloom filter). For more information see the [disk space considerations documentation](https://build.avax.network/docs/nodes/maintain/reduce-disk-usage#disk-space-considerations). 
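As a concrete illustration of the one-shot flow described in the note above, a chain config for a single offline-pruning run might look like the following; the data directory path is an example, and `offline-pruning-enabled` must be set back to `false` before the next run:

```json
{
  "offline-pruning-enabled": true,
  "offline-pruning-bloom-filter-size": 512,
  "offline-pruning-data-directory": "/path/to/offline-pruning"
}
```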
Since offline pruning deletes old state data, it should not be run on nodes that need to serve archival API requests. Offline pruning is meant to be run manually: set the flag to `true` for one run, then set it back to `false` before starting the node again. | Option | Type | Description | Default | |--------|------|-------------|---------| | `offline-pruning-enabled` | bool | Enable offline pruning | `false` | | `offline-pruning-bloom-filter-size` | uint64 | Bloom filter size for offline pruning in MB | `512` | | `offline-pruning-data-directory` | string | Directory for offline pruning data | - | ### Historical Data | Option | Type | Description | Default | |--------|------|-------------|---------| | `historical-proof-query-window` | uint64 | Number of blocks before last accepted for proof queries (archive mode only, ~24 hours) | `43200` | | `state-history` | uint64 | Number of most recent states that are accessible on disk (pruning mode only) | `32` | ## Transaction Pool Configuration | Option | Type | Description | Default | |--------|------|-------------|---------| | `tx-pool-price-limit` | uint64 | Minimum gas price for transaction acceptance | - | | `tx-pool-price-bump` | uint64 | Minimum price bump percentage for transaction replacement | - | | `tx-pool-account-slots` | uint64 | Maximum number of executable transaction slots per account | - | | `tx-pool-global-slots` | uint64 | Maximum number of executable transaction slots for all accounts | - | | `tx-pool-account-queue` | uint64 | Maximum number of non-executable transaction slots per account | - | | `tx-pool-global-queue` | uint64 | Maximum number of non-executable transaction slots for all accounts | - | | `tx-pool-lifetime` | duration | Maximum time transactions can stay in the pool | - | ## Gossip Configuration ### Push Gossip Settings | Option | Type | Description | Default |
|--------|------|-------------|---------| | `push-gossip-percent-stake` | float64 | Percentage of total stake to push gossip to (range: [0, 1]) | `0.9` | | `push-gossip-num-validators` | int | Number of validators to push gossip to | `100` | | `push-gossip-num-peers` | int | Number of non-validator peers to push gossip to | `0` | ### Regossip Settings | Option | Type | Description | Default | |--------|------|-------------|---------| | `push-regossip-num-validators` | int | Number of validators to regossip to | `10` | | `push-regossip-num-peers` | int | Number of non-validator peers to regossip to | `0` | | `priority-regossip-addresses` | array | Addresses to prioritize for regossip | - | ### Timing Configuration | Option | Type | Description | Default | |--------|------|-------------|---------| | `push-gossip-frequency` | duration | Frequency of push gossip | `100ms` | | `pull-gossip-frequency` | duration | Frequency of pull gossip | `1s` | | `regossip-frequency` | duration | Frequency of regossip | `30s` | ## Logging and Monitoring ### Logging | Option | Type | Description | Default | |--------|------|-------------|---------| | `log-level` | string | Logging level (trace, debug, info, warn, error, crit) | `"info"` | | `log-json-format` | bool | Use JSON format for logs | `false` | ### Profiling | Option | Type | Description | Default | |--------|------|-------------|---------| | `continuous-profiler-dir` | string | Directory for continuous profiler output (empty = disabled) | - | | `continuous-profiler-frequency` | duration | Frequency to run continuous profiler | `15m` | | `continuous-profiler-max-files` | int | Maximum number of profiler files to maintain | `5` | ### Metrics | Option | Type | Description | Default | |--------|------|-------------|---------| | `metrics-expensive-enabled` | bool | Enable expensive debug-level metrics | `true` | ## Security and Access ### Keystore | Option | Type | Description | Default | |--------|------|-------------|---------| 
| `keystore-directory` | string | Directory for keystore files (absolute or relative path) | - | | `keystore-external-signer` | string | External signer configuration | - | | `keystore-insecure-unlock-allowed` | bool | Allow insecure account unlocking | `false` | ### Fee Configuration | Option | Type | Description | Default | |--------|------|-------------|---------| | `feeRecipient` | string | Address to send transaction fees to (leave empty if not supported) | - | ## Network and Sync ### Network | Option | Type | Description | Default | |--------|------|-------------|---------| | `max-outbound-active-requests` | int64 | Maximum number of outbound active requests for VM2VM network | `16` | ### State Sync > **Note:** If state sync is enabled, the node will download chain state from peers up to a recent block near tip, then proceed with normal bootstrapping. State sync does not fetch historical data, so it is not the right option for nodes that need it; however, it is sufficient if you are just running a validator. | Option | Type | Description | Default | |--------|------|-------------|---------| | `state-sync-enabled` | bool | Enable state sync | `false` | | `state-sync-skip-resume` | bool | Force state sync to use highest available summary block | `false` | | `state-sync-ids` | string | Comma-separated list of state sync IDs | - | | `state-sync-commit-interval` | uint64 | Commit interval for state sync (blocks) | `16384` | | `state-sync-min-blocks` | uint64 | Minimum blocks ahead required for state sync | `300000` | | `state-sync-request-size` | uint16 | Number of key/values to request per state sync request | `1024` | ## Database Configuration > **WARNING**: `firewood` and `path` schemes are untested in production. Using `path` is strongly discouraged.
To use `firewood`, you must also set the following config options: > > - `populate-missing-tries: nil` > - `state-sync-enabled: false` > - `snapshot-cache: 0` Failing to set these options will result in errors on VM initialization. Additionally, not all APIs are available - see these portions of the config documentation for more details. | Option | Type | Description | Default | |--------|------|-------------|---------| | `database-type` | string | Type of database to use | `"pebbledb"` | | `database-path` | string | Path to database directory | - | | `database-read-only` | bool | Open database in read-only mode | `false` | | `database-config` | string | Inline database configuration | - | | `database-config-file` | string | Path to database configuration file | - | | `use-standalone-database` | bool | Use standalone database instead of shared one | - | | `inspect-database` | bool | Inspect database on startup | `false` | | `state-scheme` | string | EXPERIMENTAL: specifies the database scheme to store state data; can be one of `hash` or `firewood` | `hash` | ## Transaction Indexing | Option | Type | Description | Default | |--------|------|-------------|---------| | `transaction-history` | uint64 | Maximum number of blocks from head whose transaction indices are reserved (0 = no limit) | - | | `tx-lookup-limit` | uint64 | **Deprecated** - use `transaction-history` instead | - | | `skip-tx-indexing` | bool | Skip indexing transactions entirely | `false` | ## Warp Configuration | Option | Type | Description | Default | |--------|------|-------------|---------| | `warp-off-chain-messages` | array | Off-chain messages the node should be willing to sign | - | | `prune-warp-db-enabled` | bool | Clear warp database on startup | `false` | ## Miscellaneous | Option | Type | Description | Default | |--------|------|-------------|---------| | `airdrop` | string | Path to airdrop file | - | | `skip-upgrade-check` | bool | Skip checking that upgrades occur before last accepted 
block ⚠️ **Warning**: Only use when you understand the implications | `false` | | `min-delay-target` | integer | The minimum delay between blocks (in milliseconds) that this node will attempt to use when creating blocks | Parent block's target | ## Gossip Constants The following constants are defined for transaction gossip behavior and cannot be configured without a custom build of Subnet-EVM: | Constant | Type | Description | Value | |----------|------|-------------|-------| | Bloom Filter Min Target Elements | int | Minimum target elements for bloom filter | `8,192` | | Bloom Filter Target False Positive Rate | float | Target false positive rate | `1%` | | Bloom Filter Reset False Positive Rate | float | Reset false positive rate | `5%` | | Bloom Filter Churn Multiplier | int | Churn multiplier | `3` | | Push Gossip Discarded Elements | int | Number of discarded elements | `16,384` | | Tx Gossip Target Message Size | size | Target message size for transaction gossip | `20 KiB` | | Tx Gossip Throttling Period | duration | Throttling period | `10s` | | Tx Gossip Throttling Limit | int | Throttling limit | `2` | | Tx Gossip Poll Size | int | Poll size | `1` | ## Validation Notes - Cannot enable `populate-missing-tries` while pruning or offline pruning is enabled - Cannot run offline pruning while pruning is disabled - Commit interval must be non-zero when pruning is enabled - `push-gossip-percent-stake` must be in range `[0, 1]` - Some settings may require node restart to take effect # Avalanche L1 Nodes (/docs/nodes/run-a-node/avalanche-l1-nodes) --- title: Avalanche L1 Nodes description: Learn how to run an Avalanche node that tracks an Avalanche L1. --- For an easier way to set up an L1 node, try the [Avalanche Console L1 Node Setup Tool](/console/layer-1/l1-node-setup). This article describes how to run a node that tracks an Avalanche L1. 
It requires building AvalancheGo, adding Virtual Machine binaries as plugins to your local data directory, and running AvalancheGo so it tracks the Avalanche L1s served by those binaries. This tutorial specifically covers tracking an Avalanche L1 built with Avalanche's [Subnet-EVM](https://github.com/ava-labs/avalanchego/tree/master/graft/subnet-evm), the default [Virtual Machine](/docs/primary-network/virtual-machines) run by Avalanche L1s. Subnet-EVM is now part of the [AvalancheGo monorepo](https://github.com/ava-labs/avalanchego). The standalone `ava-labs/subnet-evm` repository has been archived. ## Build AvalancheGo It is recommended that you first complete [this comprehensive guide](/docs/nodes/run-a-node/from-source), which demonstrates how to build and run a basic Avalanche node from source. ## Build Avalanche L1 Binaries After building AvalancheGo successfully, you need to build the Subnet-EVM plugin. Since Subnet-EVM is now part of the AvalancheGo monorepo, you build it from within the monorepo. Navigate to the Subnet-EVM directory within AvalancheGo and run the build script. Save the plugin to the `plugins` folder of your `.avalanchego` data directory, naming it after the `VMID` of the Avalanche L1 you wish to track. The `VMID` of the WAGMI Avalanche L1 is the value beginning with **srEX...** ```bash cd $GOPATH/src/github.com/ava-labs/avalanchego/graft/subnet-evm ./scripts/build.sh ~/.avalanchego/plugins/srEXiWaHuhNyGwPUi444Tu47ZEDwxTWrbQiuD7FmgSAQ6X7Dy ``` VMID, Avalanche L1 ID (SubnetID), ChainID, and all other parameters can be found in the "Chain Info" section of the Avalanche L1 Explorer. - [Avalanche Mainnet](https://subnets.avax.network/c-chain) - [Fuji Testnet](https://subnets-test.avax.network/c-chain) Create a file named `config.json` and add a `track-subnets` field that is populated with the `SubnetID` you wish to track.
The `SubnetID` of the WAGMI Avalanche L1 is the value beginning with **28nr...** ```bash cd ~/.avalanchego echo '{"track-subnets": "28nrH5T2BMvNrWecFcV3mfccjs6axM1TVyqe79MCv2Mhs8kxiY"}' > config.json ``` ## Run the Node Run AvalancheGo with the `--config-file` flag to start your node and ensure it tracks the Avalanche L1s included in the configuration file. ```bash cd $GOPATH/src/github.com/ava-labs/avalanchego ./build/avalanchego --config-file ~/.avalanchego/config.json --network-id=fuji ``` Note: The above command includes the `--network-id=fuji` flag because the WAGMI Avalanche L1 is deployed on Fuji Testnet. If you would prefer to track Avalanche L1s using a command line flag, you can instead use the `--track-subnets` flag. For example: ```bash ./build/avalanchego --track-subnets 28nrH5T2BMvNrWecFcV3mfccjs6axM1TVyqe79MCv2Mhs8kxiY --network-id=fuji ``` You should now see your terminal fill with logs indicating the node is running properly and has begun bootstrapping to the network. ## Bootstrapping and RPC Details It may take a few hours for the node to fully [bootstrap](/docs/nodes/run-a-node/from-source#bootstrapping) to the Avalanche Primary Network and tracked Avalanche L1s. When finished bootstrapping, the endpoint will be: ```bash localhost:9650/ext/bc/<blockchainID>/rpc ``` if run locally, or: ```bash XXX.XX.XX.XXX:9650/ext/bc/<blockchainID>/rpc ``` if run on a cloud provider. The "X"s should be replaced with the public IP of your EC2 instance. For more information on the requests available at these endpoints, please see the [Subnet-EVM API Reference](/docs/rpcs/subnet-evm) documentation. Because each node is also tracking the Primary Network, those [RPC endpoints](/docs/nodes/run-a-node/from-source#rpc) are available as well. # Common Errors (/docs/nodes/run-a-node/common-errors) --- title: Common Errors description: Common errors while running a node and their solutions. --- If you experience any issues running your node, here are common errors and their solutions.
## Bootstrap and Initialization Errors | Error | Cause | Solution | |-------|-------|----------| | `failed to connect to bootstrap nodes` | • No internet access
• NodeID already in use
• Old instance still running
• Firewall blocking outbound connections | • Check internet connection
• Ensure only one node instance is running
• Verify firewall allows outbound connections
• Confirm staking port (9651) is configured | | `subnets not bootstrapped` | • Node still syncing with network
• Health checks called too early
• Network connectivity issues | • Wait for bootstrap to complete (can take hours)
• Monitor `/api/health` endpoint
• Ensure stable network connection
• Check logs for progress | | `db contains invalid genesis hash` | • Database from different network
• Database corruption
• Incompatible database | • Delete database and resync from scratch
• Verify correct network connection
• Check `--network-id` flag matches database | ## Network and Connectivity Errors | Error | Cause | Solution | |-------|-------|----------| | `cannot query unfinalized data` | • Not connected to other validators
• Wrong public IP configured
• Port 9651 closed/blocked
• Insufficient validator connections | • Configure public IP with `--public-ip`
• Open port 9651 to internet
• Allow inbound connections in firewall
• Set up port forwarding if behind NAT
• Verify peers: `curl -X POST --data '{"jsonrpc":"2.0","id":1,"method":"info.peers"}' -H 'content-type:application/json;' http://127.0.0.1:9650/ext/info` | | `primary network validator has no inbound connections` | • Firewall blocking inbound traffic
• NAT/router not configured
• Wrong public IP advertised
• ISP blocking connections | • Configure port forwarding for 9651
• Verify firewall allows inbound
• Check public IP: `curl ifconfig.me`
• Test port with online checkers
• Use VPS if ISP blocks ports | | `not connected to enough stake` | • Insufficient validator connections
• Network partitioning
• Node isolated from network
• Bootstrap incomplete | • Check network connectivity
• Verify firewall rules
• Wait for more connections
• Synchronize system time (NTP) | | `throttled` (Code: -4) | • Too many connection attempts
• Rate limiting by peers
• Network congestion | • Wait before retrying
• Check for connection loops
• Reduce connection rate | ## Database and Storage Errors | Error | Cause | Solution | |-------|-------|----------| | `closed` | • Database accessed after shutdown
• Ungraceful termination
• Connection lost | • Restart the node
• Check for disk errors or full disk
• Verify database files not corrupted | | `blockdb: unrecoverable corruption detected` | • Ungraceful shutdown (power loss, kill -9)
• Disk errors during writes
• Hardware failure | • Delete database and resync
• Run SMART diagnostics on disk
• Ensure 10+ GiB free space
• Use UPS for power protection
• Maintain regular backups | | Disk space warnings | • Usage exceeds threshold
• Database growth without cleanup
• Log accumulation | • Keep at least 10 GiB free (20+ GiB recommended)
• Monitor disk usage regularly
• Clean up old logs
• Set up low-space alerts | | `blockdb: invalid block height` | • Database corruption
• Querying non-existent block
• Index corruption | • Verify block height is valid
• Resync if corrupted
• Check database integrity | ## Configuration Errors | Error | Cause | Solution | |-------|-------|----------| | `invalid TLS key` | • TLS key without certificate
• Certificate without key
• Invalid key format
• Corrupted certificate files | • Provide both key and certificate together
• Regenerate credentials if corrupted
• Verify file permissions
• Check certificate format | | `minimum validator stake can't be greater than maximum` | • Invalid stake configuration
• Conflicting parameters
• Configuration typos | • Review configuration file
• Ensure min < max stake
• Check for typos | | `uptime requirement must be in the range [0, 1]` | • Out-of-range uptime value | • Set uptime requirement between 0 and 1 | | `delegation fee must be in the range [0, 1,000,000]` | • Invalid delegation fee | • Set fee between 0 and 1,000,000 | | `min stake duration must be > 0` | • Invalid stake duration
• Min > max duration | • Set min duration > 0 and < max | | `sybil protection disabled on public network` | • Disabling protection on mainnet/testnet
• Security misconfiguration | • Only disable on private networks
• Verify network configuration
• Remove override for public networks | | `plugin dir is not a directory` | • Path points to file not directory
• Directory doesn't exist
• Permission issues | • Create plugin directory
• Verify path points to directory
• Check read/execute permissions | ## Resource and Capacity Errors | Error | Cause | Solution | |-------|-------|----------| | `insufficient funds` | • Insufficient balance for fees
• Transaction exceeds balance
• Gas estimation too low | • Ensure sufficient balance
• Account for transaction fees
• Verify balance before submitting | | `insufficient gas capacity to build block` | • Mempool exceeds block gas limit
• Complex transactions
• Network congestion | • Wait for congestion to clear
• Break into smaller transactions
• Increase gas limits if possible | | `insufficient history to generate proof` | • Partial sync mode
• Pruned historical data
• Incomplete state sync | • Use full sync for complete history
• Wait for state sync to finish
• Use archival node for historical data | ## Validator and Consensus Errors | Error | Cause | Solution | |-------|-------|----------| | `not a validator` (Code: -3) | • Validator-only operation on non-validator
• Stake expired or not active
• Not registered as validator | • Verify registration status
• Check stake is active
• Wait for validation period
• Use correct API for node type | | `unknown validator` | • Not in current validator set
• NodeID mismatch
• Validator expired/removed | • Verify validator is active
• Check end time hasn't passed
• Confirm correct NodeID
• Query validator set | ## Version and Upgrade Errors | Error | Cause | Solution | |-------|-------|----------| | `unknown network upgrade detected` | • Outdated node version
• Network upgrade scheduled/active
• Incompatible protocol | • **Update immediately** to latest version
• Monitor upgrade announcements
• Enable automatic updates
• Check version: `avalanchego --version` | | `unknown network upgrade - update as soon as possible` | • Network upgrade approaching
• Node version outdated | • Update within the day
• Check GitHub releases
• Plan for maintenance window | | `imminent network upgrade - update immediately` | • Network upgrade imminent (within hour) | • **Critical: Update immediately**
• Risk of network disconnection | | `invalid upgrade configuration` | • Upgrade times not chronological
• Conflicting schedules
• Invalid precompile config | • Review upgrade config files
• Ensure sequential timing
• Validate precompile settings
• Consult upgrade documentation | ## API and RPC Errors | Error | Cause | Solution | |-------|-------|----------| | Health check: `not yet run` | • Node still initializing
• Bootstrap incomplete
• Subnet sync in progress
• Network issues | • Wait for initialization
• Monitor `/api/health` for updates
• Check individual health checks
• Ensure subnets are synced | | `timed out` (Code: -1) | • Request exceeded timeout
• Node overloaded
• Network latency | • Increase timeout settings
• Check resource usage (CPU/memory/disk)
• Reduce request complexity
• Use retry with exponential backoff | | Invalid content-type | • Wrong Content-Type header
• Missing header | • Add `Content-Type: application/json`
• Verify API client config
• Example: `curl -H 'content-type:application/json;' ...` | ## State Sync Errors | Error | Cause | Solution | |-------|-------|----------| | `proof obtained an invalid root ID` | • State changed during sync
• Corrupted merkle proof
• Network issues | • Restart state sync
• Ensure stable connection
• Wait for state to stabilize | | `vm does not implement StateSyncableVM interface` | • Unsupported VM
• Outdated VM version | • Update VM to support state sync
• Use full bootstrap instead
• Check VM compatibility docs | --- ## Monitoring and Prevention ### Key Metrics to Monitor | Metric | Threshold | How to Check | |--------|-----------|--------------| | **Disk Space** | Keep 10+ GiB free (20+ GiB recommended) | `df -h` | | **Network Connectivity** | Inbound/outbound connections active | Check firewall, use port scanners | | **Bootstrap Status** | Should be `bootstrapped` | `/api/health` | | **Validator Connections** | Connected to sufficient stake | `/ext/info` API, check peer count | | **Database Health** | No corruption warnings in logs | Monitor `~/.avalanchego/logs/` | | **Node Version** | Current with latest release | `avalanchego --version` | ### Best Practices | Practice | Benefit | |----------|---------| | Use UPS (uninterruptible power supply) | Prevents database corruption from power loss | | Enable automatic updates | Stay current with security patches | | Monitor logs regularly | Early detection of issues | | Keep adequate disk space | Prevent database write failures | | Configure port forwarding properly | Ensure validator connectivity | | Synchronize system time with NTP | Prevent consensus issues | | Backup critical files | Quick recovery from failures | | Test changes on testnet first | Avoid production issues | ### Health Check Endpoints | Endpoint | Purpose | What It Checks | |----------|---------|----------------| | `/ext/health/liveness` | Basic process health | Is the node process running? | | `/ext/health/readiness` | Ready to serve traffic | Is bootstrapping complete? | | `/ext/health` | Comprehensive status | All health checks and details | ### Getting Help If you encounter errors not listed here: 1. **Check Logs**: Review `~/.avalanchego/logs/` for detailed error messages 2. **Search Forum**: [Avalanche Forum](https://forum.avax.network/) 3. **Join Discord**: [Avalanche Discord](https://chat.avax.network/) 4. **GitHub Issues**: [Review existing issues](https://github.com/ava-labs/avalanchego/issues) 5. 
**Provide Context**: Include specific error messages, logs, and configuration when asking for help ### Quick Diagnostic Commands ```bash # Check node version avalanchego --version # Check disk space df -h # Check if port 9651 is open (use your node's public IP instead of 127.0.0.1 to test external reachability) nc -zv 127.0.0.1 9651 # Check node health curl -X POST --data '{"jsonrpc":"2.0","id":1,"method":"health.health"}' -H 'content-type:application/json;' http://127.0.0.1:9650/ext/health # Check peers curl -X POST --data '{"jsonrpc":"2.0","id":1,"method":"info.peers"}' -H 'content-type:application/json;' http://127.0.0.1:9650/ext/info # Check bootstrap status curl -X POST --data '{"jsonrpc":"2.0","id":1,"method":"info.isBootstrapped","params":{"chain":"X"}}' -H 'content-type:application/json;' http://127.0.0.1:9650/ext/info ``` # Using Source Code (/docs/nodes/run-a-node/from-source) --- title: Using Source Code description: Learn how to run an Avalanche node from AvalancheGo source code. --- The following steps walk through downloading the AvalancheGo source code and locally building the binary program. If you would like to run your node using a pre-built binary, follow [this](/docs/nodes/run-a-node/using-binary) guide. ## Install Dependencies - Install [gcc](https://gcc.gnu.org/) - Install [go](https://go.dev/doc/install) ## Build the Node Binary Set the `$GOPATH`. You can follow [this](https://github.com/golang/go/wiki/SettingGOPATH) guide. Create a directory in your `$GOPATH`: ```bash mkdir -p $GOPATH/src/github.com/ava-labs ``` In the `$GOPATH`, clone [AvalancheGo](https://github.com/ava-labs/avalanchego), the consensus engine and node implementation that is the core of the Avalanche Network.
```bash cd $GOPATH/src/github.com/ava-labs git clone https://github.com/ava-labs/avalanchego.git ``` From the `avalanchego` directory, run the build script: ```bash cd $GOPATH/src/github.com/ava-labs/avalanchego ./scripts/build.sh ``` ## Start the Node To be able to make API calls to your node from other machines, include the argument `--http-host=` when starting the node. For running a node on the Avalanche Mainnet: ```bash cd $GOPATH/src/github.com/ava-labs/avalanchego ./build/avalanchego ``` For running a node on the Fuji Testnet: ```bash cd $GOPATH/src/github.com/ava-labs/avalanchego ./build/avalanchego --network-id=fuji ``` To kill the node, press `Ctrl + C`. ## Bootstrapping A new node needs to catch up to the latest network state before it can participate in consensus and serve API calls. This process (called bootstrapping) currently takes several days for a new node connected to Mainnet, and a day or so for a new node connected to Fuji Testnet. When a given chain is done bootstrapping, it will print logs like this: ```bash [09-09|17:01:45.295] INFO snowman/transitive.go:392 consensus starting {"lastAcceptedBlock": "2qaFwDJtmCCbMKP4jRpJwH8EFws82Q2yC1HhWgAiy3tGrpGFeb"} [09-09|17:01:46.199] INFO
snowman/transitive.go:392 consensus starting {"lastAcceptedBlock": "2ofmPJuWZbdroCPEMv6aHGvZ45oa8SBp2reEm9gNxvFjnfSGFP"} [09-09|17:01:51.628] INFO snowman/transitive.go:334 consensus starting {"lenFrontier": 1} ``` ### Check Bootstrapping Progress To check if a given chain is done bootstrapping, in another terminal window call [`info.isBootstrapped`](/docs/rpcs/other/info-rpc#infoisbootstrapped) by copying and pasting the following command: ```bash curl -X POST --data '{ "jsonrpc":"2.0", "id":1, "method":"info.isBootstrapped", "params": { "chain":"X" } }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info ``` If this returns `true`, the chain is bootstrapped; otherwise, it returns `false`. If you make other API calls to a chain that is not done bootstrapping, it will return `API call rejected because chain is not done bootstrapping`. If you are still experiencing issues, please contact us on [Discord](https://chat.avalabs.org/). The three chains will bootstrap in the following order: P-Chain, X-Chain, C-Chain. Learn more about bootstrapping [here](/docs/nodes/maintain/bootstrapping). ## RPC When finished bootstrapping, the X, P, and C-Chain RPC endpoints will be: ```bash localhost:9650/ext/bc/P localhost:9650/ext/bc/X localhost:9650/ext/bc/C/rpc ``` if run locally, or ```bash XXX.XX.XX.XXX:9650/ext/bc/P XXX.XX.XX.XXX:9650/ext/bc/X XXX.XX.XX.XXX:9650/ext/bc/C/rpc ``` if run on a cloud provider. The `XXX.XX.XX.XXX` should be replaced with the public IP of your EC2 instance. For more information on the requests available at these endpoints, please see the [AvalancheGo API Reference](/docs/rpcs/p-chain) documentation. ## Going Further Your Avalanche node will perform consensus on its own, but it is not yet a validator on the network. This means that the rest of the network will not query your node when sampling the network during consensus.
If you want to add your node as a validator, check out [Add a Validator](/docs/primary-network/validate/node-validator) to take it a step further. Also check out the [Maintain](/docs/nodes/maintain/bootstrapping) section to learn about how to maintain and customize your node to fit your needs. To track an Avalanche L1 with your node, head to the [Avalanche L1 Node](/docs/nodes/run-a-node/avalanche-l1-nodes) tutorial. # Node Setup Overview (/docs/nodes/run-a-node) --- title: Node Setup Overview description: Choose the right setup path for running an Avalanche node, whether you're operating on the Primary Network or an Avalanche L1. --- This section covers every way to get an AvalancheGo node running. Pick the path that matches your goal. ## Choose Your Path **Not sure which you need?** Primary Network nodes validate the C-Chain, P-Chain, and X-Chain. Avalanche L1 nodes track an L1 blockchain and the P-Chain (for validator set tracking), but do not need to sync the C-Chain or X-Chain. ### Primary Network Nodes Run a validator or API node for the Avalanche Primary Network (C/P/X chains). Run using the official AvalancheGo Docker image, or use the interactive Console tool to generate your Docker command. Automated script that installs AvalancheGo and configures it as a system service. Download a release binary and run it directly. Clone the AvalancheGo repository and compile it yourself. Deploy on Alibaba Cloud, AWS, Google Cloud, Latitude, Microsoft Azure, or Tencent Cloud. ### Avalanche L1 Nodes Run a node that tracks an Avalanche L1 blockchain and the P-Chain. Build AvalancheGo from source with Subnet-EVM plugins to track an L1, or use the interactive Console tool. ## What Type of Node Should I Run? 
| Type | Description | Use Case |
|------|-------------|----------|
| **Validator** | Stakes AVAX and participates in consensus | Earn rewards, secure the network |
| **API / Non-Validating** | Tracks chains and serves RPC requests | Indexing, infrastructure, dApps |

See the [Introduction](/docs/nodes) page for more on node roles, data retention modes, and validator requirements.

## Related Resources

- Hardware, storage, and networking requirements for different node profiles.
- Understand active vs. archival state, disk growth, and how to manage storage.
- Full reference for configuration flags and options.
- Keep your node healthy with upgrade procedures, monitoring, and backups.

# Using Pre-Built Binary (/docs/nodes/run-a-node/using-binary)

---
title: Using Pre-Built Binary
description: Learn how to run an Avalanche node from a pre-built binary program.
---

## Download Binary

To download a pre-built binary instead of building from source code, go to the official [AvalancheGo releases page](https://github.com/ava-labs/avalanchego/releases), and select the desired version. Scroll down to the **Assets** section, and select the appropriate file. The rules below will help you find the right binary for your platform.

### macOS

Download the `avalanchego-macos-<VERSION>.zip` file and unzip it with:

```bash
unzip avalanchego-macos-<VERSION>.zip
```

The resulting folder, `avalanchego-<VERSION>`, contains the binaries.

### Linux (PCs or Cloud Providers)

Download the `avalanchego-linux-amd64-<VERSION>.tar.gz` file and extract it with:

```bash
tar -xvf avalanchego-linux-amd64-<VERSION>.tar.gz
```

The resulting folder, `avalanchego-<VERSION>-linux`, contains the binaries.

### Linux (Arm64)

Download the `avalanchego-linux-arm64-<VERSION>.tar.gz` file and extract it with:

```bash
tar -xvf avalanchego-linux-arm64-<VERSION>.tar.gz
```

The resulting folder, `avalanchego-<VERSION>-linux`, contains the binaries.
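Before running anything from a downloaded archive, it is good practice to compare its SHA-256 checksum against the one published with the release assets. A minimal sketch, where the file and checksum are stand-ins rather than a real release (on macOS, `shasum -a 256` replaces `sha256sum`):

```shell
# Stand-in archive; in practice this is the file downloaded from the
# releases page.
printf 'hello\n' > archive.tar.gz
# Stand-in checksum; in practice, copy it from the release's checksum file.
EXPECTED="5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03"
ACTUAL=$(sha256sum archive.tar.gz | awk '{print $1}')
if [ "$ACTUAL" = "$EXPECTED" ]; then
  echo "checksum OK"
else
  echo "checksum MISMATCH -- do not run this binary" >&2
  exit 1
fi
```

If the checksums differ, re-download the archive before extracting it.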
## Start the Node

To be able to make API calls to your node from other machines, include the argument `--http-host=` when starting the node.

### macOS

For running a node on the Avalanche Mainnet:

```bash
./avalanchego-<VERSION>/build/avalanchego
```

For running a node on the Fuji Testnet:

```bash
./avalanchego-<VERSION>/build/avalanchego --network-id=fuji
```

### Linux

For running a node on the Avalanche Mainnet:

```bash
./avalanchego-<VERSION>-linux/avalanchego
```

For running a node on the Fuji Testnet:

```bash
./avalanchego-<VERSION>-linux/avalanchego --network-id=fuji
```

## Bootstrapping

A new node needs to catch up to the latest network state before it can participate in consensus and serve API calls. This process (called bootstrapping) currently takes several days for a new node connected to Mainnet, and a day or so for a new node connected to Fuji Testnet. When a given chain is done bootstrapping, it will print logs like this:

```bash
[09-09|17:01:45.295] INFO snowman/transitive.go:392 consensus starting {"lastAcceptedBlock": "2qaFwDJtmCCbMKP4jRpJwH8EFws82Q2yC1HhWgAiy3tGrpGFeb"} [09-09|17:01:46.199] INFO
```

| Flag | Description |
|------|-------------|
| `--to` | Destination P-Chain address (required) |
| `--amount` | Amount in AVAX |
| `--amount-navax` | Amount in nAVAX (mutually exclusive with `--amount`) |

### transfer p-to-c

```bash
platform transfer p-to-c --amount <amount>
```

One-step P-Chain to C-Chain transfer (export + import).

| Flag | Description |
|------|-------------|
| `--amount` | Amount in AVAX |
| `--amount-navax` | Amount in nAVAX |

### transfer c-to-p

```bash
platform transfer c-to-p --amount <amount>
```

One-step C-Chain to P-Chain transfer (export + import).

### transfer export

```bash
platform transfer export --from <chain> --to <chain> --amount <amount>
```

Manual export step for two-step transfers.

| Flag | Description |
|------|-------------|
| `--from` | Source chain: `p` or `c` (required) |
| `--to` | Destination chain: `p` or `c` (required) |
| `--amount` | Amount in AVAX |
| `--amount-navax` | Amount in nAVAX |

### transfer import

```bash
platform transfer import --from <chain> --to <chain>
```

Manual import step for two-step transfers.
| Flag | Description |
|------|-------------|
| `--from` | Source chain: `p` or `c` (required) |
| `--to` | Destination chain: `p` or `c` (required) |

## validator

### validator add

```bash
platform validator add --node-id <node-id> --stake <amount>
```

| Flag | Description | Default |
|------|-------------|---------|
| `--node-id` | Node ID (required) | |
| `--stake` | Stake in AVAX (required) | |
| `--duration` | Validation duration | `336h` |
| `--start` | Start time (RFC3339 or `now`) | `now` |
| `--delegation-fee` | Fee percentage (0.02 = 2%) | `0.02` |
| `--reward-address` | Reward address | own address |
| `--bls-public-key` | BLS public key hex (recommended) | |
| `--bls-pop` | BLS proof of possession hex (recommended) | |
| `--node-endpoint` | Node endpoint to auto-fetch BLS | |

### validator delegate

```bash
platform validator delegate --node-id <node-id> --stake <amount>
```

| Flag | Description | Default |
|------|-------------|---------|
| `--node-id` | Node ID to delegate to (required) | |
| `--stake` | Stake in AVAX (required) | |
| `--duration` | Delegation duration | `336h` |
| `--start` | Start time (RFC3339 or `now`) | `now` |
| `--reward-address` | Reward address | own address |

## subnet

### subnet create

```bash
platform subnet create
```

Creates a new subnet owned by the wallet address. No additional flags required.

### subnet transfer-ownership

```bash
platform subnet transfer-ownership --subnet-id <subnet-id> --new-owner <address>
```
| Flag | Description |
|------|-------------|
| `--subnet-id` | Subnet ID (required) |
| `--new-owner` | New owner P-Chain address (required) |

### subnet convert-l1

```bash
platform subnet convert-l1 --subnet-id <subnet-id> --chain-id <chain-id>
```

| Flag | Description | Default |
|------|-------------|---------|
| `--subnet-id` | Subnet ID to convert (required) | |
| `--chain-id` | Chain ID for validator manager (required) | |
| `--manager` | Validator manager contract address (hex) | |
| `--contract-address` | Alias for `--manager` | |
| `--validators` | Comma-separated node addresses (auto-discovery mode) | |
| `--validator-node-ids` | Manual mode: comma-separated NodeIDs | |
| `--validator-bls-public-keys` | Manual mode: comma-separated BLS public keys | |
| `--validator-bls-pops` | Manual mode: comma-separated BLS PoPs | |
| `--validator-balance` | Balance per validator in AVAX | `1.0` |
| `--mock-validator` | Use mock validator for testing | `false` |

## l1

### l1 register-validator

```bash
platform l1 register-validator --balance <amount> --pop <bls-pop> --message <warp-message>
```

| Flag | Description |
|------|-------------|
| `--balance` | Initial balance in AVAX (required) |
| `--pop` | BLS proof of possession hex (required) |
| `--message` | Warp message hex (required) |

### l1 set-weight

```bash
platform l1 set-weight --message <warp-message>
```

| Flag | Description |
|------|-------------|
| `--message` | Warp message authorizing weight change (required) |

### l1 add-balance

```bash
platform l1 add-balance --validation-id <validation-id> --balance <amount>
```

| Flag | Description |
|------|-------------|
| `--validation-id` | Validation ID (required) |
| `--balance` | AVAX to add (required) |

### l1 disable-validator

```bash
platform l1 disable-validator --validation-id <validation-id>
```

| Flag | Description |
|------|-------------|
| `--validation-id` | Validation ID to disable (required) |

## chain

### chain create

```bash
platform chain create --subnet-id <subnet-id> --genesis <path>
```

| Flag | Description | Default |
|------|-------------|---------|
| `--subnet-id` | Subnet ID (required) | |
| `--genesis` | Genesis JSON file path (required, max 1 MB) | |
| `--name` | Chain name | `mychain` |
| `--vm-id` | VM ID | Subnet-EVM |

## node

### node info

```bash
platform node info --ip <ip>
```
``` | Flag | Description | |------|-------------| | `--ip` | Node IP address or hostname (required) | Returns Node ID, BLS Public Key, and BLS Proof of Possession. # Platform CLI Overview (/docs/tooling/platform-cli) --- title: Platform CLI Overview description: Manage Avalanche P-Chain operations from the command line --- Platform CLI is a lightweight command-line tool for Avalanche P-Chain operations. It handles key management, AVAX transfers, cross-chain transfers, primary network staking, subnet creation, and L1 validator management. ## Key Features | Feature | Description | |---------|-------------| | **Key Management** | Generate, import, export, and encrypt private keys with AES-256-GCM | | **P-Chain Transfers** | Send AVAX on P-Chain and transfer between P-Chain and C-Chain | | **Staking** | Add validators and delegators to the primary network | | **Subnets** | Create subnets, transfer ownership, and convert to L1 blockchains | | **L1 Validators** | Register, configure, and manage L1 blockchain validators | | **Chain Creation** | Deploy new blockchains on existing subnets | | **Ledger Support** | Optional hardware wallet integration for signing transactions | ## Supported Networks | Network | Usage | Min Validator Stake | Min Delegator Stake | |---------|-------|---------------------|---------------------| | **Local** | `--rpc-url http://127.0.0.1:9650` | 1 AVAX | 1 AVAX | | **Fuji** | `--network fuji` (default) | 1 AVAX | 1 AVAX | | **Mainnet** | `--network mainnet` | 2,000 AVAX | 25 AVAX | | **Custom** | `--rpc-url ` | Varies | Varies | ## Getting Started 1. [Install Platform CLI](/docs/tooling/platform-cli/installation) via the install script or build from source 2. [Create or import a key](/docs/tooling/platform-cli/key-management) to sign transactions 3. 
Follow the guides for your use case: transfers, staking, or subnet operations ## Quick Links Build from source and configure global options Generate, import, and encrypt private keys Send AVAX and perform cross-chain transfers Add validators and delegate to the primary network Create subnets and manage L1 validators Complete reference for all commands and flags ## Support - [GitHub Repository](https://github.com/ava-labs/platform-cli) - [Discord Community](https://chat.avalabs.org/) # Installation (/docs/tooling/platform-cli/installation) --- title: Installation description: Install and configure Platform CLI for Avalanche P-Chain operations --- ## Install Script (Recommended) The install script downloads the latest release binary for your platform: ```bash curl -sSfL https://build.avax.network/install/platform-cli | sh ``` Options: ```bash # Install to a custom directory curl -sSfL https://build.avax.network/install/platform-cli | sh -s -- -b ~/.local/bin # Install a specific version curl -sSfL https://build.avax.network/install/platform-cli | sh -s -- -v v0.2.0 ``` The script auto-detects your OS (Linux/macOS) and architecture (amd64/arm64), downloads the release tarball, verifies checksums, and installs the `platform` binary. Verify the installation: ```bash platform --help ``` ## Build from Source Requires **Go 1.24+** ([install Go](https://go.dev/dl/)). ```bash git clone https://github.com/ava-labs/platform-cli.git cd platform-cli go build -o platform . ``` ### Ledger Support To build with Ledger hardware wallet support: ```bash go build -tags ledger -o platform . 
``` ## Global Flags These flags are available on all commands: | Flag | Short | Description | Default | |------|-------|-------------|---------| | `--network` | `-n` | Network: `fuji` or `mainnet` | `fuji` | | `--key-name` | | Name of key to load from keystore | | | `--ledger` | | Use Ledger hardware wallet | `false` | | `--ledger-index` | | Ledger address index (BIP44 path) | `0` | | `--rpc-url` | | Custom RPC URL (overrides `--network`) | | | `--network-id` | | Network ID for custom RPC (auto-detected if not set) | | | `--allow-insecure-http` | | Allow plain HTTP for non-local endpoints (unsafe) | `false` | | `--private-key` | `-k` | Private key (deprecated, prefer `--key-name` or `--ledger`) | | ## Environment Variables | Variable | Description | |----------|-------------| | `AVALANCHE_PRIVATE_KEY` | Private key (alternative to `--private-key` flag) | | `PLATFORM_CLI_KEY_PASSWORD` | Password for encrypted keys (avoids interactive prompts) | | `PLATFORM_CLI_TIMEOUT` | Operation timeout duration (e.g., `5m`, `30s`, default: `2m`) | ## Key Loading Priority When a command needs a private key, Platform CLI checks these sources in order: 1. `--key-name` flag (loads from keystore) 2. `--private-key` flag (deprecated) 3. Default key in keystore (if set) 4. `AVALANCHE_PRIVATE_KEY` environment variable ## Network Configuration ### Standard Networks ```bash # Fuji testnet (default) platform wallet balance --key-name mykey # Mainnet platform wallet balance --key-name mykey --network mainnet ``` ### Custom RPC For local networks or custom endpoints: ```bash # Local network platform wallet balance --key-name mykey --rpc-url http://127.0.0.1:9650 # Custom endpoint with explicit network ID platform wallet balance --key-name mykey --rpc-url https://my-node.example.com:9650 --network-id 5 ``` When using `--rpc-url`, the network ID is auto-detected from the node unless `--network-id` is specified. 
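The network-selection rules above compose well into small wrapper scripts. The sketch below is purely illustrative: `run_platform` and `PLATFORM_NET` are inventions of this example, not Platform CLI features, and the helper only echoes the command it would run:

```shell
# Illustrative helper: pick Platform CLI network flags from an
# environment variable. PLATFORM_NET and run_platform are assumptions
# of this sketch, not part of Platform CLI itself.
run_platform() {
  case "${PLATFORM_NET:-fuji}" in
    mainnet) set -- "$@" --network mainnet ;;
    local)   set -- "$@" --rpc-url http://127.0.0.1:9650 ;;
    *)       set -- "$@" --network fuji ;;
  esac
  # Echo instead of executing, so the sketch is safe to run anywhere.
  echo platform "$@"
}

PLATFORM_NET=local run_platform wallet balance --key-name mykey
# prints: platform wallet balance --key-name mykey --rpc-url http://127.0.0.1:9650
```

Replacing the `echo` with an `exec` (or plain invocation) would turn this into a real wrapper, at the cost of the dry-run safety shown here.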
## Next Steps Set up keys for signing transactions Explore all available commands # Key Management (/docs/tooling/platform-cli/key-management) --- title: Key Management description: Generate, import, export, and encrypt private keys with Platform CLI --- Platform CLI stores keys in `~/.platform/keys/` with AES-256-GCM encryption enabled by default. Keys are encrypted using Argon2id key derivation with a user-provided password. ## Generating Keys Create a new random secp256k1 key: ```bash # Generate an encrypted key (default, prompts for password) platform keys generate --name mykey # Generate an unencrypted key (unsafe, not recommended) platform keys generate --name mykey --encrypt=false ``` Output: ``` Key generated successfully! Name: mykey P-Chain: P-fuji1abc123... EVM: 0xdef456... Encrypted: true Default: yes WARNING: Back up your key! Use 'platform keys export' to view the private key. ``` ## Importing Keys Import an existing private key: ```bash # Import and encrypt (default) platform keys import --name mykey --private-key "PrivateKey-..." # Import with hidden input prompt (encrypted by default) platform keys import --name mykey # Import without encryption (unsafe) platform keys import --name mykey --encrypt=false ``` Accepted key formats: - **CB58**: `PrivateKey-ewoq...` (Avalanche standard) - **Hex**: `0x56289e99...` (Ethereum-style) ## Listing Keys ```bash # Basic listing platform keys list # Include addresses platform keys list --show-addresses ``` Output: ``` NAME ENCRYPTED DEFAULT P-CHAIN EVM CREATED mykey yes * P-fuji1abc123... 0xdef456... 2026-01-15 testkey no P-fuji1xyz789... 0xabc123... 
2026-01-10 Total: 2 key(s) ``` ## Exporting Keys Export a private key to a file (recommended) or stdout: ```bash # Export to file with secure permissions (0600) platform keys export --name mykey --output-file ./mykey.txt # Export in hex format to file platform keys export --name mykey --format hex --output-file ./mykey.hex # Export to stdout (requires explicit opt-in) platform keys export --name mykey --unsafe-stdout ``` If the key is encrypted, you'll be prompted for the password. Set `PLATFORM_CLI_KEY_PASSWORD` to skip the prompt in scripts. ## Deleting Keys ```bash # Delete with confirmation prompt platform keys delete --name mykey # Delete without confirmation platform keys delete --name mykey --force ``` Deletion is irreversible. Ensure you have a backup first. ## Default Key Set a default key to avoid specifying `--key-name` on every command: ```bash # Set default platform keys default --name mykey # Show current default platform keys default ``` ## Built-in Test Key: ewoq Platform CLI includes the well-known `ewoq` test key for local development: ```bash platform wallet address --key-name ewoq ``` The ewoq key is pre-funded on local networks. Platform CLI blocks its use on mainnet for safety. ## Ledger Hardware Wallet Build with Ledger support and use the `--ledger` flag: ```bash go build -tags ledger -o platform . # Use Ledger for any command platform wallet address --ledger platform transfer send --to P-fuji1... --amount 10 --ledger # Use a different address index platform wallet balance --ledger --ledger-index 1 ``` ## Security Best Practices 1. **Keys are encrypted by default** - only use `--encrypt=false` for throwaway test keys 2. **Use strong passwords** (minimum 8 characters required) 3. **Back up keys** immediately after generation 4. **Use environment variables** (`AVALANCHE_PRIVATE_KEY`, `PLATFORM_CLI_KEY_PASSWORD`) for CI/CD 5. 
**Consider Ledger** for high-value mainnet operations ## Next Steps Send AVAX using your keys Add validators and delegate stake # L1 Validators (/docs/tooling/platform-cli/l1-validators) --- title: L1 Validators description: Register, manage, and disable validators on L1 blockchains --- Once a subnet is converted to an L1, manage its validators with the `l1` commands. These operations use hex-encoded Warp messages for authorization. ## Register Validator ```bash platform l1 register-validator \ --balance 1.0 \ --pop 0xabc123... \ --message 0xdef456... \ --key-name mykey ``` | Flag | Description | |------|-------------| | `--balance` | Initial balance in AVAX (required) | | `--pop` | BLS proof of possession hex (required) | | `--message` | Warp message hex (required) | ## Set Validator Weight ```bash platform l1 set-weight \ --message 0xabc123... \ --key-name mykey ``` | Flag | Description | |------|-------------| | `--message` | Warp message authorizing weight change (required) | ## Add Validator Balance Top up a validator's balance for continuous fee payments: ```bash platform l1 add-balance \ --validation-id 2QYfFcfZ9... \ --balance 5.0 \ --key-name mykey ``` | Flag | Description | |------|-------------| | `--validation-id` | Validation ID (required) | | `--balance` | AVAX to add (required) | ## Disable Validator Disable a validator and return remaining funds: ```bash platform l1 disable-validator \ --validation-id 2QYfFcfZ9... \ --key-name mykey ``` | Flag | Description | |------|-------------| | `--validation-id` | Validation ID to disable (required) | # Staking (/docs/tooling/platform-cli/staking) --- title: Staking description: Add validators and delegate stake to the Avalanche primary network --- Platform CLI provides commands to add validators and delegate stake on the Avalanche primary network. 
## Requirements | Network | Validator Min | Delegator Min | Min Duration | |---------|---------------|---------------|--------------| | **Local** | 1 AVAX | 1 AVAX | 24 hours | | **Fuji** | 1 AVAX | 1 AVAX | 24 hours | | **Mainnet** | 2,000 AVAX | 25 AVAX | 14 days | ## Adding a Validator All validators require a BLS proof of possession. You can provide this manually (recommended) or auto-fetch from a node endpoint. ### Manual BLS Mode (Recommended) ```bash platform validator add \ --node-id NodeID-BFa1paAAAA... \ --stake 2000 \ --duration 336h \ --delegation-fee 0.02 \ --bls-public-key 0x1234... \ --bls-pop 0x5678... \ --key-name mykey \ --network mainnet ``` ### Auto-Fetch BLS from Node ```bash platform validator add \ --node-id NodeID-BFa1paAAAA... \ --stake 2000 \ --duration 336h \ --delegation-fee 0.02 \ --node-endpoint http://validator.example.com:9650 \ --key-name mykey \ --network mainnet ``` ### Get BLS Credentials Use `node info` to retrieve BLS credentials from a running node: ```bash platform node info --ip validator.example.com:9650 ``` ``` Node ID: NodeID-BFa1paAAAA... BLS Public Key: 0x1234567890abcdef... BLS PoP: 0xfedcba0987654321... ``` ## Delegating Stake Delegate AVAX to an existing validator: ```bash platform validator delegate \ --node-id NodeID-BFa1paAAAA... 
\ --stake 100 \ --duration 336h \ --key-name mykey \ --network mainnet ``` ## Delegation Fees Validators charge a fee as a percentage of delegator rewards: | `--delegation-fee` value | Percentage | Meaning | |--------------------------|------------|---------| | `0.02` | 2% | Validator keeps 2% of delegation rewards | | `0.05` | 5% | Validator keeps 5% of delegation rewards | | `0.10` | 10% | Validator keeps 10% of delegation rewards | ## Timing ### Start Time ```bash --start now # Default: 30 seconds from submission (5 minutes with --ledger) --start 2026-02-01T00:00:00Z # RFC3339 format ``` ### Duration Duration uses Go format (hours): | Value | Period | |-------|--------| | `336h` | 14 days (minimum on mainnet) | | `720h` | 30 days | | `2160h` | 90 days | | `8760h` | 365 days | ## Reward Address By default, rewards go to your P-Chain address. Specify a different address with: ```bash --reward-address P-avax1xyz789... ``` ## Next Steps Create subnets and manage L1 validators Complete staking command reference # Subnets (/docs/tooling/platform-cli/subnets) --- title: Subnets description: Create subnets, transfer ownership, and convert to L1 blockchains --- Platform CLI supports creating subnets, transferring ownership, and converting them to L1 blockchains. ## Create a Subnet ```bash platform subnet create --key-name mykey --network fuji ``` ``` Creating new subnet... Owner: P-fuji1abc123... Submitting transaction... Subnet created successfully! Subnet ID: 2QYfFcfZ9... ``` The subnet owner is the wallet address used to create it. You need P-Chain AVAX balance to create subnets. ## Transfer Subnet Ownership Transfer ownership to a new P-Chain address: ```bash platform subnet transfer-ownership \ --subnet-id 2QYfFcfZ9... \ --new-owner P-fuji1xyz789... \ --key-name mykey ``` ## Convert Subnet to L1 Convert a permissioned subnet to an L1 blockchain. This operation is **irreversible**. 
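Because the conversion cannot be undone, it is worth sanity-checking the inputs first. A hypothetical pre-flight check (not a Platform CLI feature) that verifies the manual-mode validator lists carry the same number of comma-separated entries, since they are matched index by index:

```shell
# Example manual-mode inputs (placeholder values).
NODE_IDS="NodeID-A,NodeID-B"
BLS_KEYS="0xabc,0xdef"
BLS_POPS="0x111,0x222"

# Count comma-separated entries in a list.
count() { printf '%s\n' "$1" | tr ',' '\n' | wc -l; }

if [ "$(count "$NODE_IDS")" -eq "$(count "$BLS_KEYS")" ] &&
   [ "$(count "$NODE_IDS")" -eq "$(count "$BLS_POPS")" ]; then
  echo "validator lists aligned"
else
  echo "validator list length mismatch -- aborting" >&2
  exit 1
fi
```

Only run `platform subnet convert-l1` once a check like this passes.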
### Auto-Discovery Mode Provide validator node addresses and let the CLI fetch NodeID and BLS credentials: ```bash platform subnet convert-l1 \ --subnet-id 2QYfFcfZ9... \ --chain-id 3RZgGdaH1... \ --manager 0x1234567890abcdef1234567890abcdef12345678 \ --validators 127.0.0.1:9650,127.0.0.1:9652 \ --validator-balance 1.0 \ --key-name mykey ``` ### Manual Mode Provide validator data explicitly (all lists must be comma-separated and aligned by index): ```bash platform subnet convert-l1 \ --subnet-id 2QYfFcfZ9... \ --chain-id 3RZgGdaH1... \ --manager 0x1234... \ --validator-node-ids NodeID-A...,NodeID-B... \ --validator-bls-public-keys 0xabc...,0xdef... \ --validator-bls-pops 0x111...,0x222... \ --validator-balance 1.0 \ --key-name mykey ``` ### Mock Validator (Testing) For local testing, generate a mock validator with random BLS credentials: ```bash platform subnet convert-l1 \ --subnet-id 2QYfFcfZ9... \ --chain-id 3RZgGdaH1... \ --mock-validator \ --key-name mykey \ --rpc-url http://127.0.0.1:9650 ``` # Transfers (/docs/tooling/platform-cli/transfers) --- title: Transfers description: Send AVAX on P-Chain and perform cross-chain transfers with Platform CLI --- Platform CLI supports P-Chain AVAX transfers and cross-chain transfers between P-Chain and C-Chain. ## Amount Formats | Format | Description | Example | |--------|-------------|---------| | `--amount` | Human-readable AVAX | `--amount 10.5` | | `--amount-navax` | Exact nAVAX (1 AVAX = 1,000,000,000 nAVAX) | `--amount-navax 10500000000` | These flags are mutually exclusive. Use `--amount-navax` when exact precision matters for large transfers. ## P-Chain Send Send AVAX to another P-Chain address: ```bash platform transfer send --to P-fuji1abc123... --amount 10 --key-name mykey # With exact nAVAX amount platform transfer send --to P-fuji1abc123... --amount-navax 10000000000 --key-name mykey # On mainnet platform transfer send --to P-avax1abc123... 
--amount 100 --network mainnet --key-name mykey ``` ## Cross-Chain: P-Chain to C-Chain Transfer AVAX from P-Chain to C-Chain in one step (handles export + import automatically): ```bash platform transfer p-to-c --amount 10 --key-name mykey ``` Output: ``` Transferring 10000000000 nAVAX (10.000000000 AVAX) from P-Chain to C-Chain... P-Chain Address: P-fuji1abc123... C-Chain Address: 0xdef456... Step 1/2: Exporting from P-Chain... Export TX ID: 2QYfFcfZ9... Step 2/2: Importing to C-Chain... Import TX ID: 3RZgGdaH1... Transfer complete! ``` ## Cross-Chain: C-Chain to P-Chain ```bash platform transfer c-to-p --amount 5 --key-name mykey ``` ## Manual Two-Step Transfers For advanced use cases, perform export and import separately: ```bash # Step 1: Export platform transfer export --from p --to c --amount 10 --key-name mykey # Step 2: Import (after network confirmation) platform transfer import --from p --to c --key-name mykey ``` The `--from` and `--to` flags accept `p` or `c`. ## Checking Balances ```bash platform wallet balance --key-name mykey --network fuji ``` ``` P-Chain Address: P-fuji1abc123... Balance: 100.000000000 AVAX ``` ## Viewing Addresses The same private key derives different addresses on each chain: ```bash platform wallet address --key-name mykey ``` ``` P-Chain Address: P-fuji1abc123... EVM Address: 0xdef456... ``` ## Next Steps Use P-Chain AVAX for staking Create subnets with P-Chain AVAX # Data Visualization (/docs/tooling/avalanche-postman/data-visualization) --- title: Data Visualization description: Data visualization for Avalanche APIs using Postman --- Data visualization is available for a number of API calls whose responses are transformed and presented in tabular format for easy reference. Please check out [Installing Postman Collection](/docs/tooling/avalanche-postman/index) and [Making API Calls](/docs/tooling/avalanche-postman/making-api-calls) beforehand, as this guide assumes that the user has already gone through these steps. 
Data visualizations are available for the following API calls:

### C-Chain

- [`eth_baseFee`](/docs/rpcs/c-chain#eth_basefee)
- [`eth_blockNumber`](https://www.quicknode.com/docs/ethereum/eth_blockNumber)
- [`eth_chainId`](https://www.quicknode.com/docs/ethereum/eth_chainId)
- [`eth_getBalance`](https://www.quicknode.com/docs/ethereum/eth_getBalance)
- [`eth_getBlockByHash`](https://www.quicknode.com/docs/ethereum/eth_getBlockByHash)
- [`eth_getBlockByNumber`](https://www.quicknode.com/docs/ethereum/eth_getBlockByNumber)
- [`eth_getTransactionByHash`](https://www.quicknode.com/docs/ethereum/eth_getTransactionByHash)
- [`eth_getTransactionReceipt`](https://www.quicknode.com/docs/ethereum/eth_getTransactionReceipt)
- [`avax.getAtomicTx`](/docs/rpcs/c-chain#avaxgetatomictx)

### P-Chain

- [`platform.getCurrentValidators`](/docs/rpcs/p-chain#platformgetcurrentvalidators)

### X-Chain

- [`avm.getAssetDescription`](/docs/rpcs/x-chain#avmgetassetdescription)
- [`avm.getBlock`](/docs/rpcs/x-chain#avmgetblock)
- [`avm.getBlockByHeight`](/docs/rpcs/x-chain#avmgetblockbyheight)
- [`avm.getTx`](/docs/rpcs/x-chain#avmgettx)

## Data Visualization Features

- The response output is displayed in tabular format, with each data category in a different color. ![Data Visualization Feature](/images/visualize1.png)
- Unix timestamps are converted to date and time. ![Data Visualization Feature](/images/visualize2.png)
- Hexadecimal values are converted to decimal. ![Data Visualization Feature](/images/visualize3.png)
- Native token amounts are shown as AVAX and/or gwei and wei. ![Data Visualization Feature](/images/visualize4.png)
- The name of the transaction type is shown beside the transaction type ID.
![Data Visualization Feature](/images/visualize5.png)

- Percentages are added for the amount of gas used, showing what percentage of the `gasLimit` was consumed. ![Data Visualization Feature](/images/visualize6.png)
- The output for atomic transactions is converted from hexadecimal to a human-readable format. Please note that this only works for C-Chain Mainnet, not Fuji. ![Data Visualization Feature](/images/visualize7.png)

## How to Visualize Responses

1. After [installing Postman](/docs/tooling/avalanche-postman#postman-installation) and importing the [Avalanche collection](/docs/tooling/avalanche-postman#collection-import), choose an API to call.
2. Make the call.
3. Click on the **Visualize** tab.
4. All data from the output is now displayed in tabular format.

![Data Visualization Feature](/images/visualize8.png) ![Data Visualization Feature](/images/visualize9.png)

## Examples

### `eth_getTransactionByHash`

### `avm.getBlock`

### `platform.getCurrentValidators`

### `avax.getAtomicTx`

### `eth_getBalance`

# Installing Postman Collection (/docs/tooling/avalanche-postman)

---
title: Installing Postman Collection
description: Installing Postman collection for Avalanche APIs
---

We have made a Postman collection for Avalanche that includes all the public API calls available on an [AvalancheGo instance](https://github.com/ava-labs/avalanchego/releases/), along with environment variables, allowing developers to quickly issue commands to a node and see the response, without having to
copy and paste long and complicated `curl` commands. [Link to GitHub](https://github.com/ava-labs/avalanche-postman-collection/) What Is Postman?[​](#what-is-postman "Direct link to heading") -------------------------------------------------------------- Postman is a free tool used by developers to quickly and easily send REST, SOAP, and GraphQL requests and test APIs. It is available as both an online tool and an application for Linux, MacOS and Windows. Postman allows you to quickly issue API calls and see the responses in a nicely formatted, searchable form. Along with the API collection, there is also the example Avalanche environment for Postman, that defines common variables such as IP address of the node, Avalanche addresses and similar common elements of the queries, so you don't have to enter them multiple times. Combined, they will allow you to easily keep tabs on an Avalanche node, check on its state and do quick queries to find out details about its operation. Setup[​](#setup "Direct link to heading") ----------------------------------------- ### Postman Installation[​](#postman-installation "Direct link to heading") Postman can be installed locally or used as a web app. We recommend installing the application, as it simplifies operation. You can download Postman from its [website](https://www.postman.com/downloads/). It is recommended that you sign up using your email address as then your workspace can be easily backed up and shared between the web app and the app installed on your computer. ![Download Postman](/images/postman1.png) When you run Postman for the first time, it will prompt you to create an account or log in. Again, it is not necessary, but recommended. ### Collection Import[​](#collection-import "Direct link to heading") Select `Create workspace` from Workspaces tab and follow the prompts to create a new workspace. This will be where the rest of the work will be done. 
![Create workspace](/images/postman2.png) We're ready to import the collection. In the top-left corner of the Workspaces tab, select `Import` and switch to the `Link` tab. ![Import collection](/images/postman3.png) There, paste the link to the collection into the URL input field: ```bash https://raw.githubusercontent.com/ava-labs/avalanche-postman-collection/master/Avalanche.postman_collection.json ``` Postman will recognize the format of the file content and offer to import the file as a collection. Complete the import. You will now have the Avalanche collection in your workspace. ![Collection content](/images/postman4.png) ### Environment Import[​](#environment-import "Direct link to heading") Next, we have to import the environment variables. Again, in the top-left corner of the Workspaces tab, select `Import` and switch to the `Link` tab. This time, paste the link to the environment JSON: ```bash https://raw.githubusercontent.com/ava-labs/avalanche-postman-collection/master/Example-Avalanche-Environment.postman_environment.json ``` Postman will recognize the format of the file: ![Environment import](/images/postman5.png) Import it into your workspace. Next, we need to edit the environment to suit the actual parameters of your particular installation, that is, the parameters that differ from the defaults in the imported file. Select the Environments tab and choose the Avalanche environment that was just added. You can directly edit any values here: ![Environment content](/images/postman6.png) At a minimum, you will need to change the IP address of your node, which is the value of the `host` variable. Change it to the IP of your node (change both the `initial` and `current` values). Also, if your node is not running on the same machine where you installed Postman, make sure your node accepts connections on the API port from the outside by checking the appropriate [command line option](/docs/nodes/configure/configs-flags#http-server). 
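As a quick sanity check outside Postman, you can confirm the node answers at the address the `host` variable will point to. This is a sketch, assuming a local node on AvalancheGo's default API port 9650:

```bash
# The 'host' value from the Postman environment plus AvalancheGo's default
# HTTP API port (9650) form the base URL of every call.
HOST="127.0.0.1"           # same value as the 'host' environment variable
BASE_URL="http://${HOST}:9650"
echo "$BASE_URL"
# Verify the node is reachable before switching to Postman:
# curl -s "$BASE_URL/ext/health"
```

If the health check fails here, Postman's calls will fail too, so fix connectivity first.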
Now that everything is sorted out, we're ready to query the node. Conclusion[​](#conclusion "Direct link to heading") --------------------------------------------------- If you have completed the tutorial, you can now quickly [issue API calls](/docs/tooling/avalanche-postman/making-api-calls) to your node without wrestling with `curl` commands in the terminal. This allows you to quickly see the state of your node, track changes, or double-check its health or liveness. Contributing[​](#contributing "Direct link to heading") ------------------------------------------------------- We hope to keep this collection continuously up-to-date with the [Avalanche APIs](/docs/rpcs/p-chain). If you're able to help improve the Avalanche Postman Collection in any way: first, create a feature branch by branching off of `master`; next, make the improvements on your feature branch; and lastly, create a [pull request](https://github.com/ava-labs/builders-hub/pulls) to merge your work back into `master`. If you have any other questions or suggestions, come [talk to us](https://chat.avalabs.org/). # Making API Calls (/docs/tooling/avalanche-postman/making-api-calls) --- title: Making API Calls description: Making API calls using Postman --- After [installing the Postman collection](/docs/tooling/avalanche-postman/index) and importing the [Avalanche collection](/docs/tooling/avalanche-postman/index#collection-import), you can choose an API to make the call. You should also make sure the URL is the correct one for the call. This URL consists of a base URL and an endpoint: - The base URL is set by an environment variable called `baseURL`, which defaults to Avalanche's [public API](/docs/rpcs#mainnet-rpc---public-api-server). If you need to make a local API call, simply change the URL to localhost. This can be done by changing the value of the `baseURL` variable or by changing the URL directly on the call tab. 
Check out the [RPC providers](/docs/rpcs) to see all public URLs. - The API endpoint depends on which API is used. Please check out [our APIs](/docs/rpcs/c-chain) to find the proper endpoint. The last step is to add the parameters needed for the call. For example, if a user wants to fetch data about a certain transaction, the transaction hash is needed. For fetching data about a block, depending on the call used, the block hash or number will be required. After clicking the **Send** button, if the call is successful, the output will be displayed in the **Body** tab. Data visualization is available for a number of methods. Learn how to use it with the help of [this](/docs/tooling/avalanche-postman/data-visualization) guide. ![Make Call](/images/api-call1.png) Examples[​](#examples "Direct link to heading") ----------------------------------------------- ### C-Chain Public API Call[​](#c-chain-public-api-call "Direct link to heading") Fetching data about a C-Chain transaction using `eth_getTransactionByHash`. ### X-Chain Public API Call[​](#x-chain-public-api-call "Direct link to heading") Fetching data about an X-Chain block using `avm.getBlock`. ### P-Chain Public API Call[​](#p-chain-public-api-call "Direct link to heading") Getting the current P-Chain height using `platform.getHeight`. ### API Call Using Variables[​](#api-call-using-variables "Direct link to heading") Let's say we want to fetch data about this `0x20cb0c03dbbe39e934c7bb04979e3073cc2c93defa30feec41198fde8fabc9b8` C-Chain transaction using both: - `eth_getTransactionReceipt` - `eth_getTransactionByHash` We can set up an environment variable with the transaction hash as its value and use it in both calls. Find out more about variables [here](/docs/tooling/avalanche-postman/variables). 
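For reference, the JSON-RPC body Postman assembles for one of these calls can be reproduced in plain shell. This is a sketch using the example transaction hash above and Avalanche's public C-Chain endpoint:

```bash
# Build the raw JSON-RPC request that Postman sends for eth_getTransactionByHash.
TX_HASH="0x20cb0c03dbbe39e934c7bb04979e3073cc2c93defa30feec41198fde8fabc9b8"
PAYLOAD=$(printf '{"jsonrpc":"2.0","id":1,"method":"eth_getTransactionByHash","params":["%s"]}' "$TX_HASH")
echo "$PAYLOAD"
# To send it outside Postman against the public C-Chain API:
# curl -s -X POST -H 'content-type: application/json' \
#   --data "$PAYLOAD" https://api.avax.network/ext/bc/C/rpc
```

Seeing the raw payload makes it clear what the collection saves you from typing for every call.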
# Variable Types (/docs/tooling/avalanche-postman/variables) --- title: Variable Types description: Variable types for Avalanche APIs using Postman --- Postman supports variables at different scopes, as follows: - **Global variables**: A global variable can be used with every collection, allowing the user to share data between collections. - **Collection variables**: These are available for a certain collection and are independent of any environment. - **Environment variables**: An environment allows you to use a set of variables, called environment variables. Each collection can use one environment at a time, but the same environment can be used with multiple collections. This type of variable makes the most sense to use with the Avalanche Postman collection, so an environment file with preset variables is provided. - **Data variables**: Provided by external CSV and JSON files. - **Local variables**: Temporary variables that can be used in a script. For example, the block number returned from querying a transaction can be a local variable. It exists only for that request, and it will change when fetching data for another transaction hash. ![](/images/variables1.png) There are two variable types: - **Default type**: Every variable is automatically assigned this type when created. - **Secret type**: Masks a variable's value. It is used to store sensitive data. Only default variables are used in the Avalanche environment file. To learn more about using secret variables, please check out the [Postman documentation](https://learning.postman.com/docs/sending-requests/variables/#variable-types). The [environment variables](/docs/tooling/avalanche-postman/index#environment-import) can be used to ease the process of making an API call. A variable contains the preset value of an API parameter, so it can be used in multiple places without having to add the value manually. 
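In a request body, Postman references any of these variables with double curly braces, which are resolved from the active scope before the request is sent. The sketch below prints what a call body looks like with a hypothetical `txHash` environment variable in place of a hardcoded hash:

```bash
# Print a request body that uses Postman's {{variable}} syntax.
# 'txHash' is a hypothetical environment variable name.
cat <<'EOF'
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "eth_getTransactionByHash",
  "params": ["{{txHash}}"]
}
EOF
```

Any call whose body contains `{{txHash}}` picks up the current value automatically, which is why updating one variable updates every call that uses it.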
How to Use Variables[​](#how-to-use-variables "Direct link to heading") ----------------------------------------------------------------------- Let's say we want to use both `eth_getTransactionByHash` and `eth_getTransactionReceipt` for a transaction with the following hash: `0x631dc45342a47d360915ea0d193fc317777f8061fe57b4a3e790e49d26960202`. We can set a variable that contains the transaction hash and then use it in both API calls. When we later want to fetch data about another transaction, we can simply update the variable, and the new transaction hash will again be used in both calls. Below are examples of how to set the transaction hash as a variable of each scope. ### Set a Global Variable[​](#set-a-global-variable "Direct link to heading") Go to Environments ![](/images/variables2.png) Select Globals ![](/images/variables3.png) Click on the Add a new variable area ![](/images/variables4.png) Add the variable name and value. Make sure to use quotes. ![](/images/variables5.png) Click Save ![](/images/variables6.png) Now it can be used on any call from any collection. ### Set a Collection Variable[​](#set-a-collection-variable "Direct link to heading") Click on the three dots next to the Avalanche collection and select Edit ![](/images/variables7.png) Go to the Variables tab ![](/images/variables8.png) Click on the Add a new variable area ![](/images/variables9.png) Add the variable name and value. Make sure to use quotes. ![](/images/variables10.png) Click Save ![](/images/variables11.png) Now it can be used on any call from this collection. ### Set an Environment Variable[​](#set-an-environment-variable "Direct link to heading") Go to Environments ![](/images/variables12.png) Select an environment. In this case, it is Example-Avalanche-Environment. ![](/images/variables13.png) Scroll down until you find the Add a new variable area and click on it. ![](/images/variables14.png) Add the variable name and value. Make sure to use quotes. 
![](/images/variables15.png) Click Save. ![](/images/variables16.png) The variable is now available for any call in any collection that uses this environment. ### Set a Data Variable[​](#set-a-data-variable "Direct link to heading") Please check out [this guide](https://www.softwaretestinghelp.com/postman-variables/#5_Data) and [this video](https://www.youtube.com/watch?v=9wl_UQtRLw4) on how to use data variables. ### Set a Local Variable[​](#set-a-local-variable "Direct link to heading") Please check out [this guide](https://www.softwaretestinghelp.com/postman-variables/#4_Local) and [this video](https://www.youtube.com/watch?v=gOF7Oc0sXmE) on how to use local variables. # Overview (/docs/tooling/tmpnet) --- title: Overview description: Create and manage temporary Avalanche networks for local development and testing --- tmpnet creates temporary Avalanche networks on your local machine. You get a complete multi-node network with consensus, P2P communication, and pre-funded test keys—everything you need to test custom VMs, L1s, and applications before deploying to testnet or mainnet. Networks run as native processes (no Docker needed). All configuration lives on disk at `~/.tmpnet/networks/`, making it easy to inspect state, share configs, or debug issues. 
## What You Get | Feature | Description | |---------|-------------| | **Multi-node networks** | Spin up 2-50 validator nodes in under a minute | | **Pre-funded keys** | 50 keys with AVAX balances on P, X, and C-Chain | | **Custom VMs** | Deploy and test your Virtual Machines | | **Subnets** | Create subnets with specific validator sets | | **Monitoring** | Prometheus metrics and Promtail logs out of the box | | **CLI + Go API** | Use `tmpnetctl` commands or Go code | ## Use Cases | Scenario | What You Can Test | |----------|-------------------| | **L1 Development** | Run your L1 with multiple validators locally before deploying to Fuji | | **Custom VMs** | Test VM behavior with real consensus across multiple nodes | | **Staking Operations** | Add validators, test delegation, verify rewards distribution | | **Subnet Testing** | Create subnets, manage validators, test cross-subnet messaging | | **Integration Tests** | Write automated Go tests that spin up networks on demand | ## Basic Workflow ```bash # Start a 5-node network tmpnetctl start-network --avalanchego-path=./bin/avalanchego # Network is running at ~/.tmpnet/networks/latest # Get node URIs cat ~/.tmpnet/networks/latest/NodeID-*/process.json | jq -r '.uri' # Get pre-funded keys cat ~/.tmpnet/networks/latest/config.json | jq -r '.preFundedKeys[0]' # Stop when done tmpnetctl stop-network ``` ## Network Directory Structure Each network you create gets its own directory at `~/.tmpnet/networks/[timestamp]/`: | Path | Contents | |------|----------| | `config.json` | Network settings, pre-funded keys | | `genesis.json` | Genesis configuration | | `NodeID-*/` | Per-node directories (logs, database, config) | | `NodeID-*/process.json` | Running node info (PID, URI, ports) | | `metrics.txt` | Grafana dashboard link | The `latest` symlink always points to your most recent network. Both `tmpnetctl` and your Go code can manage the same networks because everything is file-based. No daemon, no Docker, no magic. 
## Getting Started Build tmpnet and avalanchego Create your first network Deploy your VM to a local network Test validator operations ## Support & Resources - [GitHub Repository](https://github.com/ava-labs/avalanchego/tree/master/tests/fixture/tmpnet) - [Full README](https://github.com/ava-labs/avalanchego/blob/master/tests/fixture/tmpnet/README.md) - [Discord Community](https://chat.avalabs.org/) - [Documentation](https://docs.avax.network/) # Installation (/docs/tooling/tmpnet/installation) --- title: Installation description: Set up tmpnet and its prerequisites for local network testing --- This guide walks you through setting up tmpnet for testing your Avalanche applications and L1s. ## Prerequisites ### 1. Operating System tmpnet runs on: - **macOS** (Intel and Apple Silicon) - **Linux** Windows is not currently supported. ### 2. Go tmpnet requires **Go 1.21 or later**. Check your version: ```bash go version ``` If you need to install or update Go, visit [golang.org/dl](https://golang.org/dl/). ### 3. Git Ensure you have Git installed: ```bash git --version ``` ## Installation ### Step 1: Clone AvalancheGo tmpnet is part of the AvalancheGo repository: ```bash # Clone the repository git clone https://github.com/ava-labs/avalanchego.git cd avalanchego ``` ### Step 2: Build the Binaries Build both AvalancheGo and tmpnetctl: ```bash # Build AvalancheGo ./scripts/build.sh # Build tmpnetctl ./scripts/build_tmpnetctl.sh ``` You now have: - `build/avalanchego` and `build/tmpnetctl` - compiled binaries - `bin/avalanchego` and `bin/tmpnetctl` - thin wrappers that rebuild if needed ### Step 3: Verify Installation Test that the binaries work: ```bash # Check AvalancheGo ./bin/avalanchego --version # Check tmpnetctl ./bin/tmpnetctl --help ``` You should see version information and available commands. 
## Understanding the Directory Structure AvalancheGo has two directories for binaries: ### `build/` Directory (Actual Binaries) Contains the compiled binaries: - `build/avalanchego` - Main node binary - `build/tmpnetctl` - Network management CLI - `build/plugins/` - Custom VM plugins ### `bin/` Directory (Convenience Wrappers) Symlinks that rebuild when needed, so they're safe defaults while iterating: - `bin/avalanchego` → `scripts/run_avalanchego.sh` - `bin/tmpnetctl` → `scripts/run_tmpnetctl.sh` For most workflows (and in the upstream README), use the `bin/` wrappers or enable the repo's `.envrc` so `tmpnetctl` is on your `PATH`. ## Optional: Simplified Setup with direnv [direnv](https://direnv.net/) automatically loads the repo's `.envrc` so tmpnet has the paths it needs. ### Install and Configure **macOS:** ```bash brew install direnv echo 'eval "$(direnv hook zsh)"' >> ~/.zshrc source ~/.zshrc ``` **Linux:** ```bash # Ubuntu/Debian sudo apt-get install direnv # Add to shell echo 'eval "$(direnv hook bash)"' >> ~/.bashrc source ~/.bashrc ``` ### Enable in AvalancheGo ```bash cd /path/to/avalanchego direnv allow ``` The repo's `.envrc` then: - Adds `bin/` to `PATH` so you can run `tmpnetctl` directly - Sets `AVALANCHEGO_PATH=$PWD/bin/avalanchego` - Sets `AVAGO_PLUGIN_DIR=$PWD/build/plugins` (and creates the dir) - Sets `TMPNET_NETWORK_DIR=~/.tmpnet/networks/latest` Now you can run: ```bash tmpnetctl start-network --node-count=3 ``` ## Optional: Monitoring Tools To collect metrics and logs from your networks: **Using Nix (Recommended):** ```bash nix develop # Provides prometheus and promtail ``` **Manual Installation:** - **Prometheus**: Download from [prometheus.io/download](https://prometheus.io/download/) or `brew install prometheus` - **Promtail**: Download from [Grafana Loki releases](https://github.com/grafana/loki/releases) See the [Monitoring guide](/docs/tooling/tmpnet/guides/monitoring) for setup details. 
## Environment Variables Key environment variables tmpnet uses: | Variable | Purpose | Default | |----------|---------|---------| | `AVALANCHEGO_PATH` | Path to avalanchego binary (required unless passed as `--avalanchego-path`) | None | | `AVAGO_PLUGIN_DIR` | Plugin directory for custom VMs | `~/.avalanchego/plugins` (or `$PWD/build/plugins` via `.envrc`) | | `TMPNET_ROOT_NETWORK_DIR` | Where new networks are created | `~/.tmpnet/networks` | | `TMPNET_NETWORK_DIR` | Existing network to target for stop/restart/check commands | Unset (set automatically by `network.env` or `.envrc`) | ### Recommended Shell Setup If you aren't using direnv, add something like this to `~/.bashrc` or `~/.zshrc`: ```bash export AVALANCHEGO_PATH=~/avalanchego/bin/avalanchego export AVAGO_PLUGIN_DIR=~/.avalanchego/plugins export TMPNET_NETWORK_DIR=~/.tmpnet/networks/latest # optional convenience export PATH=$PATH:~/avalanchego/bin ``` ## Plugin Directory Setup If you're testing custom VMs, create the plugin directory: ```bash mkdir -p ~/.avalanchego/plugins ``` Place your custom VM binaries in this directory. The plugin binary name should match your VM name. ## Troubleshooting ### Command not found: tmpnetctl If you get "command not found": **Option 1:** Use the full path ```bash ./bin/tmpnetctl --help ``` **Option 2:** Add to PATH ```bash export PATH=$PATH:$(pwd)/bin tmpnetctl --help ``` **Option 3:** Use direnv (recommended) ```bash direnv allow tmpnetctl --help ``` ### Build Failures If builds fail: 1. **Check Go version:** ```bash go version # Must be 1.21 or later ``` 2. **Check you're in the repository root:** ```bash pwd # Should be /path/to/avalanchego ls scripts/build.sh # Should exist ``` 3. 
**Try cleaning and rebuilding:** ```bash rm -rf build/ ./scripts/build.sh ``` ### Permission Denied If you get permission errors: ```bash chmod +x ./bin/tmpnetctl chmod +x ./bin/avalanchego ``` ### Binary Not Found Error If tmpnetctl says avalanchego not found: ```bash # Verify the binary exists ls -lh ./bin/avalanchego # Use absolute path when starting networks tmpnetctl start-network --avalanchego-path="$(pwd)/bin/avalanchego" ``` ## Next Steps Now that tmpnet is installed, create your first network! Start your first temporary network in minutes Learn how to test your custom VM or L1 # Quick Start (/docs/tooling/tmpnet/quick-start) --- title: Quick Start description: Start your first temporary Avalanche network in minutes --- This guide will help you create, interact with, and manage your first temporary network using tmpnet. ## Before You Start Make sure you've completed the [installation](/docs/tooling/tmpnet/installation) and have: - Built `avalanchego` and `tmpnetctl` - A shell in the avalanchego repo root - Either `direnv allow`'d the repo **or** can pass `--avalanchego-path` ## Start Your First Network ### Basic Start Command Start a 2-node network (default is 5): ```bash cd /path/to/avalanchego # If you enabled direnv (.envrc sets paths) tmpnetctl start-network --node-count=2 # Without direnv, pass the avalanchego path explicitly ./bin/tmpnetctl start-network \ --avalanchego-path="$(pwd)/bin/avalanchego" \ --node-count=2 ``` **Expected Output:** ``` [12-05|15:23:26.831] INFO tmpnet/network.go:254 preparing configuration for new network [12-05|15:23:26.839] INFO tmpnet/network.go:385 starting network {"networkDir": "/Users/you/.tmpnet/networks/20251205-152326.831812", "uuid": "0ef20abc-4d96-438f-943c-a4442254b9bb"} [12-05|15:23:27.992] INFO tmpnet/process_runtime.go:148 started local node {"nodeID": "NodeID-Pw8tmrG..."} [12-05|15:23:28.395] INFO tmpnet/process_runtime.go:148 started local node {"nodeID": "NodeID-KBxAJo5..."} [12-05|15:23:28.396] INFO 
tmpnet/network.go:400 waiting for nodes to report healthy [12-05|15:23:30.399] INFO tmpnet/network.go:976 node is healthy {"nodeID": "NodeID-KBxAJo5...", "uri": "http://127.0.0.1:56395"} [12-05|15:23:33.999] INFO tmpnet/network.go:976 node is healthy {"nodeID": "NodeID-Pw8tmrG...", "uri": "http://127.0.0.1:56386"} [12-05|15:23:33.999] INFO tmpnet/network.go:404 started network Configure tmpnetctl to target this network by default with one of the following statements: - source /Users/you/.tmpnet/networks/20251205-152326.831812/network.env - export TMPNET_NETWORK_DIR=/Users/you/.tmpnet/networks/20251205-152326.831812 - export TMPNET_NETWORK_DIR=/Users/you/.tmpnet/networks/latest ``` The network is now running with 2 validator nodes! ### With direnv (Simpler) If you've set up direnv: ```bash cd /path/to/avalanchego direnv allow # Much simpler! tmpnetctl start-network --node-count=2 ``` ## Configure Your Shell To manage your network without specifying `--network-dir` every time, set `TMPNET_NETWORK_DIR`: ```bash # Option 1: Use the 'latest' symlink (recommended) export TMPNET_NETWORK_DIR=~/.tmpnet/networks/latest ``` The `latest` symlink always points to the most recently created network. 
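The symlink behavior can be illustrated in isolation; the sketch below uses a throwaway directory and a hypothetical timestamp rather than a real network:

```bash
# Each new network gets a timestamped directory, and 'latest' is re-pointed
# at it. Demonstrated here with a temporary directory instead of ~/.tmpnet.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/20251205-152326.831812"
ln -sfn "$ROOT/20251205-152326.831812" "$ROOT/latest"
readlink "$ROOT/latest"   # prints the timestamped directory path
rm -rf "$ROOT"
```

Because `latest` is just a symlink, pointing `TMPNET_NETWORK_DIR` at it means your shell always targets whichever network was created most recently.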
Now you can run commands without flags: ```bash tmpnetctl stop-network # Uses TMPNET_NETWORK_DIR tmpnetctl restart-network ``` **Make it permanent** by adding to your shell config: ```bash # For zsh (macOS) echo 'export TMPNET_NETWORK_DIR=~/.tmpnet/networks/latest' >> ~/.zshrc source ~/.zshrc # For bash (Linux) echo 'export TMPNET_NETWORK_DIR=~/.tmpnet/networks/latest' >> ~/.bashrc source ~/.bashrc ``` **Alternative**: Source the network's env file directly: ```bash source ~/.tmpnet/networks/latest/network.env ``` ## Explore Your Network ### Network Directory Structure ```bash ls ~/.tmpnet/networks/latest/ ``` **Output:** ``` config.json # Network configuration genesis.json # Genesis file metrics.txt # Grafana dashboard link network.env # Environment setup script NodeID-74mGyq7dVVCeE4RUn4pufRMvYTFTEykcp/ # Node 1 directory NodeID-BTtC98RhLA5mbctKczZQC2Rt6N9DziM4c/ # Node 2 directory ``` ### Find Node API Endpoints Each node exposes API endpoints on dynamically allocated ports. When tmpnet starts a node with `--http-port=0`, the OS assigns an available port. AvalancheGo then writes the actual allocated port to `process.json` via the `--process-context-file` flag. The `process.json` file is created by **avalanchego itself**, not by tmpnetctl. When tmpnet starts a node, it passes: - `--http-port=0` and `--staking-port=0` for dynamic port allocation - `--process-context-file=[node-dir]/process.json` to specify where avalanchego should write runtime info AvalancheGo then writes its PID, URI (with the actual allocated port), and staking address to this file once it starts. 
```bash # View all node URIs cat ~/.tmpnet/networks/latest/NodeID-*/process.json | jq -r '.uri' ``` **Example output:** ``` http://127.0.0.1:56395 http://127.0.0.1:56386 ``` ### Get a Single Node URI ```bash # Store first node URI in a variable NODE_URI=$(cat ~/.tmpnet/networks/latest/NodeID-*/process.json | jq -r '.uri' | head -1) echo $NODE_URI ``` ### Call Node RPCs Use the URI to call standard Avalanche APIs over HTTP: ```bash # Health curl -s "$NODE_URI/ext/health" | jq '.healthy' # Node ID curl -s -X POST --data '{ "jsonrpc": "2.0", "id": 1, "method": "info.getNodeID" }' -H 'content-type:application/json;' "$NODE_URI/ext/info" | jq # C-Chain RPC (replace with your chain ID if different) CHAIN_ID=C curl -s -X POST --data '{ "jsonrpc":"2.0", "id":1, "method":"eth_blockNumber", "params":[] }' -H 'content-type:application/json;' "$NODE_URI/ext/bc/$CHAIN_ID/rpc" | jq ``` ## Interact with Your Network ### Check Node Health ```bash curl -s http://127.0.0.1:56395/ext/health | jq '.healthy' ``` **Response:** ```json true ``` ### Get Node Information ```bash curl -s -X POST --data '{ "jsonrpc": "2.0", "id": 1, "method": "info.getNodeID" }' -H 'content-type:application/json;' http://127.0.0.1:56395/ext/info | jq ``` **Response:** ```json { "jsonrpc": "2.0", "result": { "nodeID": "NodeID-74mGyq7dVVCeE4RUn4pufRMvYTFTEykcp", "nodePOP": { "publicKey": "...", "proofOfPossession": "..." } }, "id": 1 } ``` ### Check Network Info ```bash curl -s -X POST --data '{ "jsonrpc": "2.0", "id": 1, "method": "info.getNetworkID" }' -H 'content-type:application/json;' http://127.0.0.1:56395/ext/info | jq ``` ## Use Pre-funded Keys Every tmpnet network comes with **50 pre-funded test keys** ready for immediate use. These keys have large balances on all chains (X-Chain, P-Chain, and C-Chain). 
### View Pre-funded Keys ```bash cat ~/.tmpnet/networks/latest/config.json | jq '.preFundedKeys' ``` **Example output:** ```json [ "PrivateKey-ewoqjP7PxY4yr3iLTpLisriqt94hdyDFNgchSxGGztUrTXtNN", "PrivateKey-2VbLJLjPJn4XA8UqQ4BjmF5LmkZj4EZ2dXLKmKPmXTbKHvvQh6", "PrivateKey-R6e8f5QSa89DjpvL9asNdhdJ4u8VqzMJStPV8VVdDmLgPd8x4", ... ] ``` ### Get a Single Key for Testing ```bash # Store the first pre-funded key TEST_KEY=$(cat ~/.tmpnet/networks/latest/config.json | jq -r '.preFundedKeys[0]') echo $TEST_KEY # Output: PrivateKey-ewoqjP7PxY4yr3iLTpLisriqt94hdyDFNgchSxGGztUrTXtNN ``` ### What Are These Keys Funded With? Each key has balances on: - **P-Chain** - For staking and subnet operations - **X-Chain** - For asset transfers - **C-Chain** - For EVM transactions (contract deployments, etc.) You can use these keys immediately for transactions, contract deployments, staking operations, and validator management. ## Use with Foundry/Cast tmpnet networks work with standard EVM tools like Foundry. Here's how to connect. ### Set Up Environment Variables ```bash # Get the first node's URI and construct the C-Chain RPC URL NODE_URI=$(cat ~/.tmpnet/networks/latest/NodeID-*/process.json | jq -r '.uri' | head -1) export RPC_URL="${NODE_URI}/ext/bc/C/rpc" echo $RPC_URL # Example: http://127.0.0.1:56395/ext/bc/C/rpc ``` ### The EWOQ Test Key Every tmpnet network includes the well-known EWOQ test key, pre-funded with AVAX: | Property | Value | |----------|-------| | Private Key (hex) | `56289e99c94b6912bfc12adc093c9b51124f0dc54ac7a766b2bc5ccf558d8027` | | Address | `0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC` | | C-Chain Balance | 50,000,000 AVAX | ```bash export PRIVATE_KEY="56289e99c94b6912bfc12adc093c9b51124f0dc54ac7a766b2bc5ccf558d8027" ``` The EWOQ key is publicly known. Never use it on Fuji or Mainnet—only for local development. 
### Common Cast Commands ```bash # Check balance cast balance 0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC --rpc-url $RPC_URL # Get chain ID cast chain-id --rpc-url $RPC_URL # Get latest block cast block-number --rpc-url $RPC_URL # Send AVAX to another address cast send 0xYourAddress --value 1ether \ --rpc-url $RPC_URL \ --private-key $PRIVATE_KEY ``` ### Deploy Contracts with Forge ```bash # Deploy a contract forge create src/MyContract.sol:MyContract \ --rpc-url $RPC_URL \ --private-key $PRIVATE_KEY # Run a deployment script forge script script/Deploy.s.sol \ --rpc-url $RPC_URL \ --private-key $PRIVATE_KEY \ --broadcast ``` ### Chain Configuration For `foundry.toml`: ```toml [rpc_endpoints] local = "http://127.0.0.1:56395/ext/bc/C/rpc" [etherscan] # No explorer for local networks ``` Remember that tmpnet uses dynamic ports. If you restart your network, the port may change. Always re-export `RPC_URL` after restarting. ## View Network Configuration ### Network Configuration ```bash cat ~/.tmpnet/networks/latest/config.json | jq '{ uuid, owner, preFundedKeyCount: (.preFundedKeys | length) }' ``` ### Genesis Configuration ```bash cat ~/.tmpnet/networks/latest/genesis.json | jq '.networkID' ``` ### Node Configuration ```bash # View node flags cat ~/.tmpnet/networks/latest/NodeID-*/flags.json | head -1 | jq # View node runtime config cat ~/.tmpnet/networks/latest/NodeID-*/config.json | head -1 | jq ``` ## Manage Your Network ### Stop the Network ```bash tmpnetctl stop-network ``` **Output:** ``` Stopped network configured at: /Users/you/.tmpnet/networks/latest ``` ### Restart the Network ```bash tmpnetctl restart-network ``` This preserves all network data and configuration, restarting with the same genesis and keys. 
### Start a New Network ```bash # This creates a completely new network with new keys tmpnetctl start-network \ --avalanchego-path="$(pwd)/bin/avalanchego" \ --node-count=3 ``` ## View Node Logs ### Watch Logs in Real-time ```bash # Watch all node logs tail -f ~/.tmpnet/networks/latest/NodeID-*/logs/main.log # Watch a specific node tail -f ~/.tmpnet/networks/latest/NodeID-74mGyq7dVVCeE4RUn4pufRMvYTFTEykcp/logs/main.log ``` ### Search Logs for Errors ```bash grep -i "error" ~/.tmpnet/networks/latest/NodeID-*/logs/main.log ``` ### View Recent Log Lines ```bash tail -50 ~/.tmpnet/networks/latest/NodeID-*/logs/main.log ``` ## Common Operations ### Check Running Processes ```bash # View all node processes ps aux | grep avalanchego # Count running nodes ps aux | grep avalanchego | grep -v grep | wc -l ``` ### Get All Node URIs at Once ```bash # Create a simple script for process_file in ~/.tmpnet/networks/latest/NodeID-*/process.json; do jq -r '.uri' "$process_file" done ``` ### Check Node Process Details ```bash # View process information for all nodes cat ~/.tmpnet/networks/latest/NodeID-*/process.json | jq '{ pid, uri, stakingAddress }' ``` ## Directory Structure Reference ``` ~/.tmpnet/networks/latest/ ├── config.json # Network configuration (owner, UUID, keys) ├── genesis.json # Genesis file with allocations ├── metrics.txt # Grafana dashboard link ├── network.env # Shell environment variables └── NodeID-*/ # Per-node directory ├── config.json # Node runtime configuration ├── flags.json # Node flags ├── process.json # Process info (PID, URIs, ports) ├── logs/ │ └── main.log # Node logs ├── db/ # Node database └── chainData/ # Chain data ``` ## Troubleshooting ### Network Won't Start **Error: `avalanchego binary not found`** Solution: ```bash # Verify binary exists ls -lh ./bin/avalanchego # Use absolute path tmpnetctl start-network \ --avalanchego-path="$(pwd)/bin/avalanchego" ``` **Error: `address already in use`** Solution: ```bash # Check for running nodes ps aux 
| grep avalanchego # Stop existing network export TMPNET_NETWORK_DIR=~/.tmpnet/networks/latest tmpnetctl stop-network ``` ### Can't Connect to Nodes **Issue:** Curl commands fail Solution: ```bash # 1. Verify nodes are running ps aux | grep avalanchego # 2. Check actual URIs cat ~/.tmpnet/networks/latest/NodeID-*/process.json | jq -r '.uri' # 3. Test health endpoint with the correct URI (substitute the port from process.json) curl http://127.0.0.1:<port>/ext/health ``` ### Missing process.json Files **Issue:** `process.json` files don't exist in node directories, or ports in `flags.json` are all `0` This happens when running avalanchego manually without the `--process-context-file` flag. **Understanding the issue:** - `flags.json` showing `"http-port": "0"` is correct - this tells the OS to allocate a dynamic port - `process.json` is created by **avalanchego itself** when started with the `--process-context-file` flag - tmpnetctl automatically passes this flag, but manual setups need to include it **Solution for manual avalanchego setups:** ```bash # When starting avalanchego manually with dynamic ports, include: avalanchego \ --http-port=0 \ --staking-port=0 \ --process-context-file=/path/to/node/process.json \ # ... other flags # AvalancheGo will write the actual allocated ports to process.json ``` **If using tmpnetctl:** The `process.json` files should be created automatically. If they're missing, ensure: 1. The network started successfully (check for "started network" in output) 2. Nodes are still running (`ps aux | grep avalanchego`) 3. 
You're looking in the correct network directory ### Command Not Found **Error: `tmpnetctl: command not found`** Solution: ```bash # Use full path ./bin/tmpnetctl --help # Or add to PATH export PATH=$PATH:$(pwd)/bin tmpnetctl --help # Or use direnv direnv allow tmpnetctl --help ``` ### Clean Up Everything To remove all networks and start fresh: ```bash # Stop any running networks export TMPNET_NETWORK_DIR=~/.tmpnet/networks/latest tmpnetctl stop-network # Remove all tmpnet data (optional) rm -rf ~/.tmpnet/networks ``` ## Next Steps Now that you have a running network, learn how to: Deploy and test your custom Virtual Machine Create and test custom subnets Choose between local and Kubernetes runtimes Set up metrics and log collection # Troubleshooting Runtime Issues (/docs/tooling/tmpnet/troubleshooting-runtime) --- title: Troubleshooting Runtime Issues description: Diagnose and resolve common tmpnet runtime issues for local processes and Kubernetes deployments --- This guide helps you diagnose and resolve common issues with tmpnet's different runtime environments. Issues are organized by runtime type for quick reference. ## Local Process Runtime Issues ### Port Conflicts **Symptom:** Error messages like "address already in use" or "bind: address already in use" when starting a network. **Cause:** A previous network is still running, or another application is using the ports. **Solution:** ```bash # Check for orphaned avalanchego processes ps aux | grep avalanchego # Kill any orphaned processes pkill -f avalanchego # Verify ports are free lsof -i :9650-9660 ``` **Prevention:** Always use dynamic port allocation by setting ports to "0": ```go network.DefaultFlags = tmpnet.FlagsMap{ "http-port": "0", // Let OS assign available port "staking-port": "0", // Let OS assign available port } ``` Avoid hardcoding port numbers unless you have a specific reason. Dynamic ports prevent conflicts when running multiple networks or tests concurrently. 
### Process Not Stopping **Symptom:** After calling `network.Stop()`, avalanchego processes remain running in the background. **Cause:** Process termination may fail silently, or cleanup may not complete properly. **Solution:** ```bash # Find all avalanchego processes ps aux | grep avalanchego # Try graceful termination first pkill -TERM -f avalanchego sleep 5 # If processes still running, force kill as last resort pkill -9 -f avalanchego # Clean up temporary directories if needed # First verify which network you want to delete ls -lt ~/.tmpnet/networks/ # Then delete the specific network directory rm -rf ~/.tmpnet/networks/20250312-143052.123456 ``` Use `pkill -9` (SIGKILL) only as a last resort after graceful termination fails. SIGKILL doesn't allow cleanup and can leave the database in an inconsistent state. **Prevention:** Always use context with timeout for Stop operations: ```go ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second) defer cancel() if err := network.Stop(ctx); err != nil { // Log error but continue cleanup log.Printf("Failed to stop network cleanly: %v", err) } ``` ### Binary Not Found **Symptom:** Error "avalanchego not found" or "executable file not found in $PATH" when starting nodes. **Cause:** The avalanchego binary path is incorrect or not specified. **Solution:** ```bash # Verify the binary exists ls -lh /path/to/avalanchego # Use absolute path when configuring export AVALANCHEGO_PATH="$(pwd)/bin/avalanchego" # Or specify in code runtimeCfg := &tmpnet.ProcessRuntimeConfig{ AvalancheGoPath: "/absolute/path/to/avalanchego", } ``` **Verification:** ```bash # Test the binary works /path/to/avalanchego --version # Should output version information ``` When using relative paths, ensure they resolve correctly from your test working directory. Absolute paths are more reliable for test automation. ### Logs Location **Where to find logs:** Node logs are stored in the network directory under each node's subdirectory. 
```bash # Find the latest network ls -lt ~/.tmpnet/networks/ # Use the 'latest' symlink tail -f ~/.tmpnet/networks/latest/NodeID-*/logs/main.log # Or specify the timestamp directory tail -f ~/.tmpnet/networks/20250312-143052.123456/NodeID-7Xhw2mX5xVHr1ANraYiTgjuB8Jqdbj8/logs/main.log ``` **Useful log commands:** ```bash # View all node logs simultaneously tail -f ~/.tmpnet/networks/latest/NodeID-*/logs/main.log # Search for errors across all nodes grep -r "ERROR" ~/.tmpnet/networks/latest/*/logs/ # Monitor a specific node export NODE_ID="NodeID-7Xhw2mX5xVHr1ANraYiTgjuB8Jqdbj8" tail -f ~/.tmpnet/networks/latest/$NODE_ID/logs/main.log ``` ## Kubernetes Runtime Issues ### Pod Stuck in Pending **Symptom:** Node pods remain in "Pending" state and never start. **Common causes:** - Insufficient cluster resources (CPU/memory) - Node selector constraints not met - Storage class unavailable - Image pull errors (see below) **Diagnosis:** ```bash # Check pod status details kubectl describe pod avalanchego-node-0 -n tmpnet # Look for events section kubectl get events -n tmpnet --sort-by='.lastTimestamp' # Check node resources kubectl top nodes ``` **Solutions:** ```bash # If resource limits are too high, adjust them kubectl edit statefulset avalanchego -n tmpnet # Verify your cluster has available nodes kubectl get nodes # Check for node taints kubectl describe nodes | grep -i taint ``` ### Image Pull Errors **Symptom:** Pod status shows "ImagePullBackOff" or "ErrImagePull". **Cause:** Cannot pull the Docker image from the registry. 
**Diagnosis:**

```bash
# Check image pull status
kubectl describe pod avalanchego-node-0 -n tmpnet | grep -A 5 "Events:"

# Verify image name
kubectl get pod avalanchego-node-0 -n tmpnet -o jsonpath='{.spec.containers[0].image}'
```

**Solutions:**

```bash
# Verify image exists in registry
docker pull avaplatform/avalanchego:latest

# If using private registry, check image pull secrets
kubectl get secrets -n tmpnet

# Create image pull secret if needed
kubectl create secret docker-registry regcred \
  --docker-server=<registry> \
  --docker-username=<username> \
  --docker-password=<password> \
  -n tmpnet
```

**Alternative:** Use a local image with kind:

```bash
# Load image into kind cluster
kind load docker-image avaplatform/avalanchego:latest --name tmpnet-cluster
```

### Ingress Not Working

**Symptom:** Cannot reach node APIs through ingress endpoints, connection refused or timeouts.

**Cause:** Ingress controller not installed, misconfigured, or ingress rules not applied.

**Diagnosis:**

```bash
# Check if ingress controller is running
kubectl get pods -n ingress-nginx

# Verify ingress resource exists
kubectl get ingress -n tmpnet

# Check ingress details
kubectl describe ingress avalanchego-ingress -n tmpnet

# Test service directly (bypassing ingress)
kubectl port-forward svc/avalanchego-node-0 9650:9650 -n tmpnet
curl http://localhost:9650/ext/health
```

**Solutions:**

```bash
# Install ingress controller if missing
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml

# Verify ingress host configuration
kubectl get ingress -n tmpnet -o yaml | grep host:

# Check service endpoints
kubectl get endpoints -n tmpnet
```

For kind clusters, ensure you created the cluster with `extraPortMappings` to expose ports 80/443. See the [kind ingress documentation](https://kind.sigs.k8s.io/docs/user/ingress/).

### StatefulSet Not Updating

**Symptom:** After updating the StatefulSet (e.g., changing image version), pods still run the old image.
**Cause:** StatefulSet update strategy is set to `OnDelete` by default, requiring manual pod deletion. **Solution:** ```bash # Check update strategy kubectl get statefulset avalanchego -n tmpnet -o jsonpath='{.spec.updateStrategy}' # Manually delete pods to trigger update kubectl delete pod avalanchego-node-0 -n tmpnet # StatefulSet will recreate with new image # Or delete all pods kubectl delete pods -l app=avalanchego -n tmpnet ``` **Change to rolling updates:** ```bash kubectl patch statefulset avalanchego -n tmpnet -p '{"spec":{"updateStrategy":{"type":"RollingUpdate"}}}' ``` ### Persistent Volume Issues **Symptom:** Pod cannot start with error "FailedMount" or "PVC not bound". **Cause:** Persistent Volume Claims (PVCs) cannot be provisioned or bound. **Diagnosis:** ```bash # Check PVC status kubectl get pvc -n tmpnet # Should show "Bound" status # If "Pending", check details kubectl describe pvc data-avalanchego-node-0 -n tmpnet # Verify storage class exists kubectl get storageclass ``` **Solutions:** ```bash # If using kind or minikube, ensure default storage class exists kubectl get storageclass # For kind, standard storage class should be available by default # For custom clusters, install a storage provisioner # Delete stuck PVCs if needed (will delete data!) kubectl delete pvc data-avalanchego-node-0 -n tmpnet ``` ## General Runtime Issues ### Health Check Failures **Symptom:** Node reports as unhealthy or `IsHealthy()` returns false in tests. **Cause:** Node may still be bootstrapping, or there's a configuration issue. **Health check endpoint:** `GET /ext/health/liveness` on the HTTP port. 
**Diagnosis:**

```bash
# Check health endpoint directly
curl http://localhost:9650/ext/health/liveness

# A healthy node responds with "healthy": true, e.g.:
# {"checks":{"network":{...}},"healthy":true}

# Check if node is still bootstrapping (the info API is JSON-RPC over POST)
curl -s -X POST -H 'content-type: application/json' \
  --data '{"jsonrpc":"2.0","id":1,"method":"info.isBootstrapped","params":{"chain":"P"}}' \
  http://localhost:9650/ext/info | jq '.result.isBootstrapped'
```

**Solutions:**

Wait longer - bootstrapping can take time:

```go
// Use generous timeout for health checks
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
defer cancel()

err := node.WaitForHealthy(ctx)
if err != nil {
    return fmt.Errorf("node failed to become healthy: %w", err)
}
```

Check logs for errors:

```bash
# Look for bootstrap progress
tail -f ~/.tmpnet/networks/latest/NodeID-*/logs/main.log | grep -i "bootstrap"

# Check for errors
tail -f ~/.tmpnet/networks/latest/NodeID-*/logs/main.log | grep -i "error"
```

The first node in a network typically takes longer to start because it must wait for staking to be enabled. Subsequent nodes bootstrap from the first node.

### Monitoring Not Working

**Symptom:** No metrics or logs appear in Prometheus/Grafana/Loki dashboards.
**Diagnosis:** ```bash # Check if collectors are running ps aux | grep prometheus ps aux | grep promtail # Verify environment variables echo $PROMETHEUS_URL echo $LOKI_URL # Check service discovery configs exist ls -la ~/.tmpnet/prometheus/file_sd_configs/ ls -la ~/.tmpnet/promtail/file_sd_configs/ ``` **Solutions:** ```bash # Start collectors if not running tmpnetctl start-metrics-collector tmpnetctl start-logs-collector # Verify binaries are in PATH which prometheus which promtail # If using nix, ensure development shell is active nix develop # Check collector logs tail -f ~/.tmpnet/prometheus/*.log tail -f ~/.tmpnet/promtail/*.log ``` **Verify metrics are being collected:** ```bash # Query Prometheus directly curl -s "${PROMETHEUS_URL}/api/v1/query?query=up" \ -u "${PROMETHEUS_USERNAME}:${PROMETHEUS_PASSWORD}" \ | jq ``` ## Performance Troubleshooting ### Slow Network Bootstrap **Symptom:** Network takes longer than 5 minutes to bootstrap. **Common causes:** - Network too large (many nodes/subnets) - Insufficient system resources - Debug logging enabled **Solutions:** Reduce network size for testing: ```go // Use fewer nodes for faster tests network.Nodes = tmpnet.NewNodesOrPanic(3) // Instead of 5+ ``` Reduce logging verbosity: ```go network.DefaultFlags = tmpnet.FlagsMap{ "log-level": "info", // Instead of "debug" or "trace" } ``` Increase system resources: ```bash # Check current resource usage top df -h ~/.tmpnet/ # Clean up old networks rm -rf ~/.tmpnet/networks/202* ``` ### High Memory Usage **Symptom:** avalanchego processes consume excessive memory, system becomes slow. 
**Diagnosis:**

```bash
# Check memory usage per process
ps aux | grep avalanchego | awk '{print $2, $4, $11}'

# Monitor over time
watch -n 5 'ps aux | grep avalanchego'
```

**Solutions:**

Limit database size:

```go
network.DefaultFlags = tmpnet.FlagsMap{
    "db-type":            "memdb", // Use in-memory DB for tests
    "pruning-enabled":    "true",
    "state-sync-enabled": "false", // Disable if not needed
}
```

Stop old networks:

```bash
# Stop all running networks
for dir in ~/.tmpnet/networks/*/; do
  export TMPNET_NETWORK_DIR="$dir"
  tmpnetctl stop-network
done
```

## Debugging Techniques

### Enable Verbose Logging

Increase log verbosity to diagnose issues:

```go
node.Flags = tmpnet.FlagsMap{
    "log-level":         "trace", // Most verbose
    "log-display-level": "trace",
}
```

### Capture Process Output

Redirect process output to see initialization errors:

```bash
# Run avalanchego manually with the same config (substitute a real node ID;
# a glob like NodeID-* is not expanded inside the --config-file flag)
/path/to/avalanchego \
  --config-file=~/.tmpnet/networks/latest/NodeID-7Xhw2mX5xVHr1ANraYiTgjuB8Jqdbj8/flags.json \
  2>&1 | tee avalanchego-debug.log
```

### Network State Inspection

Inspect the network state directory:

```bash
# View network configuration
cat ~/.tmpnet/networks/latest/config.json | jq

# View node flags
cat ~/.tmpnet/networks/latest/NodeID-*/flags.json | jq

# Check process status
cat ~/.tmpnet/networks/latest/NodeID-*/process.json | jq
```

### Test Individual Components

Test components in isolation:

```go
// Test just node health
func TestNodeHealth(t *testing.T) {
    node := network.Nodes[0]

    ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
    defer cancel()

    err := node.WaitForHealthy(ctx)
    require.NoError(t, err)
}
```

## Getting Help

If you're still experiencing issues:

1. **Check logs** - Always check node logs first for error messages
2. **Search GitHub issues** - Check [avalanchego issues](https://github.com/ava-labs/avalanchego/issues) for similar problems
3. **Ask the community** - Post in [Avalanche Discord](https://chat.avax.network) #developers channel
4.
**Include details** - Share error messages, logs, and your configuration **Information to include when asking for help:** - tmpnet version: `go list -m github.com/ava-labs/avalanchego` - Runtime type: Local process or Kubernetes - Operating system and version - Error messages and relevant log excerpts - Network configuration (redact sensitive data) - Steps to reproduce the issue ## Next Steps Complete configuration options Set up metrics and logging Start with the basics # Get metrics for EVM chains (/docs/api-reference/metrics-api/chain-metrics/getEvmChainMetrics) --- title: Get metrics for EVM chains full: true _openapi: method: GET route: /v2/chains/{chainId}/metrics/{metric} toc: [] structuredData: headings: [] contents: - content: >- EVM chain metrics are available for all Avalanche L1s on _Mainnet_ and _Fuji_ (testnet). You can request metrics by EVM chain ID. See the `/chains` endpoint for all supported chains. All metrics are updated several times every hour. Each metric data point has a `value` and `timestamp` (Unix timestamp in seconds). All metric values include data within the duration of the associated timestamp plus the requested `timeInterval`. All timestamps are fixed to the hour. When requesting a timeInterval of **day**, **week**, or **month**, the timestamp will be 0:00 UTC of the day, Monday of the week, or first day of the month, respectively. The latest data point in any response may change on each update. ### Metrics activeAddresses: The number of distinct addresses seen within the selected `timeInterval` starting at the timestamp. Addresses counted are those that appear in the “from” and “to” fields of a transaction or ERC20/ERC721/ERC1155 transfer log event. activeSenders: This metric follows the same structure as activeAddresses, but instead only counts addresses that appear in the “from” field of the respective transaction or transfer log event. 
cumulativeTxCount: The cumulative transaction count from genesis up until 24 hours after the timestamp. This aggregation can be considered a “rolling sum” of the transaction count metric (txCount). Only `timeInterval=day` supported. cumulativeAddresses: The cumulative count of unique addresses from genesis up until 24 hours after the timestamp. Addresses counted are those that appear in the “from” and “to” fields of a transaction or ERC20/ERC721/ERC1155 transfer log event. Only `timeInterval=day` supported. cumulativeContracts: The cumulative count of contracts created from genesis up until the timestamp. Contracts are counted by looking for the CREATE, CREATE2, and CREATE3 call types in all transaction traces (aka internal transactions). Only `timeInterval=day` supported. cumulativeDeployers: The cumulative count of unique contract deployers from genesis up until 24 hours after the timestamp. Deployers counted are those that appear in the “from” field of transaction traces with the CREATE, CREATE2, and CREATE3 call types. Only `timeInterval=day` supported. contracts: The count of contracts created within the requested timeInterval starting at the timestamp. Contracts are counted by looking for the CREATE, CREATE2, and CREATE3 call types in all transaction traces (aka internal transactions). Only `timeInterval=day` supported. deployers: The count of unique deployers within the requested timeInterval starting at the timestamp. Deployers counted are those that appear in the “from” field of transaction traces with the CREATE, CREATE2, and CREATE3 call types. Only `timeInterval=day` supported. gasUsed: The amount of gas used by transactions within the requested timeInterval starting at the timestamp. txCount: The amount of transactions within the requested timeInterval starting at the timestamp. avgGps: The average Gas used Per Second (GPS) within the day beginning at the timestamp. 
The average is calculated by taking the sum of gas used by all blocks within the day and dividing it by the time interval between the last block of the previous day and the last block of the day that begins at the timestamp. Only `timeInterval=day` supported. maxGps: The max Gas used Per Second (GPS) measured within the day beginning at the timestamp. Each GPS data point is calculated using the gas used in a single block divided by the time since the last block. Only `timeInterval=day` supported. avgTps: The average Transactions Per Second (TPS) within the day beginning at the timestamp. The average is calculated by taking the sum of transactions within the day and dividing it by the time interval between the last block of the previous day and the last block of the day that begins at the timestamp. Only `timeInterval=day` supported. maxTps: The max Transactions Per Second (TPS) measured within the day beginning at the timestamp. Each TPS data point is calculated by taking the number of transactions in a single block and dividing it by the time since the last block. Only `timeInterval=day` supported. avgGasPrice: The average gas price within the day beginning at the timestamp. The gas price used is the price reported in transaction receipts. Only `timeInterval=day` supported. maxGasPrice: The max gas price seen within the day beginning at the timestamp. The gas price used is the price reported in transaction receipts. Only `timeInterval=day` supported. feesPaid: The sum of transaction fees paid within the day beginning at the timestamp. The fee is calculated as the gas used multiplied by the gas price as reported in all transaction receipts. Only `timeInterval=day` supported. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} EVM chain metrics are available for all Avalanche L1s on _Mainnet_ and _Fuji_ (testnet). You can request metrics by EVM chain ID. 
See the `/chains` endpoint for all supported chains. All metrics are updated several times every hour. Each metric data point has a `value` and `timestamp` (Unix timestamp in seconds). All metric values include data within the duration of the associated timestamp plus the requested `timeInterval`. All timestamps are fixed to the hour. When requesting a timeInterval of **day**, **week**, or **month**, the timestamp will be 0:00 UTC of the day, Monday of the week, or first day of the month, respectively. The latest data point in any response may change on each update. ### Metrics activeAddresses: The number of distinct addresses seen within the selected `timeInterval` starting at the timestamp. Addresses counted are those that appear in the “from” and “to” fields of a transaction or ERC20/ERC721/ERC1155 transfer log event. activeSenders: This metric follows the same structure as activeAddresses, but instead only counts addresses that appear in the “from” field of the respective transaction or transfer log event. cumulativeTxCount: The cumulative transaction count from genesis up until 24 hours after the timestamp. This aggregation can be considered a “rolling sum” of the transaction count metric (txCount). Only `timeInterval=day` supported. cumulativeAddresses: The cumulative count of unique addresses from genesis up until 24 hours after the timestamp. Addresses counted are those that appear in the “from” and “to” fields of a transaction or ERC20/ERC721/ERC1155 transfer log event. Only `timeInterval=day` supported. cumulativeContracts: The cumulative count of contracts created from genesis up until the timestamp. Contracts are counted by looking for the CREATE, CREATE2, and CREATE3 call types in all transaction traces (aka internal transactions). Only `timeInterval=day` supported. cumulativeDeployers: The cumulative count of unique contract deployers from genesis up until 24 hours after the timestamp. 
Deployers counted are those that appear in the “from” field of transaction traces with the CREATE, CREATE2, and CREATE3 call types. Only `timeInterval=day` supported. contracts: The count of contracts created within the requested timeInterval starting at the timestamp. Contracts are counted by looking for the CREATE, CREATE2, and CREATE3 call types in all transaction traces (aka internal transactions). Only `timeInterval=day` supported. deployers: The count of unique deployers within the requested timeInterval starting at the timestamp. Deployers counted are those that appear in the “from” field of transaction traces with the CREATE, CREATE2, and CREATE3 call types. Only `timeInterval=day` supported. gasUsed: The amount of gas used by transactions within the requested timeInterval starting at the timestamp. txCount: The amount of transactions within the requested timeInterval starting at the timestamp. avgGps: The average Gas used Per Second (GPS) within the day beginning at the timestamp. The average is calculated by taking the sum of gas used by all blocks within the day and dividing it by the time interval between the last block of the previous day and the last block of the day that begins at the timestamp. Only `timeInterval=day` supported. maxGps: The max Gas used Per Second (GPS) measured within the day beginning at the timestamp. Each GPS data point is calculated using the gas used in a single block divided by the time since the last block. Only `timeInterval=day` supported. avgTps: The average Transactions Per Second (TPS) within the day beginning at the timestamp. The average is calculated by taking the sum of transactions within the day and dividing it by the time interval between the last block of the previous day and the last block of the day that begins at the timestamp. Only `timeInterval=day` supported. maxTps: The max Transactions Per Second (TPS) measured within the day beginning at the timestamp. 
Each TPS data point is calculated by taking the number of transactions in a single block and dividing it by the time since the last block. Only `timeInterval=day` supported. avgGasPrice: The average gas price within the day beginning at the timestamp. The gas price used is the price reported in transaction receipts. Only `timeInterval=day` supported. maxGasPrice: The max gas price seen within the day beginning at the timestamp. The gas price used is the price reported in transaction receipts. Only `timeInterval=day` supported. feesPaid: The sum of transaction fees paid within the day beginning at the timestamp. The fee is calculated as the gas used multiplied by the gas price as reported in all transaction receipts. Only `timeInterval=day` supported. # Get rolling window metrics for EVM chains (/docs/api-reference/metrics-api/chain-metrics/getEvmChainRollingWindowMetrics) --- title: Get rolling window metrics for EVM chains full: true _openapi: method: GET route: /v2/chains/{chainId}/rollingWindowMetrics/{metric} toc: [] structuredData: headings: [] contents: - content: >- Gets the rolling window metrics for an EVM chain for the last hour, day, week, month, 90 days, year, and all time. Active addresses/active senders only support last hour, day, and week. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Gets the rolling window metrics for an EVM chain for the last hour, day, week, month, 90 days, year, and all time. Active addresses/active senders only support last hour, day, and week. # Get ICM summary metrics (/docs/api-reference/metrics-api/chain-metrics/getICMSummary) --- title: Get ICM summary metrics full: true _openapi: method: GET route: /v2/icm/summary toc: [] structuredData: headings: [] contents: - content: >- Get rolling window ICM message counts (last hour, day, month, 90 days, year, all time). 
Use filters (`srcBlockchainId`, `destBlockchainId`, `network`) to select data, and the `groupBy` parameter for aggregation level. ### Examples: - **Specific pair**: `?srcBlockchainId=...&destBlockchainId=...` - **From one source (aggregated)**: `?srcBlockchainId=...` - **From one source (by destination)**: `?srcBlockchainId=...&groupBy=destBlockchainId` - **To one destination (aggregated)**: `?destBlockchainId=...` - **To one destination (by source)**: `?destBlockchainId=...&groupBy=srcBlockchainId` - **Network total**: `?network=mainnet` - **Network breakdown**: `?network=mainnet&groupBy=srcBlockchainId,destBlockchainId`. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Get rolling window ICM message counts (last hour, day, month, 90 days, year, all time). Use filters (`srcBlockchainId`, `destBlockchainId`, `network`) to select data, and the `groupBy` parameter for aggregation level. ### Examples: - **Specific pair**: `?srcBlockchainId=...&destBlockchainId=...` - **From one source (aggregated)**: `?srcBlockchainId=...` - **From one source (by destination)**: `?srcBlockchainId=...&groupBy=destBlockchainId` - **To one destination (aggregated)**: `?destBlockchainId=...` - **To one destination (by source)**: `?destBlockchainId=...&groupBy=srcBlockchainId` - **Network total**: `?network=mainnet` - **Network breakdown**: `?network=mainnet&groupBy=srcBlockchainId,destBlockchainId`. # Get ICM timeseries metrics (/docs/api-reference/metrics-api/chain-metrics/getICMTimeseries) --- title: Get ICM timeseries metrics full: true _openapi: method: GET route: /v2/icm/timeseries toc: [] structuredData: headings: [] contents: - content: >- Get historical ICM message counts with flexible grouping. Use filters (`srcBlockchainId`, `destBlockchainId`, `network`) to select data, and the `groupBy` parameter for aggregation level. 
### Examples: - **Specific pair**: `?srcBlockchainId=...&destBlockchainId=...` - **From one source (aggregated)**: `?srcBlockchainId=...` - **From one source (by destination)**: `?srcBlockchainId=...&groupBy=destBlockchainId` - **To one destination (aggregated)**: `?destBlockchainId=...` - **To one destination (by source)**: `?destBlockchainId=...&groupBy=srcBlockchainId` - **Network total**: `?network=mainnet` - **Network breakdown**: `?network=mainnet&groupBy=srcBlockchainId,destBlockchainId`. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Get historical ICM message counts with flexible grouping. Use filters (`srcBlockchainId`, `destBlockchainId`, `network`) to select data, and the `groupBy` parameter for aggregation level. ### Examples: - **Specific pair**: `?srcBlockchainId=...&destBlockchainId=...` - **From one source (aggregated)**: `?srcBlockchainId=...` - **From one source (by destination)**: `?srcBlockchainId=...&groupBy=destBlockchainId` - **To one destination (aggregated)**: `?destBlockchainId=...` - **To one destination (by source)**: `?destBlockchainId=...&groupBy=srcBlockchainId` - **Network total**: `?network=mainnet` - **Network breakdown**: `?network=mainnet&groupBy=srcBlockchainId,destBlockchainId`. # Get staking metrics for a given subnet (/docs/api-reference/metrics-api/chain-metrics/getStakingMetrics) --- title: Get staking metrics for a given subnet full: true _openapi: method: GET route: /v2/networks/{network}/metrics/{metric} toc: [] structuredData: headings: [] contents: - content: Gets staking metrics for a given subnet. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Gets staking metrics for a given subnet. 
# Get chain information for supported blockchain (/docs/api-reference/metrics-api/evm-chains/getChain) --- title: Get chain information for supported blockchain full: true _openapi: method: GET route: /v2/chains/{chainId} toc: [] structuredData: headings: [] contents: - content: Get chain information for Metrics API supported blockchain. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Get chain information for Metrics API supported blockchain. # Get a list of supported blockchains (/docs/api-reference/metrics-api/evm-chains/listChains) --- title: Get a list of supported blockchains full: true _openapi: method: GET route: /v2/chains toc: [] structuredData: headings: [] contents: - content: >- Get a list of Metrics API supported blockchains. This endpoint is paginated and supports a maximum page size of 10000. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Get a list of Metrics API supported blockchains. This endpoint is paginated and supports a maximum page size of 10000. # Get the health of the service (/docs/api-reference/metrics-api/health-check/metrics-health-check) --- title: Get the health of the service full: true _openapi: method: GET route: /v2/health-check toc: [] structuredData: headings: [] contents: - content: Check the health of the service. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Check the health of the service. # Get the liveliness of the service (/docs/api-reference/metrics-api/health-check/metrics-live-check) --- title: Get the liveliness of the service full: true _openapi: method: GET route: /v2/live-check toc: [] structuredData: headings: [] contents: - content: Check the liveliness of the service. 
--- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Check the liveliness of the service. # Get metric values with given nodeId and timestamp range (/docs/api-reference/metrics-api/l1-validator-metrics/getMetricsByNodeId) --- title: Get metric values with given nodeId and timestamp range full: true _openapi: method: GET route: /v2/validator/{nodeId}/metrics/{metric} toc: [] structuredData: headings: [] contents: - content: >- Get given metric values for a given nodeId with or without a timestamp range. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Get given metric values for a given nodeId with or without a timestamp range. # Get metric values with given subnetId and timestamp range (/docs/api-reference/metrics-api/l1-validator-metrics/getMetricsBySubnetId) --- title: Get metric values with given subnetId and timestamp range full: true _openapi: method: GET route: /v2/subnet/{subnetId}/metrics/{metric} toc: [] structuredData: headings: [] contents: - content: >- Get given metric values for a given subnetId with or without a timestamp range. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Get given metric values for a given subnetId with or without a timestamp range. # Get metric values with given validationId and timestamp range (/docs/api-reference/metrics-api/l1-validator-metrics/getMetricsByValidationId) --- title: Get metric values with given validationId and timestamp range full: true _openapi: method: GET route: /v2/validation/{l1ValidationId}/metrics/{metric} toc: [] structuredData: headings: [] contents: - content: >- Get given metric values for a given validationId with or without a timestamp range. --- {/* This file was generated by Fumadocs. 
Do not edit this file directly. Any changes should be made by running the generation command again. */} Get given metric values for a given validationId with or without a timestamp range. # Get given metric for all validators (/docs/api-reference/metrics-api/l1-validator-metrics/getTotalL1ValidatorMetrics) --- title: Get given metric for all validators full: true _openapi: method: GET route: /v2/validators/metrics/{metric} toc: [] structuredData: headings: [] contents: - content: Get given metric's value for all validators. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Get given metric's value for all validators. # Composite query (/docs/api-reference/metrics-api/looking-glass/compositeQueryV2) --- title: Composite query full: true _openapi: method: POST route: /v2/lookingGlass/compositeQuery toc: [] structuredData: headings: [] contents: - content: Composite query to get list of addresses from multiple subqueries. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Composite query to get list of addresses from multiple subqueries. # Get addresses by BTCb bridged balance (/docs/api-reference/metrics-api/looking-glass/getAddressesByBtcbBridged) --- title: Get addresses by BTCb bridged balance full: true _openapi: method: GET route: /v2/chains/43114/btcb/bridged:getAddresses toc: [] structuredData: headings: [] contents: - content: >- Get list of addresses and their net bridged amounts that have bridged more than a certain threshold. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Get list of addresses and their net bridged amounts that have bridged more than a certain threshold. 
# Get addresses running validators during a given time frame (/docs/api-reference/metrics-api/looking-glass/getValidatorsByDateRange) --- title: Get addresses running validators during a given time frame full: true _openapi: method: GET route: /v2/subnets/{subnetId}/validators:getAddresses toc: [] structuredData: headings: [] contents: - content: >- Get list of addresses and AddValidatorTx timestamps set to receive awards for validation periods during the specified time frame. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Get list of addresses and AddValidatorTx timestamps set to receive awards for validation periods during the specified time frame.

# Track ERC-20 Transfers (/docs/api-reference/webhook-api/tutorials/erc20transfer)

---
title: Track ERC-20 Transfers
description: How to track ERC-20 transfers with the Webhooks API
icon: Coins
---

In a smart contract, events serve as notifications of specific occurrences, such as transfers or changes in ownership. Each event is uniquely identified by its event signature, the Keccak-256 hash of the event name and its input argument types. For example, the signature of the ERC-20 transfer event is the hash of `Transfer(address,address,uint256)`. To compute this hash yourself, you can use an online Keccak-256 converter; the hexadecimal representation of `Transfer(address,address,uint256)` is `0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef`. For a full list of signatures, check [https://www.4byte.directory/event-signatures/](https://www.4byte.directory/event-signatures/). Note that the `Transfer` events of the ERC-20 and ERC-721 standards produce the same signature.
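The reason is that only the event name and the parameter types go into the signature string; parameter names and the `indexed` keyword are dropped. A small illustrative helper (not part of any SDK) that canonicalizes a Solidity event declaration into the string that gets hashed:

```javascript
// Illustrative helper: reduce a Solidity event declaration to the canonical
// signature string whose Keccak-256 hash becomes topic0. Parameter names and
// the `indexed` keyword are not part of the signature.
function canonicalSignature(declaration) {
  const match = declaration.match(/event\s+(\w+)\s*\(([^)]*)\)/);
  if (!match) throw new Error('not an event declaration');
  const [, name, params] = match;
  const types = params
    .split(',')
    .map(p => p.trim().split(/\s+/)[0]) // keep only the type token
    .filter(Boolean);
  return `${name}(${types.join(',')})`;
}

const erc20 = 'event Transfer(address indexed _from, address indexed _to, uint256 _value);';
const erc721 = 'event Transfer(address indexed _from, address indexed _to, uint256 indexed _tokenId);';

console.log(canonicalSignature(erc20));  // Transfer(address,address,uint256)
console.log(canonicalSignature(erc721)); // Transfer(address,address,uint256)
```

Both declarations reduce to `Transfer(address,address,uint256)`, which is why ERC-20 and ERC-721 transfers share the same `topic0`.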
Here is the `Transfer` event prototype in each standard:

* **ERC-20**:\
  `event Transfer(address indexed _from, address indexed _to, uint256 _value);`
* **ERC-721**:\
  `event Transfer(address indexed _from, address indexed _to, uint256 indexed _tokenId);`

These two declarations hash to the same signature, so ERC-20 and ERC-721 `Transfer` events share the same `topic0`.

The example below illustrates how to set up filtering to receive transfer events. In this example, we will monitor `Transfer` events for the Dokyo NFT collection on the C-Chain. If we go to any block explorer, select a USDT transaction, and look at `Topic 0` from the transfer event, we can get the signature. With the event signature, we can create the webhook as follows:

```bash
curl --location 'https://glacier-api.avax.network/v1/webhooks' \
--header 'x-glacier-api-key: ' \
--header 'Content-Type: application/json' \
--data '{
  "url": "https://webhook.site/961a0d1b-a7ed-42fd-9eab-d7e4c7eb1227",
  "chainId": "43114",
  "eventType": "address_activity",
  "metadata": {
    "addresses": ["0x54C800d2331E10467143911aabCa092d68bF4166"],
    "includeInternalTxs": false,
    "includeLogs": true,
    "eventSignatures": [
      "0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef"
    ]
  },
  "name": "Dokyo NFT",
  "description": "Dokyo NFT"
}'
```

Whenever an NFT is transferred, you'll receive a payload like this:

```json
{
  "webhookId": "6d1bd383-aa8d-47b5-b793-da6d8a115fde",
  "eventType": "address_activity",
  "messageId": "6a364b45-47a2-45af-97c3-0ddc2e87ad36",
  "event": {
    "transaction": {
      "blockHash": "0x30da6a8887bf2c26b7921a1501abd6e697529427e4a4f52a9d4fc163a2344b46",
      "blockNumber": "42649820",
      "from": "0x0000333883f313AD709f583D0A3d2E18a44EF29b",
      "gas": "245004",
      "gasPrice": "30000000000",
      "maxFeePerGas": "30000000000",
      "maxPriorityFeePerGas": "30000000000",
      "txHash": "0x2f1a9e2b8719536997596d878f21b70f2ce0901287aa3480d923e7ffc68ac3bc",
      "txStatus": "1",
      "input": "0xafde1b3c0000000000000000000000000…0000000000000000000000000000000000",
      "nonce": "898",
      "to": "0x398baa6ffc99126671ab6be565856105a6118a40",
      "transactionIndex": 0,
      "value": "0",
      "type": 0,
      "chainId": "43114",
      "receiptCumulativeGasUsed": "163336",
      "receiptGasUsed": "163336",
      "receiptEffectiveGasPrice": "30000000000",
      "receiptRoot": "0xdf05c214cee5ff908744e13a3b2879fdba01c9c7f95073670cb23ed735126178",
      "contractAddress": "0x0000000000000000000000000000000000000000",
      "blockTimestamp": 1709930290
    },
    "logs": [
      {
        "address": "0xB31f66AA3C1e785363F0875A1B74E27b85FD66c7",
        "topic0": "0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef",
        "topic1": "0x0000000000000000000000008cdd7a500f21455361cf1c2e01c0525ce92481b2",
        "topic2": "0x0000000000000000000000000000333883f313ad709f583d0a3d2e18a44ef29b",
        "topic3": null,
        "data": "0x000000000000000000000000000000000000000000000001a6c5c6f4f4f6d060",
        "transactionIndex": 0,
        "logIndex": 0,
        "removed": false
      },
      {
        "address": "0x54C800d2331E10467143911aabCa092d68bF4166",
        "topic0": "0x8c5be1e5ebec7d5bd14f71427d1e84f3dd0314c0f7b2291e5b200ac8c7c3b925",
        "topic1": "0x0000000000000000000000000000333883f313ad709f583d0a3d2e18a44ef29b",
        "topic2": "0x0000000000000000000000000000000000000000000000000000000000000000",
        "topic3": "0x0000000000000000000000000000000000000000000000000000000000001350",
        "data": "0x",
        "transactionIndex": 0,
        "logIndex": 1,
        "removed": false
      },
      {
        "address": "0x54C800d2331E10467143911aabCa092d68bF4166",
        "topic0": "0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef",
        "topic1": "0x0000000000000000000000000000333883f313ad709f583d0a3d2e18a44ef29b",
        "topic2": "0x0000000000000000000000008cdd7a500f21455361cf1c2e01c0525ce92481b2",
        "topic3": "0x0000000000000000000000000000000000000000000000000000000000001350",
        "data": "0x",
        "transactionIndex": 0,
        "logIndex": 2,
        "removed": false
      },
      {
        "address": "0xB31f66AA3C1e785363F0875A1B74E27b85FD66c7",
        "topic0": "0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef",
        "topic1": "0x0000000000000000000000008cdd7a500f21455361cf1c2e01c0525ce92481b2",
        "topic2": "0x00000000000000000000000087f45335268512cc5593d435e61df4d75b07d2a2",
        "topic3": null,
        "data": "0x000000000000000000000000000000000000000000000000087498758a04efb0",
        "transactionIndex": 0,
        "logIndex": 3,
        "removed": false
      },
      {
        "address": "0xB31f66AA3C1e785363F0875A1B74E27b85FD66c7",
        "topic0": "0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef",
        "topic1": "0x0000000000000000000000008cdd7a500f21455361cf1c2e01c0525ce92481b2",
        "topic2": "0x000000000000000000000000610512654af4fa883bb727afdff2dd78b65342b7",
        "topic3": null,
        "data": "0x000000000000000000000000000000000000000000000000021d261d62813bec",
        "transactionIndex": 0,
        "logIndex": 4,
        "removed": false
      },
      {
        "address": "0x398BAa6FFc99126671Ab6be565856105a6118A40",
        "topic0": "0x50273fa02273cceea9cf085b42de5c8af60624140168bd71357db833535877af",
        "topic1": null,
        "topic2": null,
        "topic3": null,
        "data": "0x0000000000009911a89f400000000000000000000…0000010",
        "transactionIndex": 0,
        "logIndex": 5,
        "removed": false
      }
    ]
  }
}
```

# Monitoring multiple addresses (/docs/api-reference/webhook-api/tutorials/monitor-multiple-addresses)

---
title: Monitoring multiple addresses
description: How to monitor multiple addresses with the Webhooks API
icon: BookUser
---

A single webhook can monitor multiple addresses; you don't need to create one webhook per address. On the free plan, you can add up to 5 addresses per webhook; if you need more, you can upgrade your plan.
### Creating the webhook

Let's start by creating a new webhook to monitor all USDC and USDT activity:

```bash
curl --location 'https://glacier-api.avax.network/v1/webhooks' \
--header 'x-glacier-api-key: ' \
--header 'Content-Type: application/json' \
--data '{
  "url": "https://webhook.site/4eb31e6c-a088-4dcb-9a5d-e9341624b584",
  "chainId": "43114",
  "eventType": "address_activity",
  "includeInternalTxs": true,
  "includeLogs": true,
  "metadata": {
    "addresses": [
      "0x9702230A8Ea53601f5cD2dc00fDBc13d4dF4A8c7",
      "0xB97EF9Ef8734C71904D8002F8b6Bc66Dd9c48a6E"
    ]
  },
  "name": "Tokens",
  "description": "Track tokens"
}'
```

It returns the following:

```json
{
  "id": "401da7d9-d6d7-46c8-b431-72ff1e1543f4",
  "eventType": "address_activity",
  "chainId": "43114",
  "metadata": {
    "addresses": [
      "0x9702230A8Ea53601f5cD2dc00fDBc13d4dF4A8c7",
      "0xB97EF9Ef8734C71904D8002F8b6Bc66Dd9c48a6E"
    ]
  },
  "includeInternalTxs": true,
  "includeLogs": true,
  "url": "https://webhook.site/4eb31e6c-a088-4dcb-9a5d-e9341624b584",
  "status": "active",
  "createdAt": 1715621587726,
  "name": "Tokens",
  "description": "Track tokens"
}
```

### Adding addresses to monitor

With the webhook `id`, we can add more addresses. In this case, let's add the contract addresses for JOE and PNG:

```bash
curl --location --request PATCH 'https://glacier-api.avax.network/v1/webhooks/401da7d9-d6d7-46c8-b431-72ff1e1543f4/addresses' \
--header 'x-glacier-api-key: ' \
--header 'Content-Type: application/json' \
--data '{
  "addresses": [
    "0x6e84a6216eA6dACC71eE8E6b0a5B7322EEbC0fDd",
    "0x60781C2586D68229fde47564546784ab3fACA982"
  ]
}'
```

Following that, we will begin to receive events from the four smart contracts integrated into the webhook: USDC, USDT, JOE, and PNG.
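Once events start arriving, a receiver can tell the four tokens apart by the contract address that emitted each log. A minimal sketch (the address-to-label mapping is an assumption for illustration, built from the contracts registered above):

```javascript
// Sketch of a receiver that labels incoming logs by the emitting contract.
// The addresses are the four registered on the webhook above; the token
// labels are illustrative assumptions, not returned by the API.
const WATCHED = {
  '0x9702230a8ea53601f5cd2dc00fdbc13d4df4a8c7': 'USDT',
  '0xb97ef9ef8734c71904d8002f8b6bc66dd9c48a6e': 'USDC',
  '0x6e84a6216ea6dacc71ee8e6b0a5b7322eebc0fdd': 'JOE',
  '0x60781c2586d68229fde47564546784ab3faca982': 'PNG',
};

// Keep only logs emitted by watched contracts and attach a token label.
function labelLogs(logs) {
  return logs
    .map(log => ({ ...log, token: WATCHED[log.address.toLowerCase()] }))
    .filter(log => log.token !== undefined);
}
```

In an Express handler, `logs` would come from `event.logs` of the delivered payload, in the format shown in the previous tutorial.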
### Deleting addresses

To remove addresses, simply send an array:

```bash
curl --location --request DELETE 'https://glacier-api.avax.network/v1/webhooks/401da7d9-d6d7-46c8-b431-72ff1e1543f4/addresses' \
--header 'x-glacier-api-key: ' \
--header 'Content-Type: application/json' \
--data '{
  "addresses": [
    "0x6e84a6216eA6dACC71eE8E6b0a5B7322EEbC0fDd",
    "0x60781C2586D68229fde47564546784ab3fACA982"
  ]
}'
```

Next, click on **Configure Your Platform**, select **Web**, then **Custom Code**, set the site URL to `http://localhost:3000`, and enable both toggles for local development. This will generate a code snippet to add to your code. Download the OneSignal SDK files and copy them to the top-level root of your directory.

### Step 2 - Frontend Setup

In a real-world scenario, your architecture typically involves customers signing up for subscriptions within your web or mobile app. To ensure these notifications are sent out, your app needs to register with a push notification provider such as OneSignal. To maintain privacy and security, we'll use a hash of the wallet address as the `externalID` instead of directly sharing the address with OneSignal. This `externalID` is then mapped to an address in our database, so when our backend receives a webhook for a specific address, it can retrieve the corresponding `externalID` and send a push notification accordingly.

*OneSignal architecture*

For the sake of simplicity in our demonstration, we'll present a basic scenario where our frontend app retrieves the wallet address and registers it with OneSignal. Additionally, we'll simulate a database using an array within the code. Download the [sample code](https://github.com/javiertc/webhookdemo) and you'll see `client/index.html` with this content:

```html

<!-- Skeleton of client/index.html; the full file, including the OneSignal
     code snippet generated in the previous step, is in the sample repo. -->
<!DOCTYPE html>
<html>
  <head>
    <title>Avalanche push notifications</title>
  </head>
  <body>
    <button id="connect">Connect</button>
  </body>
</html>

```

Run the project using Node.js:

```bash
npm install express axios body-parser dotenv
node app.js
```

Open a Chrome tab and go to `http://localhost:3000`; you should see the demo page. Then click on **Connect** and accept receiving push notifications. If you are using macOS, check in **System Settings** > **Notifications** that you have enabled notifications for the browser. If everything runs correctly, your browser should be registered in OneSignal. To check, go to **Audience** > **Subscriptions** and verify that your browser is registered.

### Step 3 - Backend Setup

Now, let's configure the backend to manage webhook events and dispatch notifications based on the incoming data. Here's the step-by-step process:

1. **Transaction Initiation:** When someone starts a transaction with your wallet as the destination, the webhook detects the transaction and generates an event.
2. **Event Triggering:** The backend receives the event triggered by the transaction, containing the destination address.
3. **ExternalID Retrieval:** Using the received address, the backend retrieves the corresponding `externalID` associated with that wallet.
4. **Notification Dispatch:** The final step involves sending a notification through OneSignal, utilizing the retrieved `externalID`.

*OneSignal backend*

#### 3.1 - Use Ngrok to tunnel the traffic to localhost

If we want to test the webhook on our computer and we are behind a proxy/NAT device or a firewall, we need a tool like Ngrok. Glacier will trigger the webhook and make a POST request to the Ngrok cloud; the request is then forwarded to your local Ngrok client, which in turn forwards it to the Node.js app listening on port 3000. Go to [https://ngrok.com/](https://ngrok.com/), create a free account, download the binary, and connect to your account.
To start an HTTP tunnel forwarding to your local port 3000, run:

```bash
./ngrok http 3000
```

You should see something like this:

```
ngrok                                                       (Ctrl+C to quit)

Take our ngrok in production survey! https://forms.gle/aXiBFWzEA36DudFn6

Session Status                online
Account                       javier.toledo@avalabs.org (Plan: Free)
Version                       3.8.0
Region                        United States (us)
Latency                       48ms
Web Interface                 http://127.0.0.1:4040
Forwarding                    https://c902-2600-1700-5220-11a0-813c-d5ac-d72c-f7fd.ngrok-free.app -> http://localhost:3000

Connections                   ttl     opn     rt1     rt5     p50     p90
                              33      0       0.00    0.00    5.02    5.05

HTTP Requests
-------------
```

#### 3.2 - Create the webhook

The webhook can be created using the [AvaCloud Dashboard](https://app.avacloud.io/) or the Glacier API. For convenience, we are going to use cURL. Copy the forwarding URL generated by Ngrok, append the `/callback` path, and set the address we want to monitor:

```bash
curl --location 'https://glacier-api-dev.avax.network/v1/webhooks' \
--header 'x-glacier-api-key: ' \
--header 'Content-Type: application/json' \
--data '{
  "url": "https://c902-2600-1700-5220-11a0-813c-d5ac-d72c-f7fd.ngrok-free.app/callback",
  "chainId": "43113",
  "eventType": "address_activity",
  "includeInternalTxs": true,
  "includeLogs": true,
  "metadata": {
    "addresses": ["0x8ae323046633A07FB162043f28Cea39FFc23B50A"]
  },
  "name": "My wallet",
  "description": "My wallet"
}'
```

Don't forget to add your API key. If you don't have one, go to the [AvaCloud Dashboard](https://app.avacloud.io/) and create a new one.

#### 3.3 - The backend

Create a Node.js app with Express; the code below receives the webhook and dispatches the notification. To run the backend, we first need to add the environment variables in the root of the project.
For that, create an `.env` file with the following values:

```
PORT=3000
ONESIGNAL_API_KEY=
APP_ID=
```

To get the App ID from OneSignal, go to **Settings** > **Keys and IDs**.

Since we are simulating the connection to a database to retrieve the `externalID`, we need to add the wallet address and the OneSignal `externalID` to the `myDB` array:

```javascript
// simulating a DB
const myDB = [
  {
    name: 'wallet1',
    address: '0x8ae323046633A07FB162043f28Cea39FFc23B50A',
    externalID: '9c96e91d40c7a44c763fb55960e12293afbcfaf6228860550b0c1cc09cd40ac3'
  },
  {
    name: 'wallet2',
    address: '0x1f83eC80D755A87B31553f670070bFD897c40CE0',
    externalID: '0xd39d39c99305c6df2446d5cc3d584dc1eb041d95ac8fb35d4246f1d2176bf330'
  }
];
```

The code handles a webhook event triggered when a wallet receives a transaction: it looks up the receiving address in the simulated "database" to retrieve the corresponding OneSignal `externalID`, then instructs OneSignal to dispatch a notification, and OneSignal delivers the web push notification to the browser.
```javascript require('dotenv').config(); const axios = require('axios'); const express = require('express'); const bodyParser = require('body-parser'); const path = require('path'); const app = express(); const port = process.env.PORT || 3000; // Serve static website app.use(bodyParser.json()); app.use(express.static(path.join(__dirname, './client'))); //simulating a DB const myDB = [ { name: 'wallet1', address: '0x8ae323046633A07FB162043f28Cea39FFc23B50A', externalID: '9c96e91d40c7a44c763fb55960e12293afbcfaf6228860550b0c1cc09cd40ac3' }, { name: 'wallet2', address: '0x1f83eC80D755A87B31553f670070bFD897c40CE0', externalID: '0xd39d39c99305c6df2446d5cc3d584dc1eb041d95ac8fb35d4246f1d2176bf330' } ]; app.post('/callback', async (req, res) => { const { body } = req; try { res.sendStatus(200); handleTransaction(body.event.transaction).catch(error => { console.error('Error processing transaction:', error); }); } catch (error) { console.error('Error processing transaction:', error); res.status(500).json({ error: 'Internal server error' }); } }); // Handle transaction async function handleTransaction(transaction) { console.log('*****Transaction:', transaction); const notifications = []; const erc20Transfers = transaction?.erc20Transfers || []; for (const transfer of erc20Transfers) { const externalID = await getExternalID(transfer.to); const { symbol, valueWithDecimals } = transfer.erc20Token; notifications.push({ type: transfer.type, sender: transfer.from, receiver: transfer.to, amount: valueWithDecimals, token: symbol, externalID }); } if (transaction?.networkToken) { const { tokenSymbol, valueWithDecimals } = transaction.networkToken; const externalID = await getExternalID(transaction.to); notifications.push({ sender: transaction.from, receiver: transaction.to, amount: valueWithDecimals, token: tokenSymbol, externalID }); } if (notifications.length > 0) { sendNotifications(notifications); } } //connect to DB and return externalID async function getExternalID(address) { 
const entry = myDB.find(entry => entry.address.toLowerCase() === address.toLowerCase()); return entry ? entry.externalID : null; } // Send notifications async function sendNotifications(notifications) { for (const notification of notifications) { try { const data = { include_aliases: { external_id: [notification.externalID.toLowerCase()] }, target_channel: 'push', isAnyWeb: true, contents: { en: `You've received ${notification.amount} ${notification.token}` }, headings: { en: 'Core wallet' }, name: 'Notification', app_id: process.env.APP_ID }; console.log('data:', data); const response = await axios.post('https://onesignal.com/api/v1/notifications', data, { headers: { Authorization: `Bearer ${process.env.ONESIGNAL_API_KEY}`, 'Content-Type': 'application/json' } }); console.log('Notification sent:', response.data); } catch (error) { console.error('Error sending notification:', error); // Optionally, implement retry logic here } } } // Start the server app.listen(port, () => { console.log(`App listening at http://localhost:${port}`); }); ``` You can now start your backend server by running: ```shell node app.js ``` Send AVAX from another wallet to the wallet being monitored by the webhook and you should receive a notification with the amount of Avax received. You can try it with any other ERC20 token as well. ### Conclusion In this tutorial, we've set up a frontend to connect to the Core wallet and enable push notifications using OneSignal. We've also implemented a backend to handle webhook events and send notifications based on the received data. By integrating the frontend with the backend, users can receive real-time notifications for blockchain events. # Add addresses to EVM activity webhook (/docs/api-reference/webhook-api/webhooks/addAddressesToWebhook) --- title: Add addresses to EVM activity webhook full: true _openapi: method: PATCH route: /v1/webhooks/{id}/addresses toc: [] structuredData: headings: [] contents: - content: Add addresses to webhook. 
Only valid for EVM activity webhooks. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Add addresses to webhook. Only valid for EVM activity webhooks. # Create a webhook (/docs/api-reference/webhook-api/webhooks/createWebhook) --- title: Create a webhook full: true _openapi: method: POST route: /v1/webhooks toc: [] structuredData: headings: [] contents: - content: Create a new webhook. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Create a new webhook. # Deactivate a webhook (/docs/api-reference/webhook-api/webhooks/deactivateWebhook) --- title: Deactivate a webhook full: true _openapi: method: DELETE route: /v1/webhooks/{id} toc: [] structuredData: headings: [] contents: - content: Deactivates a webhook by ID. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Deactivates a webhook by ID. # Generate or rotate a shared secret (/docs/api-reference/webhook-api/webhooks/generateOrRotateSharedSecret) --- title: Generate or rotate a shared secret full: true _openapi: method: POST route: /v1/webhooks:generateOrRotateSharedSecret toc: [] structuredData: headings: [] contents: - content: Generates a new shared secret or rotates an existing one. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Generates a new shared secret or rotates an existing one. # List addresses by EVM activity webhooks (/docs/api-reference/webhook-api/webhooks/getAddressesFromWebhook) --- title: List addresses by EVM activity webhooks full: true _openapi: method: GET route: /v1/webhooks/{id}/addresses toc: [] structuredData: headings: [] contents: - content: List addresses by webhook.
Only valid for EVM activity webhooks. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} List addresses by webhook. Only valid for EVM activity webhooks. # Get a shared secret (/docs/api-reference/webhook-api/webhooks/getSharedSecret) --- title: Get a shared secret full: true _openapi: method: GET route: /v1/webhooks:getSharedSecret toc: [] structuredData: headings: [] contents: - content: Get a previously generated shared secret. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Get a previously generated shared secret. # Get a webhook by ID (/docs/api-reference/webhook-api/webhooks/getWebhook) --- title: Get a webhook by ID full: true _openapi: method: GET route: /v1/webhooks/{id} toc: [] structuredData: headings: [] contents: - content: Retrieves a webhook by ID. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Retrieves a webhook by ID. # List webhooks (/docs/api-reference/webhook-api/webhooks/listWebhooks) --- title: List webhooks full: true _openapi: method: GET route: /v1/webhooks toc: [] structuredData: headings: [] contents: - content: Lists webhooks for the user. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists webhooks for the user. # Remove addresses from EVM activity webhook (/docs/api-reference/webhook-api/webhooks/removeAddressesFromWebhook) --- title: Remove addresses from EVM activity webhook full: true _openapi: method: DELETE route: /v1/webhooks/{id}/addresses toc: [] structuredData: headings: [] contents: - content: Remove addresses from webhook. Only valid for EVM activity webhooks. --- {/* This file was generated by Fumadocs.
Do not edit this file directly. Any changes should be made by running the generation command again. */} Remove addresses from webhook. Only valid for EVM activity webhooks. # Update a webhook (/docs/api-reference/webhook-api/webhooks/updateWebhook) --- title: Update a webhook full: true _openapi: method: PATCH route: /v1/webhooks/{id} toc: [] structuredData: headings: [] contents: - content: Updates an existing webhook. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Updates an existing webhook. # Get AVAX supply information (/docs/api-reference/data-api/avax-supply/getAvaxSupply) --- title: Get AVAX supply information full: true _openapi: method: GET route: /v1/avax/supply toc: [] structuredData: headings: [] contents: - content: >- Get AVAX supply information that includes total supply, circulating supply, total p burned, total c burned, total x burned, total staked, total locked, total rewards, and last updated. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Get AVAX supply information that includes total supply, circulating supply, total p burned, total c burned, total x burned, total staked, total locked, total rewards, and last updated. # Get logs for requests made by client (/docs/api-reference/data-api/data-api-usage-metrics/getApiLogs) --- title: Get logs for requests made by client full: true _openapi: method: GET route: /v1/apiLogs toc: [] structuredData: headings: [] contents: - content: >- Gets logs for requests made by client over a specified time interval for a specific organization. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Gets logs for requests made by client over a specified time interval for a specific organization. 
# Get usage metrics for the Data API (/docs/api-reference/data-api/data-api-usage-metrics/getApiUsageMetrics) --- title: Get usage metrics for the Data API full: true _openapi: method: GET route: /v1/apiUsageMetrics toc: [] structuredData: headings: [] contents: - content: >- Gets metrics for Data API usage over a specified time interval aggregated at the specified time-duration granularity. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Gets metrics for Data API usage over a specified time interval aggregated at the specified time-duration granularity. # Get usage metrics for the Primary Network RPC (/docs/api-reference/data-api/data-api-usage-metrics/getPrimaryNetworkRpcUsageMetrics) --- title: Get usage metrics for the Primary Network RPC full: true _openapi: method: GET route: /v1/primaryNetworkRpcUsageMetrics toc: [] structuredData: headings: [] contents: - content: >- Gets metrics for public Primary Network RPC usage over a specified time interval aggregated at the specified time-duration granularity. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Gets metrics for public Primary Network RPC usage over a specified time interval aggregated at the specified time-duration granularity. # Get usage metrics for the Subnet RPC (/docs/api-reference/data-api/data-api-usage-metrics/getSubnetRpcUsageMetrics) --- title: Get usage metrics for the Subnet RPC full: true _openapi: method: GET route: /v1/subnetRpcUsageMetrics toc: [] structuredData: headings: [] contents: - content: >- Gets metrics for public Subnet RPC usage over a specified time interval aggregated at the specified time-duration granularity. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. 
*/} Gets metrics for public Subnet RPC usage over a specified time interval aggregated at the specified time-duration granularity. # Get native token balance (/docs/api-reference/data-api/evm-balances/getNativeBalance) --- title: Get native token balance full: true _openapi: method: GET route: /v1/chains/{chainId}/addresses/{address}/balances:getNative toc: [] structuredData: headings: [] contents: - content: >- Gets native token balance of a wallet address. Balance at a given block can be retrieved with the `blockNumber` parameter. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Gets native token balance of a wallet address. Balance at a given block can be retrieved with the `blockNumber` parameter. # List collectible (ERC-721/ERC-1155) balances (/docs/api-reference/data-api/evm-balances/listCollectibleBalances) --- title: List collectible (ERC-721/ERC-1155) balances full: true _openapi: method: GET route: /v1/chains/{chainId}/addresses/{address}/balances:listCollectibles toc: [] structuredData: headings: [] contents: - content: >- Lists ERC-721 and ERC-1155 token balances of a wallet address. Balance for a specific contract can be retrieved with the `contractAddress` parameter. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists ERC-721 and ERC-1155 token balances of a wallet address. Balance for a specific contract can be retrieved with the `contractAddress` parameter. # List ERC-1155 balances (/docs/api-reference/data-api/evm-balances/listErc1155Balances) --- title: List ERC-1155 balances full: true _openapi: method: GET route: /v1/chains/{chainId}/addresses/{address}/balances:listErc1155 toc: [] structuredData: headings: [] contents: - content: >- Lists ERC-1155 token balances of a wallet address. 
Balance at a given block can be retrieved with the `blockNumber` parameter. Balance for a specific contract can be retrieved with the `contractAddress` parameter. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists ERC-1155 token balances of a wallet address. Balance at a given block can be retrieved with the `blockNumber` parameter. Balance for a specific contract can be retrieved with the `contractAddress` parameter. # List ERC-20 balances (/docs/api-reference/data-api/evm-balances/listErc20Balances) --- title: List ERC-20 balances full: true _openapi: method: GET route: /v1/chains/{chainId}/addresses/{address}/balances:listErc20 toc: [] structuredData: headings: [] contents: - content: >- Lists ERC-20 token balances of a wallet address. Balance at a given block can be retrieved with the `blockNumber` parameter. Balance for specific contracts can be retrieved with the `contractAddresses` parameter. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists ERC-20 token balances of a wallet address. Balance at a given block can be retrieved with the `blockNumber` parameter. Balance for specific contracts can be retrieved with the `contractAddresses` parameter. # List ERC-721 balances (/docs/api-reference/data-api/evm-balances/listErc721Balances) --- title: List ERC-721 balances full: true _openapi: method: GET route: /v1/chains/{chainId}/addresses/{address}/balances:listErc721 toc: [] structuredData: headings: [] contents: - content: >- Lists ERC-721 token balances of a wallet address. Balance for a specific contract can be retrieved with the `contractAddress` parameter. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists ERC-721 token balances of a wallet address. 
Balance for a specific contract can be retrieved with the `contractAddress` parameter. # Get block (/docs/api-reference/data-api/evm-blocks/getBlock) --- title: Get block full: true _openapi: method: GET route: /v1/chains/{chainId}/blocks/{blockId} toc: [] structuredData: headings: [] contents: - content: Gets the details of an individual block on the EVM-compatible chain. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Gets the details of an individual block on the EVM-compatible chain. # List latest blocks (/docs/api-reference/data-api/evm-blocks/getLatestBlocks) --- title: List latest blocks full: true _openapi: method: GET route: /v1/chains/{chainId}/blocks toc: [] structuredData: headings: [] contents: - content: >- Lists the latest indexed blocks on the EVM-compatible chain sorted in descending order by block timestamp. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists the latest indexed blocks on the EVM-compatible chain sorted in descending order by block timestamp. # List latest blocks across all supported EVM chains (/docs/api-reference/data-api/evm-blocks/listLatestBlocksAllChains) --- title: List latest blocks across all supported EVM chains full: true _openapi: method: GET route: /v1/blocks toc: [] structuredData: headings: [] contents: - content: >- Lists the most recent blocks from all supported EVM-compatible chains. The results can be filtered by network. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists the most recent blocks from all supported EVM-compatible chains. The results can be filtered by network. 
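The balance routes above share a pattern: a chain- and address-scoped path with optional query parameters such as `blockNumber` or `contractAddress`. A minimal sketch of composing such a request URL in Python; the base URL here is a placeholder, not the real API host:

```python
from urllib.parse import urlencode

# Placeholder host; substitute the actual Data API base URL.
BASE = "https://data-api.example.com"

def native_balance_url(chain_id, address, block_number=None):
    """Build the `balances:getNative` route; `blockNumber` is optional
    and, when given, requests the balance at that historical block."""
    url = f"{BASE}/v1/chains/{chain_id}/addresses/{address}/balances:getNative"
    if block_number is not None:
        url += "?" + urlencode({"blockNumber": block_number})
    return url
```

The same shape applies to the ERC-20/721/1155 listing routes, with `contractAddress` (or `contractAddresses`) taking the place of `blockNumber`.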
# Get chain information (/docs/api-reference/data-api/evm-chains/getChainInfo) --- title: Get chain information full: true _openapi: method: GET route: /v1/chains/{chainId} toc: [] structuredData: headings: [] contents: - content: >- Gets chain information for the EVM-compatible chain if supported by AvaCloud. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Gets chain information for the EVM-compatible chain if supported by AvaCloud. # List all chains associated with a given address (/docs/api-reference/data-api/evm-chains/listAddressChains) --- title: List all chains associated with a given address full: true _openapi: method: GET route: /v1/address/{address}/chains toc: [] structuredData: headings: [] contents: - content: >- Lists the chains where the specified address has participated in transactions or ERC token transfers, either as a sender or receiver. The data is refreshed every 15 minutes. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists the chains where the specified address has participated in transactions or ERC token transfers, either as a sender or receiver. The data is refreshed every 15 minutes. # List chains (/docs/api-reference/data-api/evm-chains/supportedChains) --- title: List chains full: true _openapi: method: GET route: /v1/chains toc: [] structuredData: headings: [] contents: - content: >- Lists the AvaCloud supported EVM-compatible chains. Filterable by network. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists the AvaCloud supported EVM-compatible chains. Filterable by network. 
# Get contract metadata (/docs/api-reference/data-api/evm-contracts/getContractMetadata) --- title: Get contract metadata full: true _openapi: method: GET route: /v1/chains/{chainId}/addresses/{address} toc: [] structuredData: headings: [] contents: - content: Gets metadata about the contract at the given address. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Gets metadata about the contract at the given address. # Get deployment transaction (/docs/api-reference/data-api/evm-transactions/getDeploymentTransaction) --- title: Get deployment transaction full: true _openapi: method: GET route: /v1/chains/{chainId}/contracts/{address}/transactions:getDeployment toc: [] structuredData: headings: [] contents: - content: >- If the address is a smart contract, returns the transaction in which it was deployed. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} If the address is a smart contract, returns the transaction in which it was deployed. # Get transaction (/docs/api-reference/data-api/evm-transactions/getTransaction) --- title: Get transaction full: true _openapi: method: GET route: /v1/chains/{chainId}/transactions/{txHash} toc: [] structuredData: headings: [] contents: - content: Gets the details of a single transaction. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Gets the details of a single transaction. # List transactions for a block (/docs/api-reference/data-api/evm-transactions/getTransactionsForBlock) --- title: List transactions for a block full: true _openapi: method: GET route: /v1/chains/{chainId}/blocks/{blockId}/transactions toc: [] structuredData: headings: [] contents: - content: Lists the transactions that occurred in a given block. 
--- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists the transactions that occurred in a given block. # List deployed contracts (/docs/api-reference/data-api/evm-transactions/listContractDeployments) --- title: List deployed contracts full: true _openapi: method: GET route: /v1/chains/{chainId}/contracts/{address}/deployments toc: [] structuredData: headings: [] contents: - content: Lists all contracts deployed by the given address. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists all contracts deployed by the given address. # List ERC-1155 transfers (/docs/api-reference/data-api/evm-transactions/listErc1155Transactions) --- title: List ERC-1155 transfers full: true _openapi: method: GET route: /v1/chains/{chainId}/addresses/{address}/transactions:listErc1155 toc: [] structuredData: headings: [] contents: - content: Lists ERC-1155 transfers for an address. Filterable by block range. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists ERC-1155 transfers for an address. Filterable by block range. # List ERC-20 transfers (/docs/api-reference/data-api/evm-transactions/listErc20Transactions) --- title: List ERC-20 transfers full: true _openapi: method: GET route: /v1/chains/{chainId}/addresses/{address}/transactions:listErc20 toc: [] structuredData: headings: [] contents: - content: Lists ERC-20 transfers for an address. Filterable by block range. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists ERC-20 transfers for an address. Filterable by block range. 
# List ERC-721 transfers (/docs/api-reference/data-api/evm-transactions/listErc721Transactions) --- title: List ERC-721 transfers full: true _openapi: method: GET route: /v1/chains/{chainId}/addresses/{address}/transactions:listErc721 toc: [] structuredData: headings: [] contents: - content: Lists ERC-721 transfers for an address. Filterable by block range. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists ERC-721 transfers for an address. Filterable by block range. # List internal transactions (/docs/api-reference/data-api/evm-transactions/listInternalTransactions) --- title: List internal transactions full: true _openapi: method: GET route: /v1/chains/{chainId}/addresses/{address}/transactions:listInternals toc: [] structuredData: headings: [] contents: - content: >- Returns a list of internal transactions for an address and chain. Filterable by block range. Note that the internal transactions list only contains `CALL` or `CALLCODE` transactions with a non-zero value and `CREATE`/`CREATE2`/`CREATE3` transactions. To get a complete list of internal transactions use the `debug_` prefixed RPC methods on an archive node. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns a list of internal transactions for an address and chain. Filterable by block range. Note that the internal transactions list only contains `CALL` or `CALLCODE` transactions with a non-zero value and `CREATE`/`CREATE2`/`CREATE3` transactions. To get a complete list of internal transactions use the `debug_` prefixed RPC methods on an archive node. 
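As the internal-transactions note above says, the indexed list covers only value-bearing `CALL`/`CALLCODE` and `CREATE`-family frames; a complete call tree requires a `debug_`-prefixed RPC against an archive node. A sketch of the JSON-RPC request body for go-ethereum's `debug_traceTransaction` with the built-in `callTracer`:

```python
import json

def trace_call_payload(tx_hash):
    """JSON-RPC body for geth-style debug_traceTransaction with the
    callTracer, which returns the full internal call tree for a
    transaction (needs a node with the debug namespace enabled)."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "debug_traceTransaction",
        "params": [tx_hash, {"tracer": "callTracer"}],
    })
```

POST this body to the chain's RPC endpoint; the response nests every internal call, including zero-value ones the Data API list omits.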
# List latest transactions (/docs/api-reference/data-api/evm-transactions/listLatestTransactions) --- title: List latest transactions full: true _openapi: method: GET route: /v1/chains/{chainId}/transactions toc: [] structuredData: headings: [] contents: - content: Lists the latest transactions. Filterable by status. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists the latest transactions. Filterable by status. # List the latest transactions across all supported EVM chains (/docs/api-reference/data-api/evm-transactions/listLatestTransactionsAllChains) --- title: List the latest transactions across all supported EVM chains full: true _openapi: method: GET route: /v1/transactions toc: [] structuredData: headings: [] contents: - content: >- Lists the most recent transactions from all supported EVM-compatible chains. The results can be filtered based on transaction status. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists the most recent transactions from all supported EVM-compatible chains. The results can be filtered based on transaction status. # List native transactions (/docs/api-reference/data-api/evm-transactions/listNativeTransactions) --- title: List native transactions full: true _openapi: method: GET route: /v1/chains/{chainId}/addresses/{address}/transactions:listNative toc: [] structuredData: headings: [] contents: - content: Lists native transactions for an address. Filterable by block range. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists native transactions for an address. Filterable by block range. 
# List transactions (/docs/api-reference/data-api/evm-transactions/listTransactions) --- title: List transactions full: true _openapi: method: GET route: /v1/chains/{chainId}/addresses/{address}/transactions toc: [] structuredData: headings: [] contents: - content: >- Returns a list of transactions where the given wallet address had an on-chain interaction for the given chain. The ERC-20 transfers, ERC-721 transfers, ERC-1155, and internal transactions returned are only those where the input address had an interaction. Specifically, those lists only include entries where the input address was the sender (`from` field) or the receiver (`to` field) for the sub-transaction. Therefore the transactions returned from this list may not be complete representations of the on-chain data. For a complete view of a transaction use the `/chains/:chainId/transactions/:txHash` endpoint. Filterable by block ranges. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns a list of transactions where the given wallet address had an on-chain interaction for the given chain. The ERC-20 transfers, ERC-721 transfers, ERC-1155, and internal transactions returned are only those where the input address had an interaction. Specifically, those lists only include entries where the input address was the sender (`from` field) or the receiver (`to` field) for the sub-transaction. Therefore the transactions returned from this list may not be complete representations of the on-chain data. For a complete view of a transaction use the `/chains/:chainId/transactions/:txHash` endpoint. Filterable by block ranges. 
# List transactions v2 (/docs/api-reference/data-api/evm-transactions/listTransactionsV2) --- title: List transactions v2 full: true _openapi: method: GET route: /v2/chains/{chainId}/addresses/{address}/transactions toc: [] structuredData: headings: [] contents: - content: >- Returns a list of transactions where the given wallet address had an on-chain interaction for the given chain. The ERC-20 transfers (with token reputation), ERC-721 transfers, ERC-1155, and internal transactions returned are only those where the input address had an interaction. Specifically, those lists only include entries where the input address was the sender (`from` field) or the receiver (`to` field) for the sub-transaction. Therefore the transactions returned from this list may not be complete representations of the on-chain data. For a complete view of a transaction use the `/chains/:chainId/transactions/:txHash` endpoint. Filterable by block ranges. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns a list of transactions where the given wallet address had an on-chain interaction for the given chain. The ERC-20 transfers (with token reputation), ERC-721 transfers, ERC-1155, and internal transactions returned are only those where the input address had an interaction. Specifically, those lists only include entries where the input address was the sender (`from` field) or the receiver (`to` field) for the sub-transaction. Therefore the transactions returned from this list may not be complete representations of the on-chain data. For a complete view of a transaction use the `/chains/:chainId/transactions/:txHash` endpoint. Filterable by block ranges. 
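List endpoints such as the two transaction routes above return results page by page. The field names below (`transactions`, `nextPageToken`) are assumptions modeled on common cursor pagination; verify them against the response schema before relying on them. A client-side paging sketch with a stubbed fetcher standing in for the HTTP call:

```python
def paginate(fetch_page):
    """Yield items across pages by following the cursor until it is absent."""
    token = None
    while True:
        page = fetch_page(token)          # e.g. GET .../transactions?pageToken=...
        yield from page["transactions"]
        token = page.get("nextPageToken")  # assumed cursor field name
        if not token:
            break

# Stub: two pages linked by a cursor, simulating the API responses.
pages = {
    None: {"transactions": [1, 2], "nextPageToken": "t1"},
    "t1": {"transactions": [3]},
}
result = list(paginate(lambda tok: pages[tok]))
```

Note the caveat above for the Primary Network variant: a page may come back short of the requested page size, so only an absent cursor, never a short page, signals the end.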
# List ERC transfers (/docs/api-reference/data-api/evm-transactions/listTransfers) --- title: List ERC transfers full: true _openapi: method: GET route: /v1/chains/{chainId}/tokens/{address}/transfers toc: [] structuredData: headings: [] contents: - content: >- Lists ERC transfers for an ERC-20, ERC-721, or ERC-1155 contract address. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists ERC transfers for an ERC-20, ERC-721, or ERC-1155 contract address. # Get the health of the service (/docs/api-reference/data-api/health-check/data-health-check) --- title: Get the health of the service full: true _openapi: method: GET route: /v1/health-check toc: [] structuredData: headings: [] contents: - content: >- Check the health of the service. This checks the read and write health of the database and cache. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Check the health of the service. This checks the read and write health of the database and cache. # Get the liveliness of the service (reads only) (/docs/api-reference/data-api/health-check/live-check) --- title: Get the liveliness of the service (reads only) full: true _openapi: method: GET route: /v1/live-check toc: [] structuredData: headings: [] contents: - content: Check the liveliness of the service (reads only). --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Check the liveliness of the service (reads only). # Get an ICM message (/docs/api-reference/data-api/interchain-messaging/getIcmMessage) --- title: Get an ICM message full: true _openapi: method: GET route: /v1/icm/messages/{messageId} toc: [] structuredData: headings: [] contents: - content: Gets an ICM message by teleporter message ID. 
--- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Gets an ICM message by teleporter message ID. # List ICM messages (/docs/api-reference/data-api/interchain-messaging/listIcmMessages) --- title: List ICM messages full: true _openapi: method: GET route: /v1/icm/messages toc: [] structuredData: headings: [] contents: - content: Lists ICM messages. Ordered by timestamp in descending order. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists ICM messages. Ordered by timestamp in descending order. # List ICM messages by address (/docs/api-reference/data-api/interchain-messaging/listIcmMessagesByAddress) --- title: List ICM messages by address full: true _openapi: method: GET route: /v1/icm/addresses/{address}/messages toc: [] structuredData: headings: [] contents: - content: >- Lists ICM messages by address. Ordered by timestamp in descending order. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists ICM messages by address. Ordered by timestamp in descending order. # Get token details (/docs/api-reference/data-api/nfts/getTokenDetails) --- title: Get token details full: true _openapi: method: GET route: /v1/chains/{chainId}/nfts/collections/{address}/tokens/{tokenId} toc: [] structuredData: headings: [] contents: - content: Gets token details for a specific token of an NFT contract. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Gets token details for a specific token of an NFT contract. 
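The ICM list endpoints earlier in this group return messages ordered by timestamp in descending order (newest first). When replaying messages in delivery order, a client can re-sort ascending; a small sketch with hypothetical message objects:

```python
# Hypothetical message shapes; the real responses carry more fields.
msgs = [
    {"messageId": "b", "timestamp": 200},
    {"messageId": "a", "timestamp": 100},
]

# API order is newest-first; flip to oldest-first for replay.
replay_order = sorted(msgs, key=lambda m: m["timestamp"])
```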
# List tokens (/docs/api-reference/data-api/nfts/listTokens) --- title: List tokens full: true _openapi: method: GET route: /v1/chains/{chainId}/nfts/collections/{address}/tokens toc: [] structuredData: headings: [] contents: - content: Lists tokens for an NFT contract. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists tokens for an NFT contract. # Reindex NFT metadata (/docs/api-reference/data-api/nfts/reindexNft) --- title: Reindex NFT metadata full: true _openapi: method: POST route: /v1/chains/{chainId}/nfts/collections/{address}/tokens/{tokenId}:reindex toc: [] structuredData: headings: [] contents: - content: >- Triggers reindexing of token metadata for an NFT token. Reindexing can only be called once per hour for each NFT token. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Triggers reindexing of token metadata for an NFT token. Reindexing can only be called once per hour for each NFT token. # Get operation (/docs/api-reference/data-api/operations/getOperationResult) --- title: Get operation full: true _openapi: method: GET route: /v1/operations/{operationId} toc: [] structuredData: headings: [] contents: - content: Gets operation details for the given operation id. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Gets operation details for the given operation id. # Create transaction export operation (/docs/api-reference/data-api/operations/postTransactionExportJob) --- title: Create transaction export operation full: true _openapi: method: POST route: /v1/operations/transactions:export toc: [] structuredData: headings: [] contents: - content: >- Trigger a transaction export operation with given parameters. 
The transaction export operation runs asynchronously in the background. The status of the job can be retrieved from the `/v1/operations/:operationId` endpoint using the `operationId` returned from this endpoint. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Trigger a transaction export operation with given parameters. The transaction export operation runs asynchronously in the background. The status of the job can be retrieved from the `/v1/operations/:operationId` endpoint using the `operationId` returned from this endpoint. # Get balances (/docs/api-reference/data-api/primary-network-balances/getBalancesByAddresses) --- title: Get balances full: true _openapi: method: GET route: /v1/networks/{network}/blockchains/{blockchainId}/balances toc: [] structuredData: headings: [] contents: - content: >- Gets primary network balances for one of the Primary Network chains for the supplied addresses. C-Chain balances returned are only the shared atomic memory balance. For EVM balance, use the `/v1/chains/:chainId/addresses/:addressId/balances:getNative` endpoint. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Gets primary network balances for one of the Primary Network chains for the supplied addresses. C-Chain balances returned are only the shared atomic memory balance. For EVM balance, use the `/v1/chains/:chainId/addresses/:addressId/balances:getNative` endpoint. # Get asset details (/docs/api-reference/data-api/primary-network/getAssetDetails) --- title: Get asset details full: true _openapi: method: GET route: /v1/networks/{network}/blockchains/{blockchainId}/assets/{assetId} toc: [] structuredData: headings: [] contents: - content: Gets asset details corresponding to the given asset id on the X-Chain. --- {/* This file was generated by Fumadocs. 
Do not edit this file directly. Any changes should be made by running the generation command again. */} Gets asset details corresponding to the given asset id on the X-Chain. # Get blockchain details by ID (/docs/api-reference/data-api/primary-network/getBlockchainById) --- title: Get blockchain details by ID full: true _openapi: method: GET route: /v1/networks/{network}/blockchains/{blockchainId} toc: [] structuredData: headings: [] contents: - content: Get details of the blockchain registered on the network. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Get details of the blockchain registered on the network. # Get chain interactions for addresses (/docs/api-reference/data-api/primary-network/getChainIdsForAddresses) --- title: Get chain interactions for addresses full: true _openapi: method: GET route: /v1/networks/{network}/addresses:listChainIds toc: [] structuredData: headings: [] contents: - content: >- Returns Primary Network chains that each address has touched in the form of an address mapped array. If an address has had any on-chain interaction for a chain, that chain's chain id will be returned. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns Primary Network chains that each address has touched in the form of an address mapped array. If an address has had any on-chain interaction for a chain, that chain's chain id will be returned. # Get network details (/docs/api-reference/data-api/primary-network/getNetworkDetails) --- title: Get network details full: true _openapi: method: GET route: /v1/networks/{network} toc: [] structuredData: headings: [] contents: - content: Gets network details such as validator and delegator stats. --- {/* This file was generated by Fumadocs. Do not edit this file directly. 
Any changes should be made by running the generation command again. */} Gets network details such as validator and delegator stats. # Get single validator details (/docs/api-reference/data-api/primary-network/getSingleValidatorDetails) --- title: Get single validator details full: true _openapi: method: GET route: /v1/networks/{network}/validators/{nodeId} toc: [] structuredData: headings: [] contents: - content: >- List validator details for a single validator. Filterable by validation status. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} List validator details for a single validator. Filterable by validation status. # Get Subnet details by ID (/docs/api-reference/data-api/primary-network/getSubnetById) --- title: Get Subnet details by ID full: true _openapi: method: GET route: /v1/networks/{network}/subnets/{subnetId} toc: [] structuredData: headings: [] contents: - content: Get details of the Subnet registered on the network. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Get details of the Subnet registered on the network. # List blockchains (/docs/api-reference/data-api/primary-network/listBlockchains) --- title: List blockchains full: true _openapi: method: GET route: /v1/networks/{network}/blockchains toc: [] structuredData: headings: [] contents: - content: Lists all blockchains registered on the network. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists all blockchains registered on the network. 
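The Primary Network routes in this group are all scoped by a `{network}` path segment; `mainnet` and `fuji` (the testnet) are the usual values, though that set is an assumption to confirm against the live API. A tiny route-builder sketch that rejects unknown networks early:

```python
# Assumed set of accepted network names; verify against the API spec.
VALID_NETWORKS = {"mainnet", "fuji"}

def network_route(network, *segments):
    """Compose a /v1/networks/{network}/... path, validating the network name."""
    if network not in VALID_NETWORKS:
        raise ValueError(f"unknown network: {network}")
    return "/v1/networks/" + "/".join((network, *segments))
```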
# List delegators (/docs/api-reference/data-api/primary-network/listDelegators) --- title: List delegators full: true _openapi: method: GET route: /v1/networks/{network}/delegators toc: [] structuredData: headings: [] contents: - content: Lists details for delegators. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists details for delegators. # List L1 validators (/docs/api-reference/data-api/primary-network/listL1Validators) --- title: List L1 validators full: true _openapi: method: GET route: /v1/networks/{network}/l1Validators toc: [] structuredData: headings: [] contents: - content: >- Lists details for L1 validators. By default, returns details for all active L1 validators. Filterable by validator node ids, subnet id, and validation id. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists details for L1 validators. By default, returns details for all active L1 validators. Filterable by validator node ids, subnet id, and validation id. # List subnets (/docs/api-reference/data-api/primary-network/listSubnets) --- title: List subnets full: true _openapi: method: GET route: /v1/networks/{network}/subnets toc: [] structuredData: headings: [] contents: - content: Lists all subnets registered on the network. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists all subnets registered on the network. # List validators (/docs/api-reference/data-api/primary-network/listValidators) --- title: List validators full: true _openapi: method: GET route: /v1/networks/{network}/validators toc: [] structuredData: headings: [] contents: - content: >- Lists details for validators. By default, returns details for all validators. The nodeIds parameter supports substring matching. 
Filterable by validation status, delegation capacity, time remaining, fee percentage, uptime performance, and subnet id. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists details for validators. By default, returns details for all validators. The nodeIds parameter supports substring matching. Filterable by validation status, delegation capacity, time remaining, fee percentage, uptime performance, and subnet id. # Get block (/docs/api-reference/data-api/primary-network-blocks/getBlockById) --- title: Get block full: true _openapi: method: GET route: /v1/networks/{network}/blockchains/{blockchainId}/blocks/{blockId} toc: [] structuredData: headings: [] contents: - content: >- Gets a block by block height or block hash on one of the Primary Network chains. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Gets a block by block height or block hash on one of the Primary Network chains. # List latest blocks (/docs/api-reference/data-api/primary-network-blocks/listLatestPrimaryNetworkBlocks) --- title: List latest blocks full: true _openapi: method: GET route: /v1/networks/{network}/blockchains/{blockchainId}/blocks toc: [] structuredData: headings: [] contents: - content: Lists latest blocks on one of the Primary Network chains. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists latest blocks on one of the Primary Network chains. 
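The block route above accepts either a block height or a block hash in the same `{blockId}` slot. A client-side sketch for labeling which form a given identifier takes (a heuristic only: heights are purely decimal, hashes are not):

```python
def block_id_kind(block_id):
    """Classify a {blockId} value: a purely decimal string is treated as a
    height, anything else (e.g. a hex or CB58 hash) as a block hash."""
    return "height" if block_id.isdigit() else "hash"
```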
# List blocks proposed by node (/docs/api-reference/data-api/primary-network-blocks/listPrimaryNetworkBlocksByNodeId) --- title: List blocks proposed by node full: true _openapi: method: GET route: /v1/networks/{network}/blockchains/{blockchainId}/nodes/{nodeId}/blocks toc: [] structuredData: headings: [] contents: - content: >- Lists the latest blocks proposed by a given NodeID on one of the Primary Network chains. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists the latest blocks proposed by a given NodeID on one of the Primary Network chains. # List historical rewards (/docs/api-reference/data-api/primary-network-rewards/listHistoricalPrimaryNetworkRewards) --- title: List historical rewards full: true _openapi: method: GET route: /v1/networks/{network}/rewards toc: [] structuredData: headings: [] contents: - content: >- Lists historical rewards on the Primary Network for the supplied addresses. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists historical rewards on the Primary Network for the supplied addresses. # List pending rewards (/docs/api-reference/data-api/primary-network-rewards/listPendingPrimaryNetworkRewards) --- title: List pending rewards full: true _openapi: method: GET route: /v1/networks/{network}/rewards:listPending toc: [] structuredData: headings: [] contents: - content: >- Lists pending rewards on the Primary Network for the supplied addresses. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists pending rewards on the Primary Network for the supplied addresses. 
# Get transaction (/docs/api-reference/data-api/primary-network-transactions/getTxByHash) --- title: Get transaction full: true _openapi: method: GET route: /v1/networks/{network}/blockchains/{blockchainId}/transactions/{txHash} toc: [] structuredData: headings: [] contents: - content: >- Gets the details of a single transaction on one of the Primary Network chains. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Gets the details of a single transaction on one of the Primary Network chains. # List staking transactions (/docs/api-reference/data-api/primary-network-transactions/listActivePrimaryNetworkStakingTransactions) --- title: List staking transactions full: true _openapi: method: GET route: /v1/networks/{network}/blockchains/{blockchainId}/transactions:listStaking toc: [] structuredData: headings: [] contents: - content: >- Lists active staking transactions on the P-Chain for the supplied addresses. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists active staking transactions on the P-Chain for the supplied addresses. # List asset transactions (/docs/api-reference/data-api/primary-network-transactions/listAssetTransactions) --- title: List asset transactions full: true _openapi: method: GET route: >- /v1/networks/{network}/blockchains/{blockchainId}/assets/{assetId}/transactions toc: [] structuredData: headings: [] contents: - content: >- Lists asset transactions corresponding to the given asset id on the X-Chain. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists asset transactions corresponding to the given asset id on the X-Chain. 
# List latest transactions (/docs/api-reference/data-api/primary-network-transactions/listLatestPrimaryNetworkTransactions) --- title: List latest transactions full: true _openapi: method: GET route: /v1/networks/{network}/blockchains/{blockchainId}/transactions toc: [] structuredData: headings: [] contents: - content: >- Lists the latest transactions on one of the Primary Network chains. Transactions are filterable by addresses, txTypes, and timestamps. When querying for latest transactions without an address parameter, filtering by txTypes and timestamps is not supported. An address filter must be provided to utilize the txTypes and timestamp filters. For the P-Chain, you can fetch all L1 validator-related transactions, such as ConvertSubnetToL1Tx and IncreaseL1ValidatorBalanceTx, using the unique L1 validation ID. These transactions are further filterable by txTypes and timestamps as well. Given that each transaction may return a large number of UTXO objects, bounded only by the maximum transaction size, the query may return fewer transactions than the provided page size. The result will contain fewer results than the page size if the number of UTXOs contained in the resulting transactions reaches a performance threshold. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists the latest transactions on one of the Primary Network chains. Transactions are filterable by addresses, txTypes, and timestamps. When querying for latest transactions without an address parameter, filtering by txTypes and timestamps is not supported. An address filter must be provided to utilize the txTypes and timestamp filters. For the P-Chain, you can fetch all L1 validator-related transactions, such as ConvertSubnetToL1Tx and IncreaseL1ValidatorBalanceTx, using the unique L1 validation ID. These transactions are further filterable by txTypes and timestamps as well. 
Given that each transaction may return a large number of UTXO objects, bounded only by the maximum transaction size, the query may return fewer transactions than the provided page size. The result will contain fewer results than the page size if the number of UTXOs contained in the resulting transactions reaches a performance threshold. # Get last activity timestamp by addresses (/docs/api-reference/data-api/primary-network-utxos/getLastActivityTimestampByAddresses) --- title: Get last activity timestamp by addresses full: true _openapi: method: GET route: >- /v1/networks/{network}/blockchains/{blockchainId}/lastActivityTimestampByAddresses toc: [] structuredData: headings: [] contents: - content: >- Gets the last activity timestamp for the supplied addresses on one of the Primary Network chains. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Gets the last activity timestamp for the supplied addresses on one of the Primary Network chains. # Get last activity timestamp by addresses v2 (/docs/api-reference/data-api/primary-network-utxos/getLastActivityTimestampByAddressesV2) --- title: Get last activity timestamp by addresses v2 full: true _openapi: method: POST route: >- /v1/networks/{network}/blockchains/{blockchainId}/lastActivityTimestampByAddresses toc: [] structuredData: headings: [] contents: - content: >- Gets the last activity timestamp for the supplied addresses on one of the Primary Network chains. V2 route supports querying for more addresses. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Gets the last activity timestamp for the supplied addresses on one of the Primary Network chains. V2 route supports querying for more addresses. 
# List UTXOs (/docs/api-reference/data-api/primary-network-utxos/getUtxosByAddresses) --- title: List UTXOs full: true _openapi: method: GET route: /v1/networks/{network}/blockchains/{blockchainId}/utxos toc: [] structuredData: headings: [] contents: - content: >- Lists UTXOs on one of the Primary Network chains for the supplied addresses. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists UTXOs on one of the Primary Network chains for the supplied addresses. # List UTXOs v2 - Supports querying for more addresses (/docs/api-reference/data-api/primary-network-utxos/getUtxosByAddressesV2) --- title: List UTXOs v2 - Supports querying for more addresses full: true _openapi: method: POST route: /v1/networks/{network}/blockchains/{blockchainId}/utxos toc: [] structuredData: headings: [] contents: - content: >- Lists UTXOs on one of the Primary Network chains for the supplied addresses. This v2 route supports increased page size and address limit. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists UTXOs on one of the Primary Network chains for the supplied addresses. This v2 route supports increased page size and address limit. # Get vertex (/docs/api-reference/data-api/primary-network-vertices/getVertexByHash) --- title: Get vertex full: true _openapi: method: GET route: /v1/networks/{network}/blockchains/{blockchainId}/vertices/{vertexHash} toc: [] structuredData: headings: [] contents: - content: Gets a single vertex on the X-Chain. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Gets a single vertex on the X-Chain. 
# List vertices by height (/docs/api-reference/data-api/primary-network-vertices/getVertexByHeight) --- title: List vertices by height full: true _openapi: method: GET route: /v1/networks/{network}/blockchains/{blockchainId}/vertices:listByHeight toc: [] structuredData: headings: [] contents: - content: Lists vertices at the given vertex height on the X-Chain. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists vertices at the given vertex height on the X-Chain. # List vertices (/docs/api-reference/data-api/primary-network-vertices/listLatestXChainVertices) --- title: List vertices full: true _openapi: method: GET route: /v1/networks/{network}/blockchains/{blockchainId}/vertices toc: [] structuredData: headings: [] contents: - content: Lists latest vertices on the X-Chain. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Lists latest vertices on the X-Chain. # Aggregate Signatures (/docs/api-reference/data-api/signature-aggregator/aggregateSignatures) --- title: Aggregate Signatures full: true _openapi: method: POST route: /v1/signatureAggregator/{network}/aggregateSignatures toc: [] structuredData: headings: [] contents: - content: Aggregates Signatures for a Warp message from Subnet validators. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Aggregates Signatures for a Warp message from Subnet validators. # Get Aggregated Signatures (/docs/api-reference/data-api/signature-aggregator/getAggregatedSignatures) --- title: Get Aggregated Signatures full: true _openapi: method: GET route: /v1/signatureAggregator/{network}/aggregateSignatures/{txHash} toc: [] structuredData: headings: [] contents: - content: Get Aggregated Signatures for a P-Chain L1 related Warp Message. 
--- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Get Aggregated Signatures for a P-Chain L1 related Warp Message. # Firewood Database (/docs/nodes/architecture/execution/firewood) --- title: Firewood Database description: Learn about Firewood, the compaction-less database optimized for efficiently storing Merkleized blockchain state. --- Firewood is a purpose-built embedded key-value store optimized for storing recent Merkleized blockchain state. Unlike traditional blockchain storage approaches that layer Merkle tries on top of generic databases, Firewood stores trie nodes directly on disk, eliminating compaction overhead and enabling superior performance. **Source Code**: [github.com/ava-labs/firewood](https://github.com/ava-labs/firewood) ## Summary Firewood reimagines blockchain state storage with several key innovations: - **Native trie storage**: Stores Merkle trie nodes directly on disk - **No compaction**: Eliminates expensive compaction cycles - **Recent state focus**: Optimized for storing recent revisions - **Disk-offset addressing**: Root addresses are disk offsets, not hashes Firewood is beta-level software. The Firewood API may change with little to no warning. ## The Problem with Traditional Approaches Most blockchain clients (including Ethereum's Geth and traditional AvalancheGo) store state using generic key-value databases like LevelDB or RocksDB. 
This creates a fundamental mismatch: ### Problems with Generic KV Stores | Issue | Description | |-------|-------------| | **Double indexing** | Trie structure is flattened into KV pairs, then re-indexed by the database | | **Compaction overhead** | LSM-trees require periodic compaction that causes latency spikes | | **Write amplification** | Data is rewritten multiple times during compaction | | **Hash-based lookup** | Finding a node requires hashing, then a database lookup | ## How Firewood Works Firewood implements a Patricia trie (a variant of radix tree) natively on disk, using the trie structure itself as the index. ### Native Trie Storage Key design decisions: - **Disk offset = address**: A node's address is simply its offset in the database file - **Direct pointers**: Branch nodes point to disk offsets of child nodes - **No hash lookup**: Finding a node doesn't require computing or looking up hashes ### Revision Management Firewood implements a persistent (immutable) trie structure that supports multiple concurrent versions. When state is updated: 1. New versions of modified nodes are created 2. Unchanged subtrees are shared between revisions 3. Old revisions remain accessible for reads ### Future-Delete Log (FDL) Firewood tracks which nodes become obsolete. This enables: - **Predictable cleanup**: No sudden compaction pauses - **Inline compaction**: Space is reclaimed as part of normal operation - **Configurable history**: Retain as many revisions as needed ## Technical Architecture ### Core Components ``` firewood/ ├── storage/ # Low-level storage primitives ├── trie/ # Patricia trie implementation ├── proposal/ # Transaction staging ├── revision/ # Version management ├── proof/ # Merkle proof generation └── ffi/ # Foreign function interface ``` ### Free Space Management Firewood manages free space similarly to heap memory allocation. When allocating space for new nodes: 1. Check free lists for appropriate size 2. 
If no suitable free space, allocate from end of file 3. When revisions expire, return space to free lists ## Key Features ### Concurrent Access Firewood efficiently synchronizes between: | Actor | Role | |-------|------| | **Writer** (Execution) | Single writer commits new state | | **Readers** (Consensus, RPC) | Multiple readers access historical state | The persistent trie structure ensures: - Readers always see consistent state - Writes are atomic from readers' perspective - No locks required for read operations ### Sequential Writes The persistent data structure enables sequential writes. New nodes are appended to the end of the file, filling free space sequentially: | State | Slot 1 | Slot 2 | Slot 3 | Slot 4 | Slot 5 | |-------|--------|--------|--------|--------|--------| | **Before** | Node A | Node B | Node C | *Free* | *Free* | | **After** | Node A | Node B | Node C | Node D | Node E | Benefits for SSDs: - Entire blocks are filled before moving to the next - Simplified garbage collection - Reduced write amplification - Increased SSD longevity ### Proofs and State Sync Firewood natively supports proof generation: | Proof Type | Description | |------------|-------------| | **Key Proof** | Proves a key exists in a specific revision | | **Range Proof** | Proves a range of keys with all values | | **Change Proof** | Proves differences between two revisions | These proofs enable efficient state sync without trusting the source. ## Ethereum Compatibility By default, Firewood uses SHA256 hashing (compatible with [MerkleDB](https://github.com/ava-labs/avalanchego/tree/master/x/merkledb)). 
For Ethereum compatibility, enable the `ethhash` feature: ```bash # Build with Ethereum-compatible hashing cargo build --features ethhash ``` This changes: - Hashing algorithm: SHA256 → Keccak256 - Account handling: Understands RLP-encoded accounts - Storage trie: Computes account storage roots correctly The `ethhash` feature has some performance overhead compared to the default configuration. ## Performance Characteristics ### Compared to LevelDB/RocksDB | Metric | Traditional | Firewood | |--------|-------------|----------| | **Write amplification** | High (compaction) | Low (no compaction) | | **Latency spikes** | Periodic (compaction) | Minimal | | **Iteration speed** | Fast | Fast (native trie) | | **Proof generation** | Requires reconstruction | Native support | | **Space efficiency** | Good after compaction | Configurable | ### Optimal Configuration For best performance: ```bash # Run directly on block device (bypass filesystem) ./firewood --device /dev/nvme0n1 # Or use regular files (easier setup) ./firewood --path /data/firewood.db ``` Running on block devices avoids filesystem overhead: - No block allocation delays - No fragmentation - No metadata management - Direct I/O to SSD ## Metrics Firewood provides comprehensive Prometheus metrics: ```text # Database size firewood_db_size_bytes # Read/write latency firewood_read_latency_seconds firewood_write_latency_seconds # Revision count firewood_revision_count # Free space firewood_free_space_bytes ``` See [METRICS.md](https://github.com/ava-labs/firewood/blob/main/METRICS.md) for the complete metrics reference. 
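The metrics above are exposed in the standard Prometheus text format, so any Prometheus-compatible scraper can consume them. As a small illustration (the sample values below are made up, and the parser handles only simple label-free lines), the exposition text can be read with a few lines of Python:

```python
def parse_metrics(text: str) -> dict[str, float]:
    """Parse simple Prometheus text-format lines (no labels) into a dict."""
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and HELP/TYPE comments
        name, _, value = line.partition(" ")
        metrics[name] = float(value)
    return metrics

sample = """
# HELP firewood_db_size_bytes Database size
firewood_db_size_bytes 1073741824
firewood_revision_count 32
"""
parsed = parse_metrics(sample)
assert parsed["firewood_db_size_bytes"] == 1073741824.0
assert parsed["firewood_revision_count"] == 32.0
```

A full scraper would use a Prometheus client library instead; this sketch only shows the shape of the data.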
## Command Line Interface Firewood includes `fwdctl` for database operations: ```bash # Create a new database fwdctl create --path /data/firewood.db # Insert key-value pairs fwdctl put --path /data/firewood.db key1 value1 # Query data fwdctl get --path /data/firewood.db key1 # Generate proofs fwdctl prove --path /data/firewood.db key1 ``` ## Integration with AvalancheGo Firewood integrates with AvalancheGo as an alternative to LevelDB/PebbleDB. Firewood integration with AvalancheGo is under active development. Check the [firewood-go-ethhash](https://github.com/ava-labs/firewood-go-ethhash) repository for Go bindings. ## Related Resources - How SAE leverages Firewood for efficient execution - AvalancheGo's overall architecture ### External Links - [Firewood Repository](https://github.com/ava-labs/firewood) - [Firewood Go Bindings](https://github.com/ava-labs/firewood-go-ethhash) - [Firewood Metrics Reference](https://github.com/ava-labs/firewood/blob/main/METRICS.md) # Execution (/docs/nodes/architecture/execution) --- title: Execution description: Understand how AvalancheGo executes transactions efficiently, including streaming async execution and optimized state storage. --- AvalancheGo is evolving its execution model to achieve higher throughput and lower latency. This section covers the advanced execution concepts being developed for Avalanche, including decoupled consensus and execution, and optimized state storage. **Active Development**: The concepts described here (Streaming Asynchronous Execution and Firewood) are under active development. Some features may be experimental or not yet deployed to mainnet. 
## Execution at a Glance | Concept | Description | |---------|-------------| | **Streaming Async Execution** | Decouples consensus from execution, allowing both to proceed concurrently | | **Firewood** | Compaction-less database optimized for Merkleized blockchain state | | **Optimistic Parallelism** | Execute multiple transactions concurrently with conflict detection | ## Key Innovations ### Decoupled Consensus and Execution Traditional blockchain execution is synchronous—transactions are executed and their results computed before the block is accepted by consensus. AvalancheGo's Streaming Asynchronous Execution (SAE) breaks this tight coupling. This separation enables: - **Higher throughput**: Consensus and execution proceed in parallel - **Reduced latency**: Blocks are accepted faster - **Better resource utilization**: No context switching between consensus and execution ### Optimized State Storage Firewood reimagines blockchain state storage by storing Merkle trie nodes directly on disk, eliminating the need for: - Generic key-value stores (LevelDB, RocksDB) - Expensive compaction cycles - Hash-based storage addressing ## Why This Matters For **node operators**: - Lower hardware requirements through better resource utilization - More predictable performance without compaction pauses - Faster state sync with native trie operations For **developers**: - Higher transaction throughput - Lower confirmation latency for users - More consistent block times For **the network**: - Better scalability without sacrificing decentralization - Improved validator experience - Foundation for future optimizations (encrypted mempools, VRF) ## Explore Further - Learn how SAE decouples consensus from execution for higher throughput - Discover the compaction-less database optimized for Merkleized state ## Related Resources - [ACP-194: Streaming Asynchronous Execution](https://github.com/avalanche-foundation/ACPs/tree/main/ACPs/194-streaming-asynchronous-execution) - Formal specification 
- [StreVM Repository](https://github.com/ava-labs/strevm) - Reference SAE implementation - [Firewood Repository](https://github.com/ava-labs/firewood) - Database implementation # Streaming Asynchronous Execution (/docs/nodes/architecture/execution/streaming-async-execution) --- title: Streaming Asynchronous Execution description: Learn how Streaming Asynchronous Execution (SAE) decouples consensus from execution to achieve higher throughput and lower latency. --- Streaming Asynchronous Execution (SAE) is a fundamental architectural change that decouples consensus from execution. By allowing these two critical processes to run concurrently, AvalancheGo can achieve significantly higher throughput without sacrificing security guarantees. **Specification**: SAE is defined in [ACP-194](https://github.com/avalanche-foundation/ACPs/tree/main/ACPs/194-streaming-asynchronous-execution). The reference implementation is [StreVM](https://github.com/ava-labs/strevm). ## Summary In traditional synchronous execution, a block must be fully executed before it can be accepted by consensus. SAE introduces a queue upon which consensus is performed, with a concurrent execution stream responsible for clearing the queue and reporting state roots to later consensus rounds. Key benefits: - **Concurrent processing**: Consensus and execution no longer block each other - **Reduced latency**: Blocks are accepted faster - **Bursty throughput**: Transactions can be eagerly accepted - **Future features**: Enables encrypted mempools and real-time VRF ## The Problem with Synchronous Execution In synchronous execution models, the block lifecycle is tightly coupled. This creates several bottlenecks: 1. **Context switching**: Nodes constantly switch between consensus and execution work 2. **Latency accumulation**: Execution time directly adds to block acceptance time 3. **Resource contention**: CPU-intensive execution competes with network-intensive consensus 4. 
**Stop-the-world events**: Database compaction or GC pauses affect both consensus and execution ## How SAE Works SAE separates the block lifecycle into distinct phases. ### Block Lifecycle | Phase | Description | |-------|-------------| | **Proposed** | Block builder creates a block with transactions | | **Validated** | Validators check that transactions can eventually be paid for | | **Accepted** | Block is accepted by consensus and enqueued for execution | | **Executed** | Block is executed by the concurrent execution stream | | **Settled** | Execution results are recorded in a later block | ### Lightweight Validation Before accepting a block, validators perform lightweight validation to ensure all transactions can eventually be executed. This validation: - Checks sender balances against worst-case bounds - Verifies the maximum required base fee - Does **not** execute transactions or compute state The worst-case bounds guarantee that transactions can pay for their fees, but do not guarantee that transactions won't revert or run out of gas during execution. ### The Execution Queue Once accepted, blocks enter a FIFO execution queue. The block executor runs in parallel with consensus, constantly processing blocks from the queue. ### Settlement Executed blocks are settled when a later block includes their execution results. The settlement includes: - **State root**: The root hash after executing the block - **Receipt root**: Merkle root of all receipts since last settlement A constant time delay ($\tau$ seconds) ensures that sporadic execution slowdowns are amortized. ## Technical Specification ### Gas Charging SAE introduces a new gas charging formula that accounts for the gas limit: $$ g_C := \max(g_U, g_L / \lambda) $$ Where: - $g_C$ = gas charged - $g_U$ = gas used - $g_L$ = gas limit - $\lambda$ = limit factor (enforces minimum charge based on limit) This prevents transactions from reserving large gas limits without paying proportionally. 
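The gas-charging rule can be made concrete with a small worked example. The numbers below are purely illustrative (they are not drawn from any deployed configuration), and integer division is an assumption of this sketch:

```python
def gas_charged(gas_used: int, gas_limit: int, limit_factor: int) -> int:
    """SAE gas charge: g_C = max(g_U, g_L / lambda).

    A transaction always pays for at least gas_limit / limit_factor gas,
    so reserving a large limit without using it is not free.
    """
    return max(gas_used, gas_limit // limit_factor)

# Illustrative numbers with lambda = 2.
# A tx that uses most of its limit pays exactly what it used:
assert gas_charged(90_000, 100_000, 2) == 90_000
# A tx that reserves 1,000,000 gas but uses only 30,000
# still pays for half its limit:
assert gas_charged(30_000, 1_000_000, 2) == 500_000
```

The second case is exactly the behavior the formula targets: padding the gas limit "just in case" now carries a proportional cost.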
### Block Size Limits The maximum block size is constrained by the settlement delay: $$ \omega_B := R \cdot \tau \cdot \lambda $$ Where: - $\omega_B$ = maximum block size (gas) - $R$ = gas capacity per second - $\tau$ = settlement delay - $\lambda$ = limit factor ### Queue Size Limits The execution queue has a maximum size to prevent unbounded growth: $$ \omega_Q := 2 \cdot \omega_B $$ Blocks that would exceed this queue size are considered invalid. ## Performance Implications ### Concurrent Execution Streams The primary benefit is that "VM time" more closely aligns with wall time: | Time | Synchronous | SAE | |------|-------------|-----| | 0-2s | Consensus | Consensus | | 1-5s | - | Execution (overlapping) | | 2-4s | Execution | Consensus | | 3-7s | - | Execution (overlapping) | | 4-6s | Consensus | - | | 6-8s | Execution | - | In synchronous mode, consensus and execution alternate. In SAE, they overlap, increasing throughput. ### Lean Execution Clients SAE enables specialized execution-only clients that can: - Rapidly execute the agreed-upon queue - Skip expensive Merkle data structure computation - Provide accelerated receipt issuance This is valuable for: - High-frequency trading applications - Custodial platforms monitoring deposits - Indexers tracking specific addresses ### Amortized Overhead Irregular events like database compaction are spread across multiple blocks instead of causing individual block delays. ## Future Features SAE provides a foundation for additional optimizations: ### Encrypted Mempools By performing execution after consensus sequencing, transactions can remain encrypted until their order is finalized. 
This reduces: - Front-running attacks - MEV extraction - Transaction censorship ### Real-Time VRF Consensus artifacts become available during execution, enabling: - Verifiable random functions during execution - Fair on-chain randomness - Gaming and lottery applications These features are not yet implemented but require SAE as a prerequisite. ## Implementation ### StreVM [StreVM](https://github.com/ava-labs/strevm) is the reference implementation of SAE for EVM blocks: ```bash # StreVM is under active development git clone https://github.com/ava-labs/strevm ``` StreVM is under active development. There are currently no guarantees about the stability of its Go APIs. ### Integration with AvalancheGo SAE integrates with AvalancheGo's existing architecture. ## Related Resources - The optimized database designed to work with SAE - How Snowman consensus works with SAE ### External Links - [ACP-194 Specification](https://github.com/avalanche-foundation/ACPs/tree/main/ACPs/194-streaming-asynchronous-execution) - [StreVM Repository](https://github.com/ava-labs/strevm) - [ACP-194 Discussion](https://github.com/avalanche-foundation/ACPs/discussions/196) # Avalanche L1 Configs (/docs/nodes/chain-configs/avalanche-l1s/avalanche-l1-configs) --- title: "Avalanche L1 Configs" description: "This page describes the configuration options available for Avalanche L1s." edit_url: https://github.com/ava-labs/avalanchego/edit/master/subnets/config.md --- # Subnet Configs It is possible to provide parameters for a Subnet. Parameters here apply to all chains in the specified Subnet. AvalancheGo looks for files specified with `{subnetID}.json` under `--subnet-config-dir` as documented [here](https://build.avax.network/docs/nodes/configure/configs-flags#subnet-configs). 
Here is an example of a Subnet config file: ```json { "validatorOnly": false, "consensusParameters": { "k": 25, "alpha": 18 } } ``` ## Parameters ### Private Subnet #### `validatorOnly` (bool) If `true`, this node does not expose Subnet blockchain contents to non-validators via P2P messages. Defaults to `false`. Avalanche Subnets are public by default: every node can sync and listen to ongoing transactions/blocks in a Subnet, even if it is not validating that Subnet. Subnet validators can choose not to publish the contents of their blockchains via this configuration. If a node sets `validatorOnly` to `true`, the node exchanges messages only with this Subnet's validators; other peers will not be able to learn the contents of this Subnet from this node. This is a node-specific configuration. Every validator of this Subnet has to use this configuration in order to create a fully private Subnet. #### `allowedNodes` (string list) If `validatorOnly` is `true`, this allows the explicitly specified NodeIDs to sync the Subnet regardless of validator status. Defaults to empty. This is a node-specific configuration. Every validator of this Subnet has to use this configuration in order to properly allow a node into the private Subnet. ### Consensus Parameters Subnet configs support loading new consensus parameters. JSON keys are different from their matching `CLI` keys. These parameters must be grouped under the `consensusParameters` key. The consensus parameters of a Subnet default to the same values used for the Primary Network, which are given in [CLI Snow Parameters](https://build.avax.network/docs/nodes/configure/configs-flags#snow-parameters). 
| CLI Key | JSON Key | | :------------------------------- | :-------------------- | | `--snow-sample-size` | `k` | | `--snow-quorum-size` | `alpha` | | `--snow-commit-threshold` | `beta` | | `--snow-concurrent-repolls` | `concurrentRepolls` | | `--snow-optimal-processing` | `optimalProcessing` | | `--snow-max-processing` | `maxOutstandingItems` | | `--snow-max-time-processing` | `maxItemProcessingTime` | | `--snow-avalanche-batch-size` | `batchSize` | | `--snow-avalanche-num-parents` | `parentSize` | #### `proposerMinBlockDelay` (duration) The minimum delay performed when building snowman++ blocks. Defaults to 1 second. As one of the ways to control network congestion, Snowman++ will only build a block `proposerMinBlockDelay` after the parent block's timestamp. Some high-performance custom VMs may find this too strict. This flag allows tuning the frequency at which blocks are built. ### Gossip Configs It's possible to define different Gossip configurations for each Subnet without changing the values for the Primary Network. JSON keys of these parameters are different from their matching `CLI` keys. These parameters default to the same values used for the Primary Network. For more information see [CLI Gossip Configs](https://build.avax.network/docs/nodes/configure/configs-flags#gossiping). 
| CLI Key | JSON Key | | :------------------------------------------------------ | :------------------------------------- | | --consensus-accepted-frontier-gossip-validator-size | gossipAcceptedFrontierValidatorSize | | --consensus-accepted-frontier-gossip-non-validator-size | gossipAcceptedFrontierNonValidatorSize | | --consensus-accepted-frontier-gossip-peer-size | gossipAcceptedFrontierPeerSize | | --consensus-on-accept-gossip-validator-size | gossipOnAcceptValidatorSize | | --consensus-on-accept-gossip-non-validator-size | gossipOnAcceptNonValidatorSize | | --consensus-on-accept-gossip-peer-size | gossipOnAcceptPeerSize | # Subnet-EVM Configs (/docs/nodes/chain-configs/avalanche-l1s/subnet-evm) --- title: "Subnet-EVM Configs" description: "This page describes the configuration options available for the Subnet-EVM." edit_url: https://github.com/ava-labs/avalanchego/edit/master/graft/subnet-evm/plugin/evm/config/config.md --- # Subnet-EVM Configuration > **Note**: These are the configuration options available in the Subnet-EVM codebase. To set these values, you need to create a configuration file at `~/.avalanchego/configs/chains//config.json`. > > For the AvalancheGo node configuration options, see the AvalancheGo Configuration page. This document describes all configuration options available for Subnet-EVM. ## Example Configuration ```json { "eth-apis": ["eth", "eth-filter", "net", "web3"], "pruning-enabled": true, "commit-interval": 4096, "trie-clean-cache": 512, "trie-dirty-cache": 512, "snapshot-cache": 256, "rpc-gas-cap": 50000000, "log-level": "info", "metrics-expensive-enabled": true, "continuous-profiler-dir": "./profiles", "state-sync-enabled": false, "accepted-cache-size": 32 } ``` ## Configuration Format Configuration is provided as a JSON object. All fields are optional unless otherwise specified. 
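Because every field is optional, a typo'd key or a wrong JSON type is silently easy to ship. A quick pre-deployment sanity check can catch the most common mistakes. This is a minimal sketch: the `check_config` helper and the field subset it knows about are illustrative, not part of Subnet-EVM:

```python
import json

# A small subset of fields from the tables below, with expected JSON types.
EXPECTED_TYPES = {
    "pruning-enabled": bool,
    "commit-interval": int,
    "trie-clean-cache": int,
    "rpc-gas-cap": int,
    "log-level": str,
    "eth-apis": list,
}

def check_config(raw: str) -> list[str]:
    """Return type-mismatch messages for known fields; unknown keys are ignored."""
    cfg = json.loads(raw)
    errors = []
    for key, value in cfg.items():
        expected = EXPECTED_TYPES.get(key)
        # bool is a subclass of int in Python, so reject true/false
        # where a number is expected.
        if expected is int and isinstance(value, bool):
            errors.append(f"{key}: expected number, got bool")
        elif expected is not None and not isinstance(value, expected):
            errors.append(f"{key}: expected {expected.__name__}")
    return errors

assert check_config('{"pruning-enabled": true, "commit-interval": 4096}') == []
assert check_config('{"commit-interval": "4096"}') == ["commit-interval: expected int"]
```

A real deployment would validate against the full option set; the point here is only that string-vs-number mistakes in a hand-edited `config.json` are cheap to catch before the node starts.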
## API Configuration ### Ethereum APIs | Option | Type | Description | Default | |--------|------|-------------|---------| | `eth-apis` | array of strings | List of Ethereum services that should be enabled | `["eth", "eth-filter", "net", "web3", "internal-eth", "internal-blockchain", "internal-transaction"]` | ### Subnet-EVM Specific APIs | Option | Type | Description | Default | |--------|------|-------------|---------| | `validators-api-enabled` | bool | Enable the validators API | `true` | | `admin-api-enabled` | bool | Enable the admin API for administrative operations | `false` | | `admin-api-dir` | string | Directory for admin API operations | - | | `warp-api-enabled` | bool | Enable the Warp API for cross-chain messaging | `false` | ### API Limits and Security | Option | Type | Description | Default | |--------|------|-------------|---------| | `rpc-gas-cap` | uint64 | Maximum gas limit for RPC calls | `50,000,000` | | `rpc-tx-fee-cap` | float64 | Maximum transaction fee cap in AVAX | `100` | | `api-max-duration` | duration | Maximum duration for API calls (0 = no limit) | `0` | | `api-max-blocks-per-request` | int64 | Maximum number of blocks per getLogs request (0 = no limit) | `0` | | `http-body-limit` | uint64 | Maximum size of HTTP request bodies | - | | `batch-request-limit` | uint64 | Maximum number of requests that can be batched in an RPC call. For no limit, set either this or `batch-response-max-size` to 0 | `1000` | | `batch-response-max-size` | uint64 | Maximum size (in bytes) of response that can be returned from a batched RPC call. For no limit, set either this or `batch-request-limit` to 0. 
| `25 MB` | ### WebSocket Settings | Option | Type | Description | Default | |--------|------|-------------|---------| | `ws-cpu-refill-rate` | duration | Rate at which WebSocket CPU usage quota is refilled (0 = no limit) | `0` | | `ws-cpu-max-stored` | duration | Maximum stored WebSocket CPU usage quota (0 = no limit) | `0` | ## Cache Configuration ### Trie Caches | Option | Type | Description | Default | |--------|------|-------------|---------| | `trie-clean-cache` | int | Size of the trie clean cache in MB | `512` | | `trie-dirty-cache` | int | Size of the trie dirty cache in MB | `512` | | `trie-dirty-commit-target` | int | Memory limit to target in the dirty cache before performing a commit in MB | `20` | | `trie-prefetcher-parallelism` | int | Maximum concurrent disk reads trie prefetcher should perform | `16` | ### Other Caches | Option | Type | Description | Default | |--------|------|-------------|---------| | `snapshot-cache` | int | Size of the snapshot disk layer clean cache in MB | `256` | | `accepted-cache-size` | int | Depth to keep in the accepted headers and logs cache (blocks) | `32` | | `state-sync-server-trie-cache` | int | Trie cache size for state sync server in MB | `64` | ## Ethereum Settings ### Transaction Processing | Option | Type | Description | Default | |--------|------|-------------|---------| | `preimages-enabled` | bool | Enable preimage recording | `false` | | `allow-unfinalized-queries` | bool | Allow queries for unfinalized blocks | `false` | | `allow-unprotected-txs` | bool | Allow unprotected transactions (without EIP-155) | `false` | | `allow-unprotected-tx-hashes` | array | List of specific transaction hashes allowed to be unprotected | EIP-1820 registry tx | | `local-txs-enabled` | bool | Enable treatment of transactions from local accounts as local | `false` | ### Snapshots | Option | Type | Description | Default | |--------|------|-------------|---------| | `snapshot-wait` | bool | Wait for snapshot 
generation on startup | `false` | | `snapshot-verification-enabled` | bool | Enable snapshot verification | `false` | ## Pruning and State Management ### Basic Pruning | Option | Type | Description | Default | |--------|------|-------------|---------| | `pruning-enabled` | bool | Enable state pruning to save disk space | `true` | | `commit-interval` | uint64 | Interval at which to persist EVM and atomic tries (blocks) | `4096` | | `accepted-queue-limit` | int | Maximum blocks to queue before blocking during acceptance | `64` | ### State Reconstruction | Option | Type | Description | Default | |--------|------|-------------|---------| | `allow-missing-tries` | bool | Suppress warnings about incomplete trie index | `false` | | `populate-missing-tries` | uint64 | Starting block for re-populating missing tries (null = disabled) | `null` | | `populate-missing-tries-parallelism` | int | Concurrent readers for re-populating missing tries | `1024` | ### Offline Pruning | Option | Type | Description | Default | |--------|------|-------------|---------| | `offline-pruning-enabled` | bool | Enable offline pruning | `false` | | `offline-pruning-bloom-filter-size` | uint64 | Bloom filter size for offline pruning in MB | `512` | | `offline-pruning-data-directory` | string | Directory for offline pruning data | - | ### Historical Data | Option | Type | Description | Default | |--------|------|-------------|---------| | `historical-proof-query-window` | uint64 | Number of blocks before last accepted for proof queries (archive mode only, ~24 hours) | `43200` | | `state-history` | uint64 | Number of most recent states that are accessible on disk (pruning mode only) | `32` | ## Transaction Pool Configuration | Option | Type | Description | Default | |--------|------|-------------|---------| | `tx-pool-price-limit` | uint64 | Minimum gas price for transaction acceptance | - | | `tx-pool-price-bump` | uint64 | Minimum price bump percentage for transaction replacement | - | |
`tx-pool-account-slots` | uint64 | Maximum number of executable transaction slots per account | - | | `tx-pool-global-slots` | uint64 | Maximum number of executable transaction slots for all accounts | - | | `tx-pool-account-queue` | uint64 | Maximum number of non-executable transaction slots per account | - | | `tx-pool-global-queue` | uint64 | Maximum number of non-executable transaction slots for all accounts | - | | `tx-pool-lifetime` | duration | Maximum time transactions can stay in the pool | - | ## Gossip Configuration ### Push Gossip Settings | Option | Type | Description | Default | |--------|------|-------------|---------| | `push-gossip-percent-stake` | float64 | Percentage of total stake to push gossip to (range: [0, 1]) | `0.9` | | `push-gossip-num-validators` | int | Number of validators to push gossip to | `100` | | `push-gossip-num-peers` | int | Number of non-validator peers to push gossip to | `0` | ### Regossip Settings | Option | Type | Description | Default | |--------|------|-------------|---------| | `push-regossip-num-validators` | int | Number of validators to regossip to | `10` | | `push-regossip-num-peers` | int | Number of non-validator peers to regossip to | `0` | | `priority-regossip-addresses` | array | Addresses to prioritize for regossip | - | ### Timing Configuration | Option | Type | Description | Default | |--------|------|-------------|---------| | `push-gossip-frequency` | duration | Frequency of push gossip | `100ms` | | `pull-gossip-frequency` | duration | Frequency of pull gossip | `1s` | | `regossip-frequency` | duration | Frequency of regossip | `30s` | ## Logging and Monitoring ### Logging | Option | Type | Description | Default | |--------|------|-------------|---------| | `log-level` | string | Logging level (trace, debug, info, warn, error, crit) | `"info"` | | `log-json-format` | bool | Use JSON format for logs | `false` | ### Profiling | Option | Type | Description | Default | 
|--------|------|-------------|---------| | `continuous-profiler-dir` | string | Directory for continuous profiler output (empty = disabled) | - | | `continuous-profiler-frequency` | duration | Frequency to run continuous profiler | `15m` | | `continuous-profiler-max-files` | int | Maximum number of profiler files to maintain | `5` | ### Metrics | Option | Type | Description | Default | |--------|------|-------------|---------| | `metrics-expensive-enabled` | bool | Enable expensive debug-level metrics; this includes Firewood metrics | `true` | ## Security and Access ### Keystore | Option | Type | Description | Default | |--------|------|-------------|---------| | `keystore-directory` | string | Directory for keystore files (absolute or relative path) | - | | `keystore-external-signer` | string | External signer configuration | - | | `keystore-insecure-unlock-allowed` | bool | Allow insecure account unlocking | `false` | ### Fee Configuration | Option | Type | Description | Default | |--------|------|-------------|---------| | `feeRecipient` | string | Address to send transaction fees to (leave empty if not supported) | - | ## Network and Sync ### Network | Option | Type | Description | Default | |--------|------|-------------|---------| | `max-outbound-active-requests` | int64 | Maximum number of outbound active requests for VM2VM network | `16` | ### State Sync | Option | Type | Description | Default | |--------|------|-------------|---------| | `state-sync-enabled` | bool | Enable state sync | `false` | | `state-sync-skip-resume` | bool | Force state sync to use highest available summary block | `false` | | `state-sync-ids` | string | Comma-separated list of state sync IDs | - | | `state-sync-commit-interval` | uint64 | Commit interval for state sync (blocks) | `16384` | | `state-sync-min-blocks` | uint64 | Minimum blocks ahead required for state sync | `300000` | | `state-sync-request-size` | uint16 | Number of key/values to request per state sync request | 
`1024` | ## Database Configuration > **WARNING**: `firewood` and `path` schemes are untested in production. Using `path` is strongly discouraged. To use `firewood`, you must also set the following config options: > > - `pruning-enabled: true` (enabled by default) > - `state-sync-enabled: false` > - `snapshot-cache: 0` Failing to set these options will result in errors on VM initialization. Additionally, not all APIs are available; see the relevant API sections for details. | Option | Type | Description | Default | |--------|------|-------------|---------| | `database-type` | string | Type of database to use | `"pebbledb"` | | `database-path` | string | Path to database directory | - | | `database-read-only` | bool | Open database in read-only mode | `false` | | `database-config` | string | Inline database configuration | - | | `database-config-file` | string | Path to database configuration file | - | | `use-standalone-database` | bool | Use standalone database instead of shared one | - | | `inspect-database` | bool | Inspect database on startup | `false` | | `state-scheme` | string | EXPERIMENTAL: specifies the database scheme to store state data; can be one of `hash` or `firewood` | `hash` | ## Transaction Indexing | Option | Type | Description | Default | |--------|------|-------------|---------| | `transaction-history` | uint64 | Maximum number of blocks from head whose transaction indices are reserved (0 = no limit) | - | | `tx-lookup-limit` | uint64 | **Deprecated**: use `transaction-history` instead | - | | `skip-tx-indexing` | bool | Skip indexing transactions entirely | `false` | ## Warp Configuration | Option | Type | Description | Default | |--------|------|-------------|---------| | `warp-off-chain-messages` | array | Off-chain messages the node should be willing to sign | - | | `prune-warp-db-enabled` | bool | Clear warp database on startup | `false` | ## Miscellaneous | Option | Type | Description | Default |
|--------|------|-------------|---------| | `airdrop` | string | Path to airdrop file | - | | `skip-upgrade-check` | bool | Skip checking that upgrades occur before last accepted block ⚠️ **Warning**: Only use when you understand the implications | `false` | | `min-delay-target` | integer | The minimum delay between blocks (in milliseconds) that this node will attempt to use when creating blocks | Parent block's target | ## Gossip Constants The following constants are defined for transaction gossip behavior and cannot be configured without a custom build of Subnet-EVM: | Constant | Type | Description | Value | |----------|------|-------------|-------| | Bloom Filter Min Target Elements | int | Minimum target elements for bloom filter | `8,192` | | Bloom Filter Target False Positive Rate | float | Target false positive rate | `1%` | | Bloom Filter Reset False Positive Rate | float | Reset false positive rate | `5%` | | Bloom Filter Churn Multiplier | int | Churn multiplier | `3` | | Push Gossip Discarded Elements | int | Number of discarded elements | `16,384` | | Tx Gossip Target Message Size | size | Target message size for transaction gossip | `20 KiB` | | Tx Gossip Throttling Period | duration | Throttling period | `10s` | | Tx Gossip Throttling Limit | int | Throttling limit | `2` | | Tx Gossip Poll Size | int | Poll size | `1` | ## Validation Notes - Cannot enable `populate-missing-tries` while pruning or offline pruning is enabled - Cannot run offline pruning while pruning is disabled - Commit interval must be non-zero when pruning is enabled - `push-gossip-percent-stake` must be in range `[0, 1]` - Some settings may require node restart to take effect # C-Chain Configs (/docs/nodes/chain-configs/primary-network/c-chain) --- title: "C-Chain Configs" description: "This page describes the configuration options available for the C-Chain." 
edit_url: https://github.com/ava-labs/avalanchego/edit/master/graft/coreth/plugin/evm/config/config.md --- {/* markdownlint-disable MD041 MD033 */} > **Note**: These are the configuration options available in the coreth codebase. To set these values, you need to create a configuration file at `{chain-config-dir}/C/config.json`. This file does not exist by default. > > For example if `chain-config-dir` has the default value which is `$HOME/.avalanchego/configs/chains`, then `config.json` should be placed at `$HOME/.avalanchego/configs/chains/C/config.json`. > > For the AvalancheGo node configuration options, see the AvalancheGo Configuration page. This document describes all configuration options available for coreth. The C-Chain config is printed out in the log when a node starts. Default values for each config flag are specified below. Default values are overridden only if specified in the given config file. It is recommended to only provide values which are different from the default, as that makes the config more resilient to future default changes. Otherwise, if defaults change, your node will remain with the old values, which might adversely affect your node operation. ## Example Configuration ```json { "eth-apis": ["eth", "eth-filter", "net", "web3"], "pruning-enabled": true, "commit-interval": 4096, "trie-clean-cache": 512, "trie-dirty-cache": 512, "snapshot-cache": 256, "rpc-gas-cap": 50000000, "log-level": "info", "metrics-expensive-enabled": true, "continuous-profiler-dir": "./profiles", "state-sync-enabled": false, "accepted-cache-size": 32 } ``` ## Configuration Format Configuration is provided as a JSON object. All fields are optional unless otherwise specified. 
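Following the recommendation to override only non-default values, a minimal `config.json` might contain just the handful of settings you actually change (the values below are illustrative, not recommendations):

```json
{
  "log-level": "debug",
  "rpc-gas-cap": 75000000
}
```

Every option omitted here keeps its default and will automatically track any future default changes.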
## API Configuration ### Ethereum APIs | Option | Type | Description | Default | |--------|------|-------------|---------| | `eth-apis` | array of strings | List of Ethereum services that should be enabled | `["eth", "eth-filter", "net", "web3", "internal-eth", "internal-blockchain", "internal-transaction"]` | | `eth` | bool | Adds the `eth_coinbase` and `eth_etherbase` RPC calls to the `eth_*` namespace. | `true` | | `eth-filter` | bool | Enables the public filter API for the `eth_*` namespace and adds the following RPC calls (see [Ethereum JSON-RPC API documentation](https://eth.wiki/json-rpc/API) for complete documentation):
- `eth_newPendingTransactionFilter`
- `eth_newPendingTransactions`
- `eth_newAcceptedTransactions`
- `eth_newBlockFilter`
- `eth_newHeads`
- `eth_logs`
- `eth_newFilter`
- `eth_getLogs`
- `eth_uninstallFilter`
- `eth_getFilterLogs`
- `eth_getFilterChanges`
| `true` | | `admin` | bool | Adds the `admin_importChain` and `admin_exportChain` RPC calls to the `admin_*` namespace | `false` | | `debug` | bool | Adds the following RPC calls to the `debug_*` namespace.
- `debug_dumpBlock`
- `debug_accountRange`
- `debug_preimage`
- `debug_getBadBlocks`
- `debug_storageRangeAt`
- `debug_getModifiedAccountsByNumber`
- `debug_getModifiedAccountsByHash`
- `debug_getAccessibleState`
The following RPC calls are disabled for any nodes with `state-scheme = firewood`:
- `debug_storageRangeAt`
- `debug_getModifiedAccountsByNumber`
- `debug_getModifiedAccountsByHash`
| `false` | | `net` | bool | Adds the following RPC calls to the `net_*` namespace.
- `net_listening`
- `net_peerCount`
- `net_version`
Note: Coreth is a virtual machine and does not have direct access to the networking layer, so `net_listening` always returns true and `net_peerCount` always returns 0. For accurate metrics on the network layer, users should use the AvalancheGo APIs. | `true` | | `debug-tracer` | bool | Adds the following RPC calls to the `debug_*` namespace.
- `debug_traceChain`
- `debug_traceBlockByNumber`
- `debug_traceBlockByHash`
- `debug_traceBlock`
- `debug_traceBadBlock`
- `debug_intermediateRoots`
- `debug_traceTransaction`
- `debug_traceCall` | `false` | | `web3` | bool | Adds the `web3_clientVersion` and `web3_sha3` RPC calls to the `web3_*` namespace | `true` | | `internal-eth` | bool | Adds the following RPC calls to the `eth_*` namespace.
- `eth_gasPrice`
- `eth_baseFee`
- `eth_maxPriorityFeePerGas`
- `eth_feeHistory` | `true` | | `internal-blockchain` | bool | Adds the following RPC calls to the `eth_*` namespace.
- `eth_chainId`
- `eth_blockNumber`
- `eth_getBalance`
- `eth_getProof`
- `eth_getHeaderByNumber`
- `eth_getHeaderByHash`
- `eth_getBlockByNumber`
- `eth_getBlockByHash`
- `eth_getUncleBlockByNumberAndIndex`
- `eth_getUncleBlockByBlockHashAndIndex`
- `eth_getUncleCountByBlockNumber`
- `eth_getUncleCountByBlockHash`
- `eth_getCode`
- `eth_getStorageAt`
- `eth_call`
- `eth_estimateGas`
- `eth_createAccessList`
`eth_getProof` is disabled for any node with `state-scheme = firewood` | `true` | | `internal-transaction` | bool | Adds the following RPC calls to the `eth_*` namespace.
- `eth_getBlockTransactionCountByNumber`
- `eth_getBlockTransactionCountByHash`
- `eth_getTransactionByBlockNumberAndIndex`
- `eth_getTransactionByBlockHashAndIndex`
- `eth_getRawTransactionByBlockNumberAndIndex`
- `eth_getRawTransactionByBlockHashAndIndex`
- `eth_getTransactionCount`
- `eth_getTransactionByHash`
- `eth_getRawTransactionByHash`
- `eth_getTransactionReceipt`
- `eth_sendTransaction`
- `eth_fillTransaction`
- `eth_sendRawTransaction`
- `eth_sign`
- `eth_signTransaction`
- `eth_pendingTransactions`
- `eth_resend` | `true` | | `internal-tx-pool` | bool | Adds the following RPC calls to the `txpool_*` namespace.
- `txpool_content`
- `txpool_contentFrom`
- `txpool_status`
- `txpool_inspect` | `false` | | `internal-debug` | bool | Adds the following RPC calls to the `debug_*` namespace.
- `debug_getHeaderRlp`
- `debug_getBlockRlp`
- `debug_printBlock`
- `debug_chaindbProperty`
- `debug_chaindbCompact` | `false` | | `debug-handler` | bool | Adds the following RPC calls to the `debug_*` namespace.
- `debug_verbosity`
- `debug_vmodule`
- `debug_backtraceAt`
- `debug_memStats`
- `debug_gcStats`
- `debug_blockProfile`
- `debug_setBlockProfileRate`
- `debug_writeBlockProfile`
- `debug_mutexProfile`
- `debug_setMutexProfileFraction`
- `debug_writeMutexProfile`
- `debug_writeMemProfile`
- `debug_stacks`
- `debug_freeOSMemory`
- `debug_setGCPercent` | `false` | | `internal-account` | bool | Adds the `eth_accounts` RPC call to the `eth_*` namespace | `true` | | `internal-personal` | bool | Adds the following RPC calls to the `personal_*` namespace.
- `personal_listAccounts`
- `personal_listWallets`
- `personal_openWallet`
- `personal_deriveAccount`
- `personal_newAccount`
- `personal_importRawKey`
- `personal_unlockAccount`
- `personal_lockAccount`
- `personal_sendTransaction`
- `personal_signTransaction`
- `personal_sign`
- `personal_ecRecover`
- `personal_signAndSendTransaction`
- `personal_initializeWallet`
- `personal_unpair` | `false` | ### Enabling Avalanche Specific APIs | Option | Type | Description | Default | |--------|------|-------------|---------| | `admin-api-enabled` | bool | Enables the Admin API | `false` | | `admin-api-dir` | string | Specifies the directory for the Admin API to use to store CPU/Mem/Lock Profiles | `""` | | `warp-api-enabled` | bool | Enable the Warp API for cross-chain messaging | `false` | ### API Limits and Security | Option | Type | Description | Default | |--------|------|-------------|---------| | `rpc-gas-cap` | uint64 | Maximum gas limit for RPC calls | `50,000,000` | | `rpc-tx-fee-cap` | float64 | Maximum transaction fee cap in AVAX | `100` | | `api-max-duration` | duration | Maximum duration for API calls (0 = no limit) | `0` | | `api-max-blocks-per-request` | int64 | Maximum number of blocks per getLogs request (0 = no limit) | `0` | | `http-body-limit` | uint64 | Maximum size of HTTP request bodies (0 = no limit) | `0` | | `batch-request-limit` | uint64 | Maximum number of requests that can be batched in an RPC call. For no limit, set either this or `batch-response-max-size` to 0 | `1000` | | `batch-response-max-size` | uint64 | Maximum size (in bytes) of response that can be returned from a batched RPC call. For no limit, set either this or `batch-request-limit` to 0. 
| `25 MB` | ### WebSocket Settings | Option | Type | Description | Default | |--------|------|-------------|---------| | `ws-cpu-refill-rate` | duration | Rate at which WebSocket CPU usage quota is refilled (0 = no limit) | `0` | | `ws-cpu-max-stored` | duration | Maximum stored WebSocket CPU usage quota (0 = no limit) | `0` | ## Cache Configuration ### Trie Caches | Option | Type | Description | Default | |--------|------|-------------|---------| | `trie-clean-cache` | int | Size of the trie clean cache in MB | `512` | | `trie-dirty-cache` | int | Size of the trie dirty cache in MB | `512` | | `trie-dirty-commit-target` | int | Memory limit to target in the dirty cache before performing a commit in MB | `20` | | `trie-prefetcher-parallelism` | int | Maximum concurrent disk reads trie prefetcher should perform | `16` | ### Other Caches | Option | Type | Description | Default | |--------|------|-------------|---------| | `snapshot-cache` | int | Size of the snapshot disk layer clean cache in MB | `256` | | `accepted-cache-size` | int | Depth to keep in the accepted headers and logs cache (blocks) | `32` | | `state-sync-server-trie-cache` | int | Trie cache size for state sync server in MB | `64` | ## Ethereum Settings ### Transaction Processing > **⚠️ WARNING: `allow-unfinalized-queries` should likely be set to `false` in production.** Enabling this flag can result in a confusing/unreliable user experience. It should be enabled only when users are expected to have knowledge of how Snow* consensus finalizes blocks. > Unlike chains with reorgs and forks that require block confirmations, Avalanche **does not** increase the confidence that a block will be finalized based on the depth of the chain. Waiting for additional blocks **does not** confirm finalization. Enabling this flag removes the guarantee that the node only exposes finalized blocks, requiring users to guess whether a block is finalized.
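For operators who do understand Snow* finality and explicitly accept the trade-off described in the warning above (for example, internal indexing tooling), opting in is a single flag. This is a hypothetical example, not a recommendation:

```json
{
  "allow-unfinalized-queries": true
}
```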
| Option | Type | Description | Default | |--------|------|-------------|---------| | `preimages-enabled` | bool | Enable preimage recording | `false` | | `allow-unfinalized-queries` | bool | Allow queries for unfinalized blocks | `false` | | `allow-unprotected-txs` | bool | Allow unprotected transactions (without EIP-155) | `false` | | `allow-unprotected-tx-hashes` | \[\]TxHash | Specifies an array of transaction hashes that should be allowed to bypass replay protection. This flag is intended for node operators that want to explicitly allow specific transactions to be issued through their API. | an empty list | | `local-txs-enabled` | bool | Enable treatment of transactions from local accounts as local | `false` | ### Snapshots | Option | Type | Description | Default | |--------|------|-------------|---------| | `snapshot-wait` | bool | Wait for snapshot generation on startup | `false` | | `snapshot-verification-enabled` | bool | Enable snapshot verification | `false` | ## Pruning and State Management > **Note**: If a node is ever run with `pruning-enabled` as `false` (archival mode), setting `pruning-enabled` to `true` will result in a warning and the node will shut down. This is to protect against unintentional misconfigurations of an archival node. To override this and switch to pruning mode, in addition to `pruning-enabled: true`, `allow-missing-tries` should be set to `true` as well. 
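To intentionally convert a node previously run in archival mode to pruning mode, as the note above describes, both options must be set together:

```json
{
  "pruning-enabled": true,
  "allow-missing-tries": true
}
```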
### Basic Pruning | Option | Type | Description | Default | |--------|------|-------------|---------| | `pruning-enabled` | bool | Enable state pruning to save disk space | `true` | | `commit-interval` | uint64 | Interval at which to persist EVM and atomic tries (blocks) | `4096` | | `accepted-queue-limit` | int | Maximum blocks to queue before blocking during acceptance | `64` | ### State Reconstruction | Option | Type | Description | Default | |--------|------|-------------|---------| | `allow-missing-tries` | bool | Suppress warnings about incomplete trie index | `false` | | `populate-missing-tries` | uint64 | Starting block for re-populating missing tries (null = disabled) | `null` | | `populate-missing-tries-parallelism` | int | Concurrent readers for re-populating missing tries | `1024` | ### Offline Pruning > **Note**: If offline pruning is enabled, it will run on startup and block until it completes (approximately one hour on Mainnet). This will reduce the size of the database by deleting old trie nodes. **While performing offline pruning, your node will not be able to process blocks and will be considered offline.** While ongoing, the pruning process consumes a small amount of additional disk space (for deletion markers and the bloom filter). For more information see the [disk space considerations documentation](https://build.avax.network/docs/nodes/maintain/reduce-disk-usage#disk-space-considerations). Since offline pruning deletes old state data, it should not be run on nodes that need to support archival API requests. It is meant to be run manually: run the node once with this flag set to `true`, then set it back to `false` before the next run.
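The manual workflow above can be sketched as a config for the pruning run (the data directory path is illustrative); after the run completes, set `offline-pruning-enabled` back to `false` before restarting the node:

```json
{
  "offline-pruning-enabled": true,
  "offline-pruning-data-directory": "/tmp/c-chain-offline-pruning"
}
```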
| Option | Type | Description | Default | |--------|------|-------------|---------| | `offline-pruning-enabled` | bool | Enable offline pruning | `false` | | `offline-pruning-bloom-filter-size` | uint64 | Bloom filter size for offline pruning in MB | `512` | | `offline-pruning-data-directory` | string | Directory for offline pruning data | - | ### Historical Data | Option | Type | Description | Default | |--------|------|-------------|---------| | `historical-proof-query-window` | uint64 | Number of blocks before last accepted for proof queries (archive mode only, ~24 hours) | `43200` | | `state-history` | uint64 | Number of most recent states that are accessible on disk (pruning mode only) | `32` | ## Transaction Pool Configuration | Option | Type | Description | Default | |--------|------|-------------|---------| | `tx-pool-price-limit` | uint64 | Minimum gas price for transaction acceptance | `1` | | `tx-pool-price-bump` | uint64 | Minimum price bump percentage for transaction replacement | `10%` | | `tx-pool-account-slots` | uint64 | Maximum number of executable transaction slots per account | `16` | | `tx-pool-global-slots` | uint64 | Maximum number of executable transaction slots for all accounts | `5120` | | `tx-pool-account-queue` | uint64 | Maximum number of non-executable transaction slots per account | `64` | | `tx-pool-global-queue` | uint64 | Maximum number of non-executable transaction slots for all accounts | `1024` | | `tx-pool-lifetime` | duration | Maximum duration in nanoseconds a non-executable transaction will be allowed in the pool | `600000000000` (10 minutes) | | `price-options-slow-fee-percentage` | integer | Percentage to apply for slow fee estimation | `95` | | `price-options-fast-fee-percentage` | integer | Percentage to apply for fast fee estimation | `105` | | `price-options-max-tip` | integer | Maximum tip in wei for fee estimation | `20000000000` (20 Gwei) | ## Gossip Configuration ### Push Gossip Settings | Option | Type | Description | Default |
|--------|------|-------------|---------| | `push-gossip-percent-stake` | float64 | Percentage of total stake to push gossip to (range: [0, 1]) | `0.9` | | `push-gossip-num-validators` | int | Number of validators to push gossip to | `100` | | `push-gossip-num-peers` | int | Number of non-validator peers to push gossip to | `0` | ### Regossip Settings | Option | Type | Description | Default | |--------|------|-------------|---------| | `push-regossip-num-validators` | int | Number of validators to regossip to | `10` | | `push-regossip-num-peers` | int | Number of non-validator peers to regossip to | `0` | ### Timing Configuration | Option | Type | Description | Default | |--------|------|-------------|---------| | `push-gossip-frequency` | duration | Frequency of push gossip | `100ms` | | `pull-gossip-frequency` | duration | Frequency of pull gossip | `1s` | | `regossip-frequency` | duration | Frequency of regossip | `30s` | ## Logging and Monitoring ### Logging | Option | Type | Description | Default | |--------|------|-------------|---------| | `log-level` | string | Logging level (trace, debug, info, warn, error, crit) | `"info"` | | `log-json-format` | bool | Use JSON format for logs | `false` | ### Profiling | Option | Type | Description | Default | |--------|------|-------------|---------| | `continuous-profiler-dir` | string | Directory for continuous profiler output (empty = disabled) | `""` | | `continuous-profiler-frequency` | duration | Frequency to run continuous profiler | `15m` | | `continuous-profiler-max-files` | int | Maximum number of profiler files to maintain | `5` | ### Metrics | Option | Type | Description | Default | |--------|------|-------------|---------| | `metrics-expensive-enabled` | bool | Enable expensive debug-level metrics | `true` | ## Security and Access ### Keystore | Option | Type | Description | Default | |--------|------|-------------|---------| | `keystore-directory` | string | Directory for keystore files (absolute or 
relative path); if empty, uses a temporary directory at `coreth-keystore` | `""` | | `keystore-external-signer` | string | Specifies an external URI for a clef-type signer | `""` | | `keystore-insecure-unlock-allowed` | bool | Allow insecure account unlocking | `false` | ## Network and Sync ### Network | Option | Type | Description | Default | |--------|------|-------------|---------| | `max-outbound-active-requests` | int64 | Maximum number of outbound active requests for VM2VM network | `16` | ### State Sync > **Note:** If state-sync is enabled, the node will download chain state from peers up to a recent block near tip, then proceed with normal bootstrapping. Please note that if you need historical data, state sync isn't the right option. However, it is sufficient if you are just running a validator. | Option | Type | Description | Default | |--------|------|-------------|---------| | `state-sync-enabled` | bool | Enable state sync | `false` | | `state-sync-skip-resume` | bool | Force state sync to use highest available summary block | `false` | | `state-sync-ids` | string | Comma-separated list of state sync IDs; if not specified (or empty), peers are selected at random | `""` | | `state-sync-commit-interval` | uint64 | Commit interval for state sync (blocks) | `16384` | | `state-sync-min-blocks` | uint64 | Minimum blocks ahead required for state sync | `300000` | | `state-sync-request-size` | uint16 | Number of key/values to request per state sync request | `1024` | ## Database Configuration > **WARNING**: `firewood` and `path` schemes are untested in production. Using `path` is strongly discouraged. To use `firewood`, you must also set the following config options: > > - `populate-missing-tries: nil` > - `snapshot-cache: 0` Failing to set these options will result in errors on VM initialization. Additionally, not all APIs are available; see the relevant API sections for details.
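Per the warning above, a sketch of a config opting into the experimental `firewood` scheme, with `snapshot-cache` disabled and `populate-missing-tries` left unset (its default, `null`):

```json
{
  "state-scheme": "firewood",
  "snapshot-cache": 0
}
```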
| Option | Type | Description | Default | |--------|------|-------------|---------| | `inspect-database` | bool | Inspect database on startup | `false` | | `state-scheme` | string | EXPERIMENTAL: specifies the database scheme to store state data; can be one of `hash`, `firewood`, or `path` | `hash` | ## Transaction Indexing | Option | Type | Description | Default | |--------|------|-------------|---------| | `transaction-history` | uint64 | Maximum number of blocks from head whose transaction indices are reserved (0 = no limit) | `0` | | `skip-tx-indexing` | bool | Skip indexing transactions entirely | `false` | ## Warp Configuration | Option | Type | Description | Default | |--------|------|-------------|---------| | `warp-off-chain-messages` | array | Off-chain messages the node should be willing to sign | empty array | | `prune-warp-db-enabled` | bool | Clear warp database on startup | `false` | ## Miscellaneous > **Note:** If `skip-upgrade-check` is set to `true`, the chain will skip verifying that all expected network upgrades have taken place before the last accepted block on startup. This allows node operators to recover if their node has accepted blocks after a network upgrade with a version of the code prior to the upgrade. | Option | Type | Description | Default | |--------|------|-------------|---------| | `acceptor-queue-limit` | integer | Specifies the maximum number of blocks to queue during block acceptance before blocking on Accept. 
| `64` | | `gas-target` | integer | The target gas per second that this node will attempt to use when creating blocks | Parent block's target | | `min-delay-target` | integer | The minimum delay between blocks (in milliseconds) that this node will attempt to use when creating blocks | Parent block's target | | `skip-upgrade-check` | bool | Skip checking that upgrades occur before last accepted block ⚠️ **Warning**: Only use when you understand the implications | `false` | # P-Chain Configs (/docs/nodes/chain-configs/primary-network/p-chain) --- title: "P-Chain Configs" description: "This page describes the configuration options available for the P-Chain." edit_url: https://github.com/ava-labs/avalanchego/edit/master/vms/platformvm/config/config.md --- This document provides details about the configuration options available for the PlatformVM. ## Standard Configurations In order to specify a configuration for the PlatformVM, you need to define a `Config` struct and its parameters. The default values for these parameters are: | Option | Type | Default | | ------------------------------------ | --------------- | ------------------ | | `network` | `Network` | `DefaultNetwork` | | `block-cache-size` | `int` | `64 * units.MiB` | | `tx-cache-size` | `int` | `128 * units.MiB` | | `transformed-subnet-tx-cache-size` | `int` | `4 * units.MiB` | | `reward-utxos-cache-size` | `int` | `2048` | | `chain-cache-size` | `int` | `2048` | | `chain-db-cache-size` | `int` | `2048` | | `block-id-cache-size` | `int` | `8192` | | `fx-owner-cache-size` | `int` | `4 * units.MiB` | | `subnet-to-l1-conversion-cache-size` | `int` | `4 * units.MiB` | | `l1-weights-cache-size` | `int` | `16 * units.KiB` | | `l1-inactive-validators-cache-size` | `int` | `256 * units.KiB` | | `l1-subnet-id-node-id-cache-size` | `int` | `16 * units.KiB` | | `checksums-enabled` | `bool` | `false` | | `mempool-prune-frequency` | `time.Duration` | `30 * time.Minute` | | `mempool-gas-capacity` | `gas.Gas` | `1_000_000` 
| Default values are overridden only if explicitly specified in the config. ## Network Configuration The Network configuration defines parameters that control the network's gossip and validator behavior. ### Parameters | Field | Type | Default | Description | |-------|------|---------|-------------| | `max-validator-set-staleness` | `time.Duration` | `1 minute` | Maximum age of a validator set used for peer sampling and rate limiting | | `target-gossip-size` | `int` | `20 * units.KiB` | Target number of bytes to send when pushing transactions or responding to transaction pull requests | | `push-gossip-percent-stake` | `float64` | `0.9` | Percentage of total stake to target in the initial gossip round. Higher stake nodes are prioritized to minimize network messages | | `push-gossip-num-validators` | `int` | `100` | Number of validators to push transactions to in the initial gossip round | | `push-gossip-num-peers` | `int` | `0` | Number of peers to push transactions to in the initial gossip round | | `push-regossip-num-validators` | `int` | `10` | Number of validators for subsequent gossip rounds after the initial push | | `push-regossip-num-peers` | `int` | `0` | Number of peers for subsequent gossip rounds after the initial push | | `push-gossip-discarded-cache-size` | `int` | `16384` | Size of the cache storing recently dropped transaction IDs from mempool to avoid re-pushing | | `push-gossip-max-regossip-frequency` | `time.Duration` | `30 * time.Second` | Maximum frequency limit for re-gossiping a transaction | | `push-gossip-frequency` | `time.Duration` | `500 * time.Millisecond` | Frequency of push gossip rounds | | `pull-gossip-poll-size` | `int` | `1` | Number of validators to sample during pull gossip rounds | | `pull-gossip-frequency` | `time.Duration` | `1500 * time.Millisecond` | Frequency of pull gossip rounds | | `pull-gossip-throttling-period` | `time.Duration` | `10 * time.Second` | Time window for throttling pull requests | | 
`pull-gossip-throttling-limit` | `int` | `2` | Maximum number of pull queries allowed per validator within the throttling window | | `expected-bloom-filter-elements` | `int` | `8 * 1024` | Expected number of elements when creating a new bloom filter. Larger values increase filter size | | `expected-bloom-filter-false-positive-probability` | `float64` | `0.01` | Target probability of false positives after inserting the expected number of elements. Lower values increase filter size | | `max-bloom-filter-false-positive-probability` | `float64` | `0.05` | Threshold for bloom filter regeneration. Filter is refreshed when false positive probability exceeds this value | ### Details The configuration is divided into several key areas: - **Validator Set Management**: Controls how fresh the validator set must be for network operations. The staleness setting ensures the network operates with reasonably current validator information. - **Gossip Size Controls**: Manages the size of gossip messages to maintain efficient network usage while ensuring reliable transaction propagation. - **Push Gossip Configuration**: Defines how transactions are initially propagated through the network, with emphasis on reaching high-stake validators first to optimize network coverage. - **Pull Gossip Configuration**: Controls how nodes request transactions they may have missed, including throttling mechanisms to prevent network overload. - **Bloom Filter Settings**: Configures the trade-off between memory usage and false positive rates in transaction filtering, with automatic filter regeneration when accuracy degrades. # X-Chain Configs (/docs/nodes/chain-configs/primary-network/x-chain) --- title: "X-Chain Configs" description: "This page describes the configuration options available for the X-Chain." 
edit_url: https://github.com/ava-labs/avalanchego/edit/master/vms/avm/config.md --- In order to specify a config for the X-Chain, a JSON config file should be placed at `{chain-config-dir}/X/config.json`. For example, if `chain-config-dir` has its default value of `$HOME/.avalanchego/configs/chains`, then `config.json` can be placed at `$HOME/.avalanchego/configs/chains/X/config.json`. This allows you to specify a config to be passed into the X-Chain. The default values for this config are: ```json { "checksums-enabled": false } ``` Default values are overridden only if explicitly specified in the config. The parameters are as follows: ### `checksums-enabled` _Boolean_ Enables checksums if set to `true`. # Amazon Web Services (/docs/nodes/run-a-node/on-third-party-services/amazon-web-services) --- title: Amazon Web Services description: Learn how to run a node on Amazon Web Services. --- Introduction[​](#introduction "Direct link to heading") ------------------------------------------------------- This tutorial will guide you through setting up an Avalanche node on [Amazon Web Services (AWS)](https://aws.amazon.com/). Cloud services like AWS are a good way to ensure that your node is highly secure, available, and accessible. To get started, you'll need: - An AWS account - A terminal with which to SSH into your AWS machine - A place to securely store and back up files This tutorial assumes your local machine has a Unix-style terminal. If you're on Windows, you'll have to adapt some of the commands used here. Log Into AWS[​](#log-into-aws "Direct link to heading") ------------------------------------------------------- Signing up for AWS is outside the scope of this article, but Amazon has instructions [here](https://aws.amazon.com/premiumsupport/knowledge-center/create-and-activate-aws-account). It is _highly_ recommended that you set up Multi-Factor Authentication on your AWS root user account to protect it.
Amazon has documentation for this [here](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_enable_virtual.html#enable-virt-mfa-for-root). Once your account is set up, you should create a new EC2 instance. An EC2 instance is a virtual machine in AWS's cloud. Go to the [AWS Management Console](https://console.aws.amazon.com/) and enter the EC2 dashboard. ![AWS Management Console.png](/images/amazon1.png) To log into the EC2 instance, you will need a key on your local machine that grants access to the instance. First, create that key so that it can be assigned to the EC2 instance later on. On the bar on the left side, under **Network & Security**, select **Key Pairs.** ![Select "Key Pairs" under the "Network & Security" drop-down.](/images/amazon2.png) Select **Create key pair** to launch the key pair creation wizard. ![Select "Create key pair."](/images/amazon3.png) Name your key `avalanche`. If your local machine runs macOS or Linux, select the `pem` file format. If it's Windows, use the `ppk` file format. Optionally, you can add tags for the key pair to assist with tracking. ![Create a key pair that will later be assigned to your EC2 instance.](/images/amazon4.png) Click `Create key pair`. You should see a success message, and the key file should be downloaded to your local machine. Without this file, you will not be able to access your EC2 instance. **Make a copy of this file and put it on a separate storage medium such as an external hard drive. Keep this file secret; do not share it with others.** ![Success message after creating a key pair.](/images/amazon5.png) Create a Security Group[​](#create-a-security-group "Direct link to heading") ----------------------------------------------------------------------------- An AWS Security Group defines what internet traffic can enter and leave your EC2 instance. Think of it like a firewall. Create a new Security Group by selecting **Security Groups** under the **Network & Security** drop-down.
![Select "Security Groups" underneath "Network & Security."](/images/amazon6.png) This opens the Security Groups panel. Click **Create security group** in the top right of the Security Groups panel. ![Select "Create security group."](/images/amazon7.png) You'll need to specify what inbound traffic is allowed. Allow SSH traffic from your IP address so that you can log into your EC2 instance (each time your ISP changes your IP address, you will need to modify this rule). Allow TCP traffic on port 9651 so your node can communicate with other nodes on the network. Allow TCP traffic on port 9650 from your IP so you can make API calls to your node. **It's important that you only allow traffic on the SSH and API port from your IP.** If you allow incoming traffic from anywhere, this could be used to brute force entry to your node (SSH port) or used as a denial of service attack vector (API port). Finally, allow all outbound traffic. ![Your inbound and outbound rules should look like this.](/images/amazon8.png) Add a tag to the new security group with key `Name` and value`Avalanche Security Group`. This will enable us to know what this security group is when we see it in the list of security groups. ![Tag the security group so you can identify it later.](/images/amazon9.png) Click `Create security group`. You should see the new security group in the list of security groups. Launch an EC2 Instance[​](#launch-an-ec2-instance "Direct link to heading") --------------------------------------------------------------------------- Now you're ready to launch an EC2 instance. Go to the EC2 Dashboard and select **Launch instance**. ![Select "Launch Instance."](/images/amazon10.png) Select **Ubuntu 20.04 LTS (HVM), SSD Volume Type** for the operating system. ![Select Ubuntu 20.04 LTS.](/images/amazon11.png) Next, choose your instance type. This defines the hardware specifications of the cloud instance. In this tutorial we set up a **c5.2xlarge**. 
This should be more than powerful enough since Avalanche is a lightweight consensus protocol. To create a c5.2xlarge instance, select the **Compute-optimized** option from the filter drop-down menu. ![Filter by compute optimized.](/images/amazon12.png) Select the checkbox next to the c5.2xlarge instance in the table. ![Select c5.2xlarge.](/images/amazon13.png) Click the **Next: Configure Instance Details** button in the bottom right-hand corner. ![Configure instance details](/images/amazon14.png) The instance details can stay as their defaults. When setting up a node as a validator, it is crucial to select the appropriate AWS instance type to ensure the node can efficiently process transactions and manage the network load. The recommended instance types are as follows: - For a minimal stake, start with a compute-optimized instance such as c6, c6i, c6a, c7, or similar. - Use a 2xlarge instance size for the minimal stake configuration. - As the staked amount increases, choose larger instance sizes to accommodate the additional workload. For every order of magnitude increase in stake, move up one instance size. For example, for a 20k AVAX stake, a 4xlarge instance is suitable. ### Optional: Using Reserved Instances[​](#optional-using-reserved-instances "Direct link to heading") By default, you will be charged hourly for running your EC2 instance. For long-term usage, that is not optimal. You could save money by using a **Reserved Instance**: you pay upfront for an entire year of EC2 usage and receive a lower per-hour rate in exchange for locking in. If you intend to run a node for a long time and don't want to risk service interruptions, this is a good option to save money. Do your own research before selecting this option. ### Add Storage, Tags, Security Group[​](#add-storage-tags-security-group "Direct link to heading") Click the **Next: Add Storage** button in the bottom right corner of the screen.
You need to add space to your instance's disk. You should start with at least 700GB of disk space. Although upgrades to reduce disk usage are always in development, on average the database will continually grow, so you need to constantly monitor disk usage on the node and increase disk space if needed. Note that the image below shows 100GB as disk size, which was appropriate at the time the screenshot was taken. You should check the current [recommended disk space size](https://github.com/ava-labs/avalanchego#installation) before entering the actual value here. ![Select disk size.](/images/amazon15.png) Click **Next: Add Tags** in the bottom right corner of the screen to add tags to the instance. Tags enable us to associate metadata with our instance. Add a tag with key `Name` and value `My Avalanche Node`. This will make it clear what this instance is on your list of EC2 instances. ![Add a tag with key "Name" and value "My Avalanche Node."](/images/amazon16.png) Now assign the security group created earlier to the instance. Choose **Select an existing security group** and choose the security group created earlier. ![Choose the security group created earlier.](/images/amazon17.png) Finally, click **Review and Launch** in the bottom right. A review page will show the details of the instance you're about to launch. Review those, and if all looks good, click the blue **Launch** button in the bottom right corner of the screen. You'll be asked to select a key pair for this instance. Select **Choose an existing key pair** and then select the `avalanche` key pair you made earlier in the tutorial. Check the box acknowledging that you have access to the `.pem` or `.ppk` file created earlier (make sure you've backed it up!) and then click **Launch Instances**. ![Use the key pair created earlier.](/images/amazon18.png) You should see a new pop up that confirms the instance is launching! 
![Your instance is launching!](/images/amazon19.png) ### Assign an Elastic IP[​](#assign-an-elastic-ip "Direct link to heading") By default, your instance will not have a fixed IP. Let's give it a fixed IP through AWS's Elastic IP service. Go back to the EC2 dashboard. Under **Network & Security,** select **Elastic IPs**. ![Select "Elastic IPs" under "Network & Security."](/images/amazon20.png) Select **Allocate Elastic IP address**. ![Select "Allocate Elastic IP address."](/images/amazon21.png) Select the region your instance is running in, and choose to use Amazon's pool of IPv4 addresses. Click **Allocate**. ![Settings for the Elastic IP.](/images/amazon22.png) Select the Elastic IP you just created from the Elastic IP manager. From the **Actions** drop-down, choose **Associate Elastic IP address**. ![Under "Actions" select "Associate Elastic IP address."](/images/amazon23.png) Select the instance you just created. This will associate the new Elastic IP with the instance and give it a public IP address that won't change. ![Assign the Elastic IP to your EC2 instance.](/images/amazon24.png) Set Up AvalancheGo[​](#set-up-avalanchego "Direct link to heading") ------------------------------------------------------------------- Go back to the EC2 Dashboard and select `Running Instances`. ![Go to your running instances.](/images/amazon25.png) Select the newly created EC2 instance. This opens a details panel with information about the instance. ![Details about your new instance.](/images/amazon26.png) Copy the `IPv4 Public IP` field to use later. From now on we call this value `PUBLICIP`. **Remember: the terminal commands below assume you're running Linux. Commands may differ for MacOS or other operating systems. When copy-pasting a command from a code block, copy and paste the entirety of the text in the block.** Log into the AWS instance from your local machine. 
Open a terminal (try shortcut `CTRL + ALT + T`) and navigate to the directory containing the `.pem` file you downloaded earlier. Move the `.pem` file to `$HOME/.ssh` (where `.pem` files generally live). Mark it as read-only, then add it to the SSH agent so that we can use it to SSH into your EC2 instance: ```bash chmod 400 ~/.ssh/avalanche.pem; ssh-add ~/.ssh/avalanche.pem ``` SSH into the instance with `ssh ubuntu@PUBLICIP`. (Remember to replace `PUBLICIP` with the public IP field from earlier.) If the permissions are **not** set correctly, you will see the following error. ![Make sure you set the permissions correctly.](/images/amazon27.png) You are now logged into the EC2 instance. ![You're on the EC2 instance.](/images/amazon28.png) If you have not already done so, update the instance to make sure it has the latest operating system and security updates: ```bash sudo apt update; sudo apt upgrade -y; sudo reboot ``` This also reboots the instance. Wait 5 minutes, then log in again from your local machine with `ssh ubuntu@PUBLICIP`. You're logged into the EC2 instance again. Now we'll need to set up our Avalanche node. To do this, follow the [Set Up Avalanche Node With Installer](/docs/nodes/run-a-node/using-install-script/installing-avalanche-go) tutorial which automates the installation process. You will need the `PUBLICIP` we set up earlier. Your AvalancheGo node should now be running and in the process of bootstrapping, which can take a few hours. To check if it's done, you can issue an API call using `curl`. If you're making the request from the EC2 instance, the request is: ```bash curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"info.isBootstrapped", "params": { "chain":"X" } }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info ``` Once the node is finished bootstrapping, the response will be: ```json { "jsonrpc": "2.0", "result": { "isBootstrapped": true }, "id": 1 } ``` You can continue on, even if AvalancheGo isn't done bootstrapping.
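If you'd rather script this check than eyeball the JSON, the boolean can be pulled out of the response with standard shell tools. A minimal sketch, using a hard-coded copy of the sample response above in place of a live `curl` call:

```bash
# Sample info.isBootstrapped response; a real script would populate this
# variable from the curl command shown above instead of hard-coding it.
response='{"jsonrpc":"2.0","result":{"isBootstrapped":true},"id":1}'

# Match the boolean with grep rather than pulling in a JSON parser.
if echo "$response" | grep -q '"isBootstrapped" *: *true'; then
  status="bootstrapped"
else
  status="still syncing"
fi
echo "$status"
```

Swapping the hard-coded `response` for the `curl` output, and wrapping the check in a loop with `sleep`, gives a simple wait-until-ready script.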
In order to make your node a validator, you'll need its node ID. To get it, run: ```bash curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"info.getNodeID" }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info ``` The response contains the node ID. ```json {"jsonrpc":"2.0","result":{"nodeID":"NodeID-DznHmm3o7RkmpLkWMn9NqafH66mqunXbM"},"id":1} ``` In the above example the node ID is `NodeID-DznHmm3o7RkmpLkWMn9NqafH66mqunXbM`. Copy your node ID for later. Your node ID is not a secret, so you can just paste it into a text editor. AvalancheGo has other APIs, such as the [Health API](/docs/rpcs/other/health-rpc), that may be used to interact with the node. Some APIs are disabled by default. To enable such APIs, modify the ExecStart section of `/etc/systemd/system/avalanchego.service` (created during the installation process) to include flags that enable these endpoints. Don't manually enable any APIs unless you have a reason to. ![Some APIs are disabled by default.](/images/amazon29.png) Back up the node's staking key and certificate in case the EC2 instance is corrupted or otherwise unavailable. The node's ID is derived from its staking key and certificate. If you lose your staking key or certificate then your node will get a new node ID, which could cause you to become ineligible for a staking reward if your node is a validator. **It is very strongly advised that you copy your node's staking key and certificate**. The first time you run a node, it will generate a new staking key/certificate pair and store them in the directory `/home/ubuntu/.avalanchego/staking`. Exit out of the SSH instance by running `exit`. Now you're no longer connected to the EC2 instance; you're back on your local machine. To copy the staking key and certificate to your machine, run the following command. As always, replace `PUBLICIP`.
```bash scp -r ubuntu@PUBLICIP:/home/ubuntu/.avalanchego/staking ~/aws_avalanche_backup ``` Now your staking key and certificate are in the directory `~/aws_avalanche_backup`. **The contents of this directory are secret.** You should hold this directory on storage not connected to the internet (like an external hard drive). ### Upgrading Your Node[​](#upgrading-your-node "Direct link to heading") AvalancheGo is an ongoing project and there are regular version upgrades. Most upgrades are recommended but not required. Advance notice will be given for upgrades that are not backwards compatible. To update your node to the latest version, SSH into your AWS instance as before and run the installer script again. ```bash ./avalanchego-installer.sh ``` Your machine is now running the newest AvalancheGo version. To see the status of the AvalancheGo service, run `sudo systemctl status avalanchego`. Increase Volume Size[​](#increase-volume-size "Direct link to heading") ----------------------------------------------------------------------- If you need to increase the volume size, follow these instructions from AWS: - [Request modifications to your EBS volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/requesting-ebs-volume-modifications.html) - [Extend a Linux file system after resizing a volume](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/recognize-expanded-volume-linux.html) Wrap Up[​](#wrap-up "Direct link to heading") --------------------------------------------- That's it! You now have an AvalancheGo node running on an AWS EC2 instance. We recommend setting up [node monitoring](/docs/nodes/maintain/monitoring) for your AvalancheGo node. We also recommend setting up AWS billing alerts so you're not surprised when the bill arrives. If you have feedback on this tutorial, or anything else, send us a message on [Discord](https://chat.avalabs.org/).
# AWS Marketplace (/docs/nodes/run-a-node/on-third-party-services/aws-marketplace) --- title: AWS Marketplace description: Learn how to run a node on AWS Marketplace. --- ## How to Launch an Avalanche Validator using AWS With the intention of enabling developers and entrepreneurs to on-ramp into the Avalanche ecosystem with as little friction as possible, Ava Labs launched an offering to deploy an Avalanche Validator node via the AWS Marketplace. This tutorial will show the main steps required to get this node running and validating on the Avalanche Fuji testnet. Product Overview[​](#product-overview "Direct link to heading") --------------------------------------------------------------- The Avalanche Validator node is available via [the AWS Marketplace](https://aws.amazon.com/marketplace/pp/prodview-nd6wgi2bhhslg). There you'll find a high-level product overview. This includes a product description, pricing information, usage instructions, support information, and customer reviews. After reviewing this information, click the "Continue to Subscribe" button. Subscribe to This Software[​](#subscribe-to-this-software "Direct link to heading") ----------------------------------------------------------------------------------- Once on the "Subscribe to this Software" page you will see a button which enables you to subscribe to this AWS Marketplace offering. In addition, you'll see Terms of service, including the seller's End User License Agreement and the [AWS Privacy Notice](https://aws.amazon.com/privacy/). After reviewing these, click on the "Continue to Configuration" button. Configure This Software[​](#configure-this-software "Direct link to heading") ----------------------------------------------------------------------------- This page lets you choose a fulfillment option and software version to launch this software. No changes are needed, as the default settings are sufficient.
Leave the `Fulfillment Option` as `64-bit (x86) Amazon Machine Image (AMI)`. The software version is the latest build of [the AvalancheGo full node](https://github.com/ava-labs/avalanchego/releases), `v1.9.5 (Dec 22, 2022)`, AKA `Banff.5`, at the time of writing; the listing always shows the latest version. Also, the Region to deploy in can be left as `US East (N. Virginia)`. On the right you'll see the software and infrastructure pricing. Lastly, click the "Continue to Launch" button. Launch This Software[​](#launch-this-software "Direct link to heading") ----------------------------------------------------------------------- Here you can review the launch configuration details and follow the instructions to launch the Avalanche Validator Node. The changes are very minor. Leave the action as "Launch from Website." The EC2 Instance Type should remain `c5.2xlarge`. The primary change you'll need to make is to choose a keypair which will enable you to `ssh` into the newly created EC2 instance to run `curl` commands on the Validator node. You can search for existing keypairs, or you can create a new keypair and download it to your local machine. If you create a new keypair you'll need to move the keypair to the appropriate location, change the permissions, and add it to the OpenSSH authentication agent. For example, on macOS it would look similar to the following: ```bash # In this example we have a keypair called avalanche.pem which was downloaded from AWS to ~/Downloads/avalanche.pem # Confirm the file exists with the following command test -f ~/Downloads/avalanche.pem && echo "Avalanche.pem exists." # Running the above command will output the following: # Avalanche.pem exists.
# Move the avalanche.pem keypair from the ~/Downloads directory to the hidden ~/.ssh directory mv ~/Downloads/avalanche.pem ~/.ssh # Restrict the key file's permissions; SSH refuses private keys that are readable by others chmod 600 ~/.ssh/avalanche.pem # Finally, add the private key identity to the OpenSSH authentication agent ssh-add ~/.ssh/avalanche.pem ``` Once these steps are complete you are ready to launch the Validator node on EC2. To make that happen, click the "Launch" button. ![launch successful](/images/aws1.png) You now have an Avalanche node deployed on an AWS EC2 instance! Copy the `AMI ID` and click on the `EC2 Console` link for the next step. EC2 Console[​](#ec2-console "Direct link to heading") ----------------------------------------------------- Now take the `AMI ID` from the previous step and input it into the search bar on the EC2 Console. This will bring you to the dashboard where you can find the EC2 instance's public IP address. ![AMI instance](/images/aws2.png) Copy that public IP address and open a Terminal or command line prompt. Once you have the new Terminal open, `ssh` into the EC2 instance using that public IP address and the keypair you configured above. Node Configuration[​](#node-configuration "Direct link to heading") ------------------------------------------------------------------- ### Switch to Fuji Testnet[​](#switch-to-fuji-testnet "Direct link to heading") By default the Avalanche Node available through the AWS Marketplace syncs the Mainnet. If this is what you are looking for, you can skip this step. For this tutorial you want to sync and validate the Fuji Testnet. Now that you're `ssh`ed into the EC2 instance you can make the required changes to sync Fuji instead of Mainnet. First, confirm that the node is syncing the Mainnet by running the `info.getNetworkID` command.
#### `info.getNetworkID` Request[​](#infogetnetworkid-request "Direct link to heading") ```bash curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"info.getNetworkID", "params": { } }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info ``` #### `info.getNetworkID` Response[​](#infogetnetworkid-response "Direct link to heading") The returned `networkID` will be `1`, which is the network ID for Mainnet. ```json { "jsonrpc": "2.0", "result": { "networkID": "1" }, "id": 1 } ``` Next, you'll edit `/etc/avalanchego/conf.json` and change the `"network-id"` property from `"mainnet"` to `"fuji"`. To see the current contents of `/etc/avalanchego/conf.json` you can `cat` the file. ```bash cat /etc/avalanchego/conf.json { "api-keystore-enabled": false, "http-host": "0.0.0.0", "log-dir": "/var/log/avalanchego", "db-dir": "/data/avalanchego", "api-admin-enabled": false, "public-ip-resolution-service": "opendns", "network-id": "mainnet" } ``` Edit `/etc/avalanchego/conf.json` with your favorite text editor and change the value of the `"network-id"` property from `"mainnet"` to `"fuji"`. Once that's complete, save the file and restart the Avalanche node via `sudo systemctl restart avalanchego`. You can then call the `info.getNetworkID` endpoint to confirm the change was successful. #### `info.getNetworkID` Request[​](#infogetnetworkid-request-1 "Direct link to heading") ```bash curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"info.getNetworkID", "params": { } }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info ``` #### `info.getNetworkID` Response[​](#infogetnetworkid-response-1 "Direct link to heading") The returned `networkID` will be `5`, which is the network ID for Fuji. ```json { "jsonrpc": "2.0", "result": { "networkID": "5" }, "id": 1 } ``` Next, run the `info.isBootstrapped` command to confirm whether the Avalanche Validator node has finished bootstrapping.
### `info.isBootstrapped` Request[​](#infoisbootstrapped-request "Direct link to heading") ```bash curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"info.isBootstrapped", "params": { "chain":"P" } }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info ``` Once the node is finished bootstrapping, the response will be: ### `info.isBootstrapped` Response[​](#infoisbootstrapped-response "Direct link to heading") ```json { "jsonrpc": "2.0", "result": { "isBootstrapped": true }, "id": 1 } ``` **Note** that initially the response is `false` because the network is still syncing. When you're adding your node as a Validator on the Avalanche Mainnet you'll want to wait for this response to return `true` so that you don't suffer from any downtime while validating. For this tutorial you're not going to wait for it to finish syncing as it's not strictly necessary. ### `info.getNodeID` Request[​](#infogetnodeid-request "Direct link to heading") Next, you want to get the NodeID which will be used to add the node as a Validator. To get the node's ID you call the `info.getNodeID` jsonrpc endpoint. ```bash curl --location --request POST 'http://127.0.0.1:9650/ext/info' \ --header 'Content-Type: application/json' \ --data-raw '{ "jsonrpc":"2.0", "id" :1, "method" :"info.getNodeID", "params" :{ } }' ``` ### `info.getNodeID` Response[​](#infogetnodeid-response "Direct link to heading") Take a note of the `nodeID` value which is returned as you'll need to use it in the next step when adding a validator via the Avalanche Web Wallet. 
In this case the `nodeID` is `NodeID-Q8Gfaaio9FAqCmZVEXDq9bFvNPvDi7rt5` ```json { "jsonrpc": "2.0", "result": { "nodeID": "NodeID-Q8Gfaaio9FAqCmZVEXDq9bFvNPvDi7rt5", "nodePOP": { "publicKey": "0x85675db18b326a9585bfd43892b25b71bf01b18587dc5fac136dc5343a9e8892cd6c49b0615ce928d53ff5dc7fd8945d", "proofOfPossession": "0x98a56f092830161243c1f1a613ad68a7f1fb25d2462ecf85065f22eaebb4e93a60e9e29649a32252392365d8f628b2571174f520331ee0063a94473f8db6888fc3a722be330d5c51e67d0d1075549cb55376e1f21d1b48f859ef807b978f65d9" } }, "id": 1 } ``` Add Node as Validator on Fuji via Core web[​](#add-node-as-validator-on-fuji-via-core-web "Direct link to heading") ------------------------------------------------------------------------------------------------------------------- For adding the new node as a Validator on the Fuji testnet's Primary Network you can use the [Core web](https://core.app/) [connected](https://support.avax.network/en/articles/6639869-core-web-how-do-i-connect-to-core-web) to [Core extension](https://core.app). If you don't have a Core extension already, check out this [guide](https://support.avax.network/en/articles/6100129-core-extension-how-do-i-create-a-new-wallet). If you'd like to import an existing wallet to Core extension, follow [these steps](https://support.avax.network/en/articles/6078933-core-extension-how-do-i-access-my-existing-account). ![Core web](/images/aws3.png) Core web is a free, all-in-one command center that gives users a more intuitive and comprehensive way to view assets, and use dApps across the Avalanche network, its various Avalanche L1s, and Ethereum. Core web is optimized for use with the Core browser extension and Core mobile (available on both iOS & Android). Together, they are key components of the Core product suite that brings dApps, NFTs, Avalanche Bridge, Avalanche L1s, L2s, and more, directly to users. 
### Switching to Testnet Mode[​](#switching-to-testnet-mode "Direct link to heading") By default, Core web and Core extension are connected to Mainnet. For the sake of this demo, you want to connect to the Fuji Testnet. #### On Core Extension[​](#on-core-extension "Direct link to heading") Click the Tools icon in the sidebar, select Settings, and then toggle Testnet Mode on. ![](/images/aws4.gif) You can follow the same steps for switching back to Mainnet. #### On Core web[​](#on-core-web "Direct link to heading") Click on the Settings button in the top-right corner of the page, then toggle Testnet Mode on. ![](/images/aws5.gif) You can follow the same steps for switching back to Mainnet. ### Adding the Validator[​](#adding-the-validator "Direct link to heading") - Node ID: A unique ID derived from each individual node's staker certificate. Use the `NodeID` which was returned in the `info.getNodeID` response. In this example it's `NodeID-Q8Gfaaio9FAqCmZVEXDq9bFvNPvDi7rt5` - Staking End Date: Your AVAX tokens will be locked until this date. - Stake Amount: The amount of AVAX to lock for staking. On Mainnet, the minimum required amount is 2,000 AVAX. On Testnet the minimum required amount is 1 AVAX. - Delegation Fee: You will claim this % of the rewards from the delegators on your node. - Reward Address: A reward address is the destination address of the accumulated staking rewards. To add a node as a Validator, first select the Stake tab on Core web, in the left-hand nav menu. Next click the Validate button, and select Get Started. ![](/images/aws6.gif) This page will open up. ![](/images/aws7.png) Choose the desired Staking Amount, then click Next. ![](/images/aws8.png) Enter your Node ID, then click Next. ![](/images/aws9.png) Here, you'll need to choose the staking duration. There are predefined values, like 1 day, 1 month, and so on. You can also choose a custom period of time. For this example, 22 days were chosen.
![](/images/aws10.png) Choose the address that the network will send rewards to. Make sure it's the correct address, because once the transaction is submitted it cannot be changed or undone. You can choose the wallet's P-Chain address, or a custom P-Chain address. After entering the address, click Next. ![](/images/aws11.png) Other individuals can stake to your validator and receive rewards too, known as "delegating." You will claim this percent of the rewards from the delegators on your node. Click Next. ![](/images/aws12.png) After entering all these details, a summary of your validation will show up. If everything is correct, you can proceed and click on Submit Validation. A new page will open up, prompting you to accept the transaction. Here, please approve the transaction. ![](/images/aws13.png) After the transaction is approved, you will see a message saying that your validation transaction was submitted. ![](/images/aws14.png) If you click on View on explorer, a new browser tab will open with the details of the `AddValidatorTx`. It will show details such as the total value of AVAX transferred, any AVAX which was burned, the blockchainID, the blockID, the NodeID of the validator, and the total time of the entire validation period. ![](/images/aws15.png) Confirm That the Node is a Pending Validator on Fuji[​](#confirm-that-the-node-is-a-pending-validator-on-fuji "Direct link to heading") --------------------------------------------------------------------------------------------------------------------------------------- As a last step you can call the `platform.getPendingValidators` endpoint to confirm that the Avalanche node which was recently spun up on AWS is now in the pending validators queue, where it will stay for 5 minutes.
### `platform.getPendingValidators` Request[​](#platformgetpendingvalidators-request "Direct link to heading") ```bash curl --location --request POST 'https://api.avax-test.network/ext/bc/P' \ --header 'Content-Type: application/json' \ --data-raw '{ "jsonrpc": "2.0", "method": "platform.getPendingValidators", "params": { "subnetID": "11111111111111111111111111111111LpoYY", "nodeIDs": [] }, "id": 1 }' ``` ### `platform.getPendingValidators` Response[​](#platformgetpendingvalidators-response "Direct link to heading") ```json { "jsonrpc": "2.0", "result": { "validators": [ { "txID": "4d7ZboCrND4FjnyNaF3qyosuGQsNeJ2R4KPJhHJ55VCU1Myjd", "startTime": "1673411918", "endTime": "1675313170", "stakeAmount": "1000000000", "nodeID": "NodeID-Q8Gfaaio9FAqCmZVEXDq9bFvNPvDi7rt5", "delegationFee": "2.0000", "connected": false, "delegators": null } ], "delegators": [] }, "id": 1 } ``` You can also pass in the `NodeID` as a string to the `nodeIDs` array in the request body. ```bash curl --location --request POST 'https://api.avax-test.network/ext/bc/P' \ --header 'Content-Type: application/json' \ --data-raw '{ "jsonrpc": "2.0", "method": "platform.getPendingValidators", "params": { "subnetID": "11111111111111111111111111111111LpoYY", "nodeIDs": ["NodeID-Q8Gfaaio9FAqCmZVEXDq9bFvNPvDi7rt5"] }, "id": 1 }' ``` This will filter the response by the `nodeIDs` array which will save you time by no longer requiring you to search through the entire response body for the NodeIDs. 
```json { "jsonrpc": "2.0", "result": { "validators": [ { "txID": "4d7ZboCrND4FjnyNaF3qyosuGQsNeJ2R4KPJhHJ55VCU1Myjd", "startTime": "1673411918", "endTime": "1675313170", "stakeAmount": "1000000000", "nodeID": "NodeID-Q8Gfaaio9FAqCmZVEXDq9bFvNPvDi7rt5", "delegationFee": "2.0000", "connected": false, "delegators": null } ], "delegators": [] }, "id": 1 } ``` After 5 minutes the node will officially start validating the Avalanche Fuji testnet and you will no longer see it in the response body for the `platform.getPendingValidators` endpoint. Now you will access it via the `platform.getCurrentValidators` endpoint. ### `platform.getCurrentValidators` Request[​](#platformgetcurrentvalidators-request "Direct link to heading") ```bash curl --location --request POST 'https://api.avax-test.network/ext/bc/P' \ --header 'Content-Type: application/json' \ --data-raw '{ "jsonrpc": "2.0", "method": "platform.getCurrentValidators", "params": { "subnetID": "11111111111111111111111111111111LpoYY", "nodeIDs": ["NodeID-Q8Gfaaio9FAqCmZVEXDq9bFvNPvDi7rt5"] }, "id": 1 }' ``` ### `platform.getCurrentValidators` Response[​](#platformgetcurrentvalidators-response "Direct link to heading") ```json { "jsonrpc": "2.0", "result": { "validators": [ { "txID": "2hy57Z7KiZ8L3w2KonJJE1fs5j4JDzVHLjEALAHaXPr6VMeDhk", "startTime": "1673411918", "endTime": "1675313170", "stakeAmount": "1000000000", "nodeID": "NodeID-Q8Gfaaio9FAqCmZVEXDq9bFvNPvDi7rt5", "rewardOwner": { "locktime": "0", "threshold": "1", "addresses": [ "P-fuji1tgj2c3k56enytw5d78rt0tsq3lzg8wnftffwk7" ] }, "validationRewardOwner": { "locktime": "0", "threshold": "1", "addresses": [ "P-fuji1tgj2c3k56enytw5d78rt0tsq3lzg8wnftffwk7" ] }, "delegationRewardOwner": { "locktime": "0", "threshold": "1", "addresses": [ "P-fuji1tgj2c3k56enytw5d78rt0tsq3lzg8wnftffwk7" ] }, "potentialReward": "5400963", "delegationFee": "2.0000", "uptime": "0.0000", "connected": false, "delegators": null } ] }, "id": 1 } ``` Mainnet[​](#mainnet "Direct link to heading") 
--------------------------------------------- All of these steps can be applied to Mainnet. However, the minimum stake required to become a validator on Mainnet is 2,000 AVAX. For more information, please read [this doc](/docs/primary-network/validate/how-to-stake#validators). Maintenance[​](#maintenance "Direct link to heading") ----------------------------------------------------- AWS one click is meant to be used in automated environments, not as an end-user solution. You can still manage it manually, but it is not as easy as an Ubuntu instance or using the script: - AvalancheGo binary is at `/usr/local/bin/avalanchego` - Main node config is at `/etc/avalanchego/conf.json` - Working directory is at `/home/avalanche/.avalanchego/` (and belongs to the `avalanchego` user) - Database is at `/data/avalanchego` - Logs are at `/var/log/avalanchego` For a simple upgrade you would need to place the new binary at `/usr/local/bin/`. If you run an Avalanche L1, you would also need to place the VM binary into `/home/avalanche/.avalanchego/plugins`. You can also look at using [this guide](https://docs.aws.amazon.com/systems-manager/latest/userguide/automation-tutorial-update-ami.html), but that won't address updating the Avalanche L1, if you have one. Summary[​](#summary "Direct link to heading") --------------------------------------------- Avalanche is the first decentralized smart contracts platform built for the scale of global finance, with near-instant transaction finality. Now, with an Avalanche Validator node available as a one-click install from the AWS Marketplace, developers and entrepreneurs can on-ramp into the Avalanche ecosystem in a matter of minutes. If you have any questions or want to follow up in any way please join our Discord server at [https://discord.gg/avax](https://discord.gg/avax/). For more developer resources please check out our [Developer Documentation](/docs).
# Google Cloud (/docs/nodes/run-a-node/on-third-party-services/google-cloud) --- title: Google Cloud description: Learn how to run an Avalanche node on Google Cloud. --- This document was written by a community member; some information may be outdated. Introduction[​](#introduction "Direct link to heading") ------------------------------------------------------- Google Cloud Platform (GCP) is a scalable, trusted and reliable hosting platform. Google operates a significant amount of its own global networking infrastructure. Its [fiber network](https://cloud.google.com/blog/products/networking/google-cloud-networking-in-depth-cloud-cdn) can provide highly stable and consistent global connectivity. In this article, we will leverage GCP to deploy a node on which Avalanche can be installed via [Terraform](https://www.terraform.io/). Leveraging `terraform` may seem like overkill, but it should set you apart as an operator and administrator, as it will give you greater flexibility and provide the basis on which you can easily build further automation. Conventions[​](#conventions "Direct link to heading") ----------------------------------------------------- - `Items` highlighted in this manner are GCP parlance and can be searched for further reference in the Google documentation for their cloud products. Important Notes[​](#important-notes "Direct link to heading") ------------------------------------------------------------- - The machine type used in this documentation is for reference only and the actual sizing you use will depend entirely upon the amount that is staked and delegated to the node. Architectural Description[​](#architectural-description "Direct link to heading") --------------------------------------------------------------------------------- This section aims to describe the architecture of the system that the steps in the [Setup Instructions](#-setup-instructions) section deploy when enacted.
This is done so that the executor can not only deploy the reference architecture, but also understand and potentially optimize it for their needs. ### Project[​](#project "Direct link to heading") We will create and utilize a single GCP `Project` for deployment of all resources. #### Service Enablement[​](#service-enablement "Direct link to heading") Within our GCP project we will need to enable the following Cloud Services: - `Compute Engine` - `IAP` ### Networking[​](#networking "Direct link to heading") #### Compute Network[​](#compute-network "Direct link to heading") We will deploy a single `Compute Network` object. This unit is where we will deploy all subsequent networking objects. It provides a logical boundary and securitization context should you wish to deploy other chain stacks or other infrastructure in GCP. #### Public IP[​](#public-ip "Direct link to heading") Avalanche requires that a validator communicate outbound on the same public IP address that it advertises for other peers to connect to it on. Within GCP this precludes the possibility of us using a Cloud NAT Router for the outbound communications and requires us to bind the public IP that we provision to the interface of the machine. We will provision a single `EXTERNAL` static IPv4 `Compute Address`. #### Avalanche L1s[​](#avalanche-l1s "Direct link to heading") For the purposes of this documentation we will deploy a single `Compute Subnetwork` in the US-EAST1 `Region` with a /24 address range giving us 254 IP addresses (not all usable but for the sake of generalized documentation). ### Compute[​](#compute "Direct link to heading") #### Disk[​](#disk "Direct link to heading") We will provision a single 400GB `PD-SSD` disk that will be attached to our VM. #### Instance[​](#instance "Direct link to heading") We will deploy a single `Compute Instance` of size `e2-standard-8`. 
Observations of operations using this machine specification suggest it is over-provisioned on memory and could be brought down to 16GB using a custom machine specification; but please review and adjust as needed (the beauty of compute virtualization!!). #### Zone[​](#zone "Direct link to heading") We will deploy our instance into the `US-EAST1-B` `Zone`. #### Firewall[​](#firewall "Direct link to heading") We will provision the following `Compute Firewall` rules: - IAP INGRESS for SSH (TCP 22) - this only allows GCP IAP sources inbound on SSH. - P2P INGRESS for AVAX Peers (TCP 9651) These are obviously just default ports and can be tailored to your needs as you desire. Setup Instructions[​](#-setup-instructions "Direct link to heading") -------------------------------------------------------------------- ### GCP Account[​](#gcp-account "Direct link to heading") 1. If you don't already have a GCP account, go create one [here](https://console.cloud.google.com/freetrial). You will get some free bucks to run a trial. The trial is feature complete, but your usage will start to deplete your free bucks, so turn off anything you don't need and/or add a credit card to your account if you intend to run things long term to avoid service shutdowns. ### Project[​](#project-1 "Direct link to heading") Login to the GCP `Cloud Console` and create a new `Project` in your organization. Let's use the name `my-avax-nodes` for the sake of this setup. 1. ![Select Project Dropdown](/images/cloud1.png) 2. ![Click New Project Button](/images/cloud2.png) 3. ![Create New Project](/images/cloud3.png) ### Terraform State[​](#terraform-state "Direct link to heading") Terraform uses a state file to compose a differential between the current infrastructure configuration and the proposed plan. You can store this state in a variety of different places, but using GCP storage is a reasonable approach given where we are deploying, so we will stick with that. 1. ![Select Cloud Storage Browser](/images/cloud4.png) 2.
![Create New Bucket](/images/cloud5.png) Authentication to GCP from Terraform has a few different options which are laid out [here](https://www.terraform.io/language/settings/backends/gcs). Please choose the option that aligns with your context and ensure those steps are completed before continuing. Depending upon how you intend to execute your Terraform operations you may or may not need to enable public access to the bucket. Obviously, not exposing the bucket for `public` access (even if authenticated) is preferable. If you intend to simply run Terraform commands from your local machine then you will need to open the access up. I recommend employing a full CI/CD pipeline using GCP Cloud Build, which, if utilized, will mean the bucket can be marked as `private`. A full walk-through of Cloud Build setup in this context can be found [here](https://cloud.google.com/architecture/managing-infrastructure-as-code). ### Clone GitHub Repository[​](#clone-github-repository "Direct link to heading") I have provided a rudimentary terraform construct to provision a node on which to run Avalanche which can be found [here](https://github.com/meaghanfitzgerald/deprecated-avalanche-docs/tree/master/static/scripts). The documentation below assumes you are using this repository, but if you have another Terraform skeleton similar steps will apply. ### Terraform Configuration[​](#terraform-configuration "Direct link to heading") 1. If running Terraform locally, please [install](https://learn.hashicorp.com/tutorials/terraform/install-cli) it. 2. In this repository, navigate to the `terraform` directory. 3. Under the `projects` directory, rename the `my-avax-project` directory to match the GCP project name that you created (not required, but nice to be consistent). 4. Under the folder you just renamed, locate the `terraform.tfvars` file. 5. Edit this file and populate it with the values which make sense for your context and save it. 6. Locate the `backend.tf` file in the same directory. 7.
Edit this file, ensuring to replace the `bucket` property with the GCS bucket name that you created earlier. If you do not wish to use cloud storage to persist Terraform state then simply switch the `backend` to some other desirable provider. ### Terraform Execution[​](#terraform-execution "Direct link to heading") Terraform enables us to see what it would do if we were to run it, without actually applying any changes... this is called a `plan` operation. This plan is then enacted (optionally) by an `apply`. #### Plan[​](#plan "Direct link to heading") 1. In a terminal which is able to execute the `tf` binary, `cd` to the `my-avax-project` directory that you renamed in step 3 of `Terraform Configuration`. 2. Execute the command `tf plan` 3. You should see a JSON output to the stdout of the terminal which lays out the operations that terraform will execute to apply the intended state. #### Apply[​](#apply "Direct link to heading") 1. In a terminal which is able to execute the `tf` binary, `cd` to the `my-avax-project` directory that you renamed in step 3 of `Terraform Configuration`. 2. Execute the command `tf apply` If you want to ensure that terraform does **exactly** what you saw in the `plan` output, you can optionally request for the `plan` output to be saved to a file to feed to `apply`. This is generally considered best practice in highly fluid environments where rapid change is occurring from multiple sources. Conclusion[​](#conclusion "Direct link to heading") --------------------------------------------------- Establishing CI/CD practices using tools such as GitHub and Terraform to manage your infrastructure assets is a great way to ensure base disaster recovery capabilities and to ensure you have a place to embed any tweaks you have to make operationally, removing the potential to miss them when you have to scale from 1 node to 10. Having an automated pipeline also gives you a place to build a bigger house...
what starts as your interest in building and managing a single AVAX node today can quickly change into you building an infrastructure operation for many different chains, working with multiple different team members. I hope this may have inspired you to take a leap into automation in this context! # Latitude (/docs/nodes/run-a-node/on-third-party-services/latitude) --- title: Latitude description: Learn how to run an Avalanche node on Latitude.sh. --- Introduction[​](#introduction "Direct link to heading") ------------------------------------------------------- This tutorial will guide you through setting up an Avalanche node on [Latitude.sh](https://latitude.sh/). Latitude.sh provides high-performance, lightning-fast bare metal servers to ensure that your node is highly secure, available, and accessible. To get started, you'll need: - A Latitude.sh account - A terminal with which to SSH into your Latitude.sh machine For instructions on creating an account and server with Latitude.sh, please reference their [GitHub tutorial](https://github.com/NottherealIllest/Latitude.sh-post/blob/main/avalanhe/avax-copy.md), or visit [this page](https://www.latitude.sh/dashboard/signup) to sign up and create your first project. This tutorial assumes your local machine has a Unix-style terminal. If you're on Windows, you'll have to adapt some of the commands used here. Configuring Your Server[​](#configuring-your-server "Direct link to heading") ----------------------------------------------------------------------------- ### Create a Latitude.sh Account[​](#create-a-latitudesh-account "Direct link to heading") At this point your account has been verified, and you have created a new project and deployed the server according to the instructions linked above.
### Access Your Server & Further Steps[​](#access-your-server--further-steps "Direct link to heading") All your Latitude.sh credentials are available by clicking the `server` under your project, and can be used to access your Latitude.sh machine from your local machine using a terminal. You will need to run the Avalanche node installer script directly in the server's terminal. After gaining access, we'll need to set up our Avalanche node. To do this, follow the instructions in [Set Up Avalanche Node With Installer](/docs/nodes/run-a-node/using-install-script/installing-avalanche-go) to install and run your node. Your AvalancheGo node should now be running and in the process of bootstrapping, which can take a few hours. To check if it's done, you can issue an API call using `curl`. The request is: ```bash curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"info.isBootstrapped", "params": { "chain":"X" } }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info ``` Once the node is finished bootstrapping, the response will be: ```json { "jsonrpc": "2.0", "result": { "isBootstrapped": true }, "id": 1 } ``` You can continue on, even if AvalancheGo isn't done bootstrapping. In order to make your node a validator, you'll need its node ID. To get it, run: ```bash curl -X POST --data '{ "jsonrpc": "2.0", "id": 1, "method": "info.getNodeID" }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info ``` The response contains the node ID. ```json { "jsonrpc": "2.0", "result": { "nodeID": "NodeID-KhDnAoZDW8iRJ3F26iQgK5xXVFMPcaYeu" }, "id": 1 } ``` In the above example the node ID is `NodeID-KhDnAoZDW8iRJ3F26iQgK5xXVFMPcaYeu`. AvalancheGo has other APIs, such as the [Health API](/docs/rpcs/other/health-rpc), that may be used to interact with the node. Some APIs are disabled by default.
To enable such APIs, modify the ExecStart section of `/etc/systemd/system/avalanchego.service` (created during the installation process) to include flags that enable these endpoints. Don't manually enable any APIs unless you have a reason to. Exit the SSH session by running `exit`. ### Upgrading Your Node[​](#upgrading-your-node "Direct link to heading") AvalancheGo is an ongoing project and there are regular version upgrades. Most upgrades are recommended but not required. Advance notice will be given for upgrades that are not backwards compatible. To update your node to the latest version, SSH into your server using a terminal and run the installer script again. ```bash ./avalanchego-installer.sh ``` Your machine is now running the newest AvalancheGo version. To see the status of the AvalancheGo service, run `sudo systemctl status avalanchego`. Wrap Up[​](#wrap-up "Direct link to heading") --------------------------------------------- That's it! You now have an AvalancheGo node running on a Latitude.sh machine. We recommend setting up [node monitoring](/docs/nodes/maintain/monitoring) for your AvalancheGo node. # Microsoft Azure (/docs/nodes/run-a-node/on-third-party-services/microsoft-azure) --- title: Microsoft Azure description: How to run an Avalanche node on Microsoft Azure. --- This document was written by a community member; some information may be out of date. Running a validator and staking with Avalanche provides extremely competitive rewards of between 9.69% and 11.54% depending on the length you stake for. The maximum rate is earned by staking for a year, whilst the lowest rate is for 14 days. There is also no slashing, so you don't need to worry about a hardware failure or a bug in the client causing you to lose part or all of your stake. Instead, with Avalanche you currently only need to maintain at least 80% uptime to receive rewards. If you fail to meet this requirement you don't get slashed, but you don't receive the rewards.
**You also do not need to put your private keys onto a node to begin validating on that node.** Even if someone breaks into your cloud environment and gains access to the node, the worst they can do is turn off the node. Not only does running a validator node enable you to receive rewards in AVAX, but later you will also be able to validate other Avalanche L1s in the ecosystem and receive rewards in their native tokens. Hardware requirements to run a validator are relatively modest: 8 CPU cores, 16 GB of RAM and 1 TB SSD. It also doesn't use enormous amounts of energy. Avalanche's [revolutionary consensus mechanism](/docs/primary-network/avalanche-consensus) is able to scale to millions of validators participating in consensus at once, offering unparalleled decentralisation. Currently the minimum amount required to stake to become a validator is 2,000 AVAX. Alternatively, validators can also charge a small fee to enable users to delegate their stake with them to help towards running costs. In this article we will step through the process of configuring a node on Microsoft Azure. This tutorial assumes no prior experience with Microsoft Azure and will go through each step with as few assumptions as possible. At the time of this article, spot pricing for a virtual machine with 2 Cores and 8 GB memory costs as little as $0.01060 per hour, which works out at about $92.86 a year, **a saving of more than 80% compared to normal pay-as-you-go prices.** In comparison, a virtual machine in AWS with 2 Cores and 4 GB Memory with spot pricing is around $462 a year.
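The yearly figures quoted in this article follow directly from the hourly spot rates: a year has 24 × 365 = 8,760 hours. A quick sanity check with `awk` (rates taken from the pricing discussion in this article):

```shell
# Yearly cost = hourly spot rate x 8,760 hours.
yearly() { awk -v rate="$1" 'BEGIN { printf "%.2f\n", rate * 8760 }'; }

yearly 0.01295   # North Europe D2s_v4 spot rate -> 113.44
yearly 0.01060   # East US D2s_v4 spot rate      -> 92.86

# Saving versus the ~$698.61/year pay-as-you-go price for the same VM.
awk 'BEGIN { printf "%.2f%%\n", (1 - 113.44/698.61) * 100 }'   # -> 83.76%
```

The same arithmetic applies to any region: multiply the hourly spot rate shown in the Azure pricing page by 8,760.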
Initial Subscription Configuration[​](#initial-subscription-configuration "Direct link to heading") --------------------------------------------------------------------------------------------------- ### Set up 2 Factor[​](#set-up-2-factor "Direct link to heading") First you will need a Microsoft Account. If you don't have one already, you will see an option to create one at the following link. If you already have one, make sure to set up 2 Factor authentication to secure your node by going to the following link, selecting "Two-step verification" and following the steps provided. [https://account.microsoft.com/security](https://account.microsoft.com/security) ![Image for post](/images/azure1.png) Once two factor has been configured, log into the Azure portal by going to [https://portal.azure.com](https://portal.azure.com/) and signing in with your Microsoft account. When you login you won't have a subscription, so we need to create one first. Select "Subscriptions" as highlighted below: ![Image for post](/images/azure2.png) Then select "+ Add" to add a new subscription. ![Image for post](/images/azure3.png) If you want to use Spot Instance VM Pricing (which will be considerably cheaper) you can't use a Free Trial account (you will receive an error upon validation), so **make sure to select Pay-As-You-Go.** ![Image for post](/images/azure4.png) Enter your billing details and confirm your identity as part of the sign-up process. When you get to Add technical support, select the without-support option (unless you want to pay extra for support) and press Next. ![Image for post](/images/azure5.png) Create a Virtual Machine[​](#create-a-virtual-machine "Direct link to heading") ------------------------------------------------------------------------------- Now that we have a subscription, we can create the Ubuntu Virtual Machine for our Avalanche Node.
Select the Icon in the top left for the Menu and choose "+ Create a resource" ![Image for post](/images/azure6.png) Select Ubuntu Server 18.04 LTS (this will normally be under the popular section, or alternatively search for it in the marketplace) ![Image for post](/images/azure7.png) This will take you to the Create a virtual machine page as shown below: ![Image for post](/images/azure8.png) First, enter a virtual machine name. This can be anything, but in my example I have called it Avalanche (this will also automatically change the resource group name to match). Then select a region from the drop-down list. Select one of the recommended ones in a region that you prefer, as these tend to be the larger ones with the most features enabled and cheaper prices. In this example I have selected North Europe. ![Image for post](/images/azure9.png) You have the option of using spot pricing to save significant amounts on running costs. Spot instances use a supply and demand market price structure. As demand for instances goes up, the price for the spot instance goes up. If there is insufficient capacity, then your VM will be turned off. The chances of this happening are incredibly low though, especially if you select the Capacity only option. Even in the unlikely event it does get turned off temporarily, you only need to maintain at least 80% uptime to receive the staking rewards, and there is no slashing implemented in Avalanche. Select Yes for Azure Spot instance, set Eviction type to Capacity Only and **make sure to set the eviction policy to Stop / Deallocate. This is very important, otherwise the VM will be deleted.** ![Image for post](/images/azure10.png) Choose "Select size" to change the Virtual Machine size, and from the menu select D2s\_v4 under the D-Series v4 selection (this size has 2 Cores, 8 GB Memory and enables Premium SSDs).
You can use F2s\_v2 instances instead, which have 2 Cores and 4 GB Memory and also enable Premium SSDs, but the spot price actually works out cheaper for the larger VM currently with spot instance prices. You can use [this link](https://azure.microsoft.com/en-us/pricing/details/virtual-machines/linux/) to view the prices across the different regions. ![Image for post](/images/azure11.png) Once you have selected the size of the Virtual Machine, select "View pricing history and compare prices in nearby regions" to see how the spot price has changed over the last 3 months, and whether it's cheaper to use a nearby region which may have more spare capacity. ![Image for post](/images/azure12.png) At the time of this article, pay-as-you-go pricing for D2s\_v4 in North Europe costs $0.07975 per hour, or around $698.61 a year. With spot pricing, the price falls to $0.01295 per hour, which works out at about $113.44 a year, **a saving of 83.76%!** There are some regions which are even cheaper; East US for example is $0.01060 per hour, or around $92.86 a year! ![Image for post](/images/azure13.png) Below you can see the price history of the VM over the last 3 months for North Europe and nearby regions. ![Image for post](/images/azure14.png) ### Cheaper Than Amazon AWS[​](#cheaper-than-amazon-aws "Direct link to heading") As a comparison, a c5.large instance costs $0.085 USD per hour on AWS. This totals ~$745 USD per year. Spot instances can save 62%, bringing that total down to $462. The next step is to change the username for the VM. To align with other Avalanche tutorials, change the username to Ubuntu. Otherwise you will need to change several commands later in this article and swap out Ubuntu with your new username. ![Image for post](/images/azure15.png) ### Disks[​](#disks "Direct link to heading") Select Next: Disks to then configure the disks for the instance.
There are 2 choices for disks: Premium SSD, which offers greater performance and where a 64 GB disk costs around $10 a month, or standard SSD, which offers lower performance and is around $5 a month. You also have to pay $0.002 per 10,000 transaction units (reads / writes and deletes) with the Standard SSD, whereas with Premium SSDs everything is included. Personally, I chose the Premium SSD for greater performance, but also because the disks are likely to be heavily used and so may even work out cheaper in the long run. Select Next: Networking to move onto the network configuration. ![Image for post](/images/azure16.png) ### Network Config[​](#network-config "Direct link to heading") You want to use a Static IP so that the public IP assigned to the node doesn't change in the event it stops. Under Public IP select "Create new" ![Image for post](/images/azure17.png) Then select "Static" as the Assignment type ![Image for post](/images/azure19.png) Then we need to configure the network security group to control inbound access to the Avalanche node. Select "Advanced" as the NIC network security group type and select "Create new" ![Image for post](/images/azure20.png) For security purposes you want to restrict who is able to remotely connect to your node. To do this you will first want to find out what your existing public IP is. This can be done by going to Google and searching for "what's my IP" ![Image for post](/images/azure21.png) It's likely that you have been assigned a dynamic public IP for your home, unless you have specifically requested a static one, and so your assigned public IP may change in the future. It's still recommended to restrict access to your current IP though; in the event your home IP changes and you are no longer able to remotely connect to the VM, you can just update the network security rules with your new public IP so you are able to connect again.
NOTE: If you need to change the network security group rules after deployment if your home IP has changed, search for "avalanche-nsg" and you can modify the rule for SSH and Port 9650 with the new IP. **Port 9651 needs to remain open to everyone** though as that's how it communicates with other Avalanche nodes. ![Image for post](/images/azure22.png) Now that you have your public IP select the default allow ssh rule on the left under inbound rules to modify it. Change Source from "Any" to "IP Addresses" and then enter in your Public IP address that you found from google in the Source IP address field. Change the Priority towards the bottom to 100 and then press Save. ![Image for post](/images/azure23.png) Then select "+ Add an inbound rule" to add another rule for RPC access, this should also be restricted to only your IP. Change Source to "IP Addresses" and enter in your public IP returned from google into the Source IP field. This time change the "Destination port ranges" field to 9650 and select "TCP" as the protocol. Change the priority to 110 and give it a name of "Avalanche\_RPC" and press Add. ![Image for post](/images/azure24.png) Select "+ Add an inbound rule" to add a final rule for the Avalanche Protocol so that other nodes can communicate with your node. This rule needs to be open to everyone so keep "Source" set to "Any." Change the Destination port range to "9651" and change the protocol to "TCP." Enter a priority of 120 and a name of Avalanche\_Protocol and press Add. ![Image for post](/images/azure25.png) The network security group should look like the below (albeit your public IP address will be different) and press OK. ![Image for post](/images/azure26.png) Leave the other settings as default and then press "Review + create" to create the Virtual machine. ![Image for post](/images/azure27.png) First it will perform a validation test. 
If you receive an error here, make sure you selected the Pay-As-You-Go subscription model and are not using the Free Trial subscription, as Spot instances are not available. Verify everything looks correct and press "Create" ![Image for post](/images/azure28.png) You should then receive a prompt asking you to generate a new key pair to connect to your virtual machine. Select "Download private key and create resource" to download the private key to your PC. ![Image for post](/images/azure29.png) Once your deployment has finished, select "Go to resource" ![Image for post](/images/azure30.png)

Change the Provisioned Disk Size[​](#change-the-provisioned-disk-size "Direct link to heading")
-----------------------------------------------------------------------------------------------

By default, the Ubuntu VM will be provisioned with a 30 GB Premium SSD. You should increase this to 250 GB to allow for database growth. ![Image for post](/images/azure31.png) To change the disk size, the VM needs to be stopped and deallocated. Select "Stop" and wait for the status to show deallocated, then select "Disks" on the left. ![Image for post](/images/azure32.png) Select the disk name that's currently provisioned to modify it ![Image for post](/images/azure33.png) Select "Size + performance" on the left under settings, change the size to 250 GB, and press "Resize" ![Image for post](/images/azure34.png) Doing this now will also extend the partition automatically within Ubuntu. To go back to the virtual machine overview page, select Avalanche in the navigation setting. ![Image for post](/images/azure35.png) Then start the VM ![Image for post](/images/azure36.png)

Connect to the Avalanche Node[​](#connect-to-the-avalanche-node "Direct link to heading")
-----------------------------------------------------------------------------------------

The following instructions show how to connect to the virtual machine from a Windows 10 machine.
For instructions on how to connect from an Ubuntu machine, see the [AWS tutorial](/docs/nodes/run-a-node/on-third-party-services/amazon-web-services). On your local PC, create a folder on the root of the C: drive called Avalanche and then move the Avalanche\_key.pem file you downloaded before into the folder. Then right-click the file and select Properties. Go to the Security tab and select "Advanced" at the bottom ![Image for post](/images/azure37.png) Select "Disable inheritance" and then "Remove all inherited permissions from this object" to remove all existing permissions on that file. ![Image for post](/images/azure38.png) Then select "Add" to add a new permission and choose "Select a principal" at the top. In the pop-up box, enter the user account that you use to log into your machine. In this example I log on with a local user called Seq; you may have a Microsoft account that you use to log in, so use whatever account you log in to your PC with, press "Check Names" to verify it (it should become underlined), and press OK. ![Image for post](/images/azure39.png) Then from the permissions section make sure only "Read & Execute" and "Read" are selected and press OK. ![Image for post](/images/azure40.png) It should look something like the below, except with a different PC name / user account. This means the key file can't be modified or accessed by any other accounts on this machine, so they can't access your Avalanche node. ![Image for post](/images/azure41.png)

### Find your Avalanche Node Public IP[​](#find-your-avalanche-node-public-ip "Direct link to heading")

From the Azure Portal, make a note of the static public IP address that has been assigned to your node. ![Image for post](/images/azure42.png) To log onto the Avalanche node, open command prompt by searching for `cmd` and selecting "Command Prompt" on your Windows 10 machine.
![Image for post](/images/azure43.png) Then use the following command, replacing EnterYourAzureIPHere with the static IP address shown on the Azure portal:

```
ssh -i C:\Avalanche\Avalanche_key.pem ubuntu@EnterYourAzureIPHere
```

The first time you connect, you will receive a prompt asking whether to continue; enter yes. ![Image for post](/images/azure44.png) You should now be connected to your node. ![Image for post](/images/azure45.png) The following section is taken from Colin's excellent tutorial for [configuring an Avalanche Node on Amazon's AWS](/docs/nodes/run-a-node/on-third-party-services/amazon-web-services).

### Update Linux with Security Patches[​](#update-linux-with-security-patches "Direct link to heading")

Now that we are on our node, it's a good idea to update it to the latest packages. To do this, run the following commands, one at a time, in order:

```
sudo apt update
sudo apt upgrade -y
sudo reboot
```

![Image for post](/images/azure46.png) This will bring our instance up to date with the latest security patches for our operating system. This will also reboot the node. We'll give the node a minute or two to boot back up, then log in again, same as before.

### Set up the Avalanche Node[​](#set-up-the-avalanche-node "Direct link to heading")

Now we'll need to set up our Avalanche node. To do this, follow the [Set Up Avalanche Node With Installer](/docs/nodes/run-a-node/using-install-script/installing-avalanche-go) tutorial which automates the installation process. You will need the "IPv4 Public IP" copied from the Azure Portal we set up earlier. Once the installation is complete, our node should now be bootstrapping! We can run the following command to take a peek at the latest status of the AvalancheGo node:

```
sudo systemctl status avalanchego
```

To check the status of the bootstrap, we'll need to make a request to the local RPC using `curl`.
This request is as follows:

```
curl -X POST --data '{
    "jsonrpc":"2.0",
    "id"     :1,
    "method" :"info.isBootstrapped",
    "params": {
        "chain":"X"
    }
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info
```

The node can take some time (upward of an hour at the time of writing) to bootstrap. Bootstrapping means that the node downloads and verifies the history of the chains. Give this some time. Once the node is finished bootstrapping, the response will be:

```
{
    "jsonrpc": "2.0",
    "result": {
        "isBootstrapped": true
    },
    "id": 1
}
```

We can always use `sudo systemctl status avalanchego` to peek at the latest status of our service, as before.

### Get Your NodeID[​](#get-your-nodeid "Direct link to heading")

We absolutely must get our NodeID if we plan to do any validating on this node. This is retrieved from the RPC as well. We call the following curl command to get our NodeID.

```
curl -X POST --data '{
    "jsonrpc":"2.0",
    "id"     :1,
    "method" :"info.getNodeID"
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info
```

If all is well, the response should look something like:

```
{"jsonrpc":"2.0","result":{"nodeID":"NodeID-Lve2PzuCvXZrqn8Stqwy9vWZux6VyGUCR"},"id":1}
```

The portion that says "NodeID-Lve2PzuCvXZrqn8Stqwy9vWZux6VyGUCR" is our NodeID, the entire thing. Copy it and keep it in your notes. There's nothing confidential or secure about this value, but it's an absolute must for when we submit this node to be a validator.

### Backup Your Staking Keys[​](#backup-your-staking-keys "Direct link to heading")

The last thing that should be done is backing up our staking keys in the unlikely event that our instance is corrupted or terminated. It's just good practice for us to keep these keys.
To back them up, we use the following command: ``` scp -i C:\Avalanche\avalanche_key.pem -r ubuntu@EnterYourAzureIPHere:/home/ubuntu/.avalanchego/staking C:\Avalanche ``` As before, we'll need to replace "EnterYourAzureIPHere" with the appropriate value that we retrieved. This backs up our staking key and staking certificate into the C:\\Avalanche folder we created before. ![Image for post](/images/azure47.png) # Installing AvalancheGo (/docs/nodes/run-a-node/using-install-script/installing-avalanche-go) --- title: Installing AvalancheGo description: Learn how to install AvalancheGo on your system. --- ## Running the Script So, now that you prepared your system and have the info ready, let's get to it. To download and run the script, enter the following in the terminal: ```bash wget -nd -m https://raw.githubusercontent.com/ava-labs/avalanche-docs/master/scripts/avalanchego-installer.sh;\ chmod 755 avalanchego-installer.sh;\ ./avalanchego-installer.sh ``` And we're off! The output should look something like this: ```bash AvalancheGo installer --------------------- Preparing environment... Found arm64 architecture... Looking for the latest arm64 build... 
Will attempt to download: https://github.com/ava-labs/avalanchego/releases/download/v1.1.1/avalanchego-linux-arm64-v1.1.1.tar.gz avalanchego-linux-arm64-v1.1.1.tar.gz 100%[=========================================================================>] 29.83M 75.8MB/s in 0.4s 2020-12-28 14:57:47 URL:https://github-production-release-asset-2e65be.s3.amazonaws.com/246387644/f4d27b00-4161-11eb-8fb2-156a992fd2c8?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20201228%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20201228T145747Z&X-Amz-Expires=300&X-Amz-Signature=ea838877f39ae940a37a076137c4c2689494c7e683cb95a5a4714c062e6ba018&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=246387644&response-content-disposition=attachment%3B%20filename%3Davalanchego-linux-arm64-v1.1.1.tar.gz&response-content-type=application%2Foctet-stream [31283052/31283052] -> "avalanchego-linux-arm64-v1.1.1.tar.gz" [1] Unpacking node files... avalanchego-v1.1.1/plugins/ avalanchego-v1.1.1/plugins/evm avalanchego-v1.1.1/avalanchego Node files unpacked into /home/ubuntu/avalanche-node ``` And then the script will prompt you for information about the network environment: ```bash To complete the setup some networking information is needed. Where is the node installed: 1) residential network (dynamic IP) 2) cloud provider (static IP) Enter your connection type [1,2]: ``` Enter `1` if you have dynamic IP, and `2` if you have a static IP. If you are on a static IP, it will try to auto-detect the IP and ask for confirmation. ```bash Detected '3.15.152.14' as your public IP. Is this correct? [y,n]: ``` Confirm with `y`, or `n` if the detected IP is wrong (or empty), and then enter the correct IP at the next prompt. Next, you have to set up RPC port access for your node. Those are used to query the node for its internal state, to send commands to the node, or to interact with the platform and its chains (sending transactions, for example). 
You will be prompted:

```bash
RPC port should be public (this is a public API node) or private (this is a validator)? [public, private]:
```

- `private`: this setting only allows RPC requests from the node machine itself.
- `public`: this setting exposes the RPC port to all network interfaces. As this is a sensitive setting, you will be asked to confirm your choice if you select `public`.

Please read the following note carefully: If you choose to allow RPC requests on any network interface, you will need to set up a firewall that only lets through RPC requests from known IP addresses; otherwise your node will be accessible to anyone and might be overwhelmed by RPC calls from malicious actors! If you do not plan to use your node to send RPC calls remotely, enter `private`. The script will then prompt you to choose whether to turn state sync on or off:

```bash
Do you want state sync bootstrapping to be turned on or off? [on, off]:
```

Turning state sync on will greatly increase the speed of bootstrapping, but will sync only the current network state. If you intend to use your node for accessing historical data (archival node) you should select `off`. Otherwise, select `on`. Validators can be bootstrapped with state sync turned on. The script will then continue with system service creation and finish with starting the service.

```bash
Created symlink /etc/systemd/system/multi-user.target.wants/avalanchego.service → /etc/systemd/system/avalanchego.service.

Done!

Your node should now be bootstrapping.
Node configuration file is /home/ubuntu/.avalanchego/configs/node.json C-Chain configuration file is /home/ubuntu/.avalanchego/configs/chains/C/config.json Plugin directory, for storing subnet VM binaries, is /home/ubuntu/.avalanchego/plugins To check that the service is running use the following command (q to exit): sudo systemctl status avalanchego To follow the log use (ctrl-c to stop): sudo journalctl -u avalanchego -f Reach us over on https://discord.gg/avax if you're having problems. ``` The script is finished, and you should see the system prompt again. ## Post Installation AvalancheGo should be running in the background as a service. You can check that it's running with: ```bash sudo systemctl status avalanchego ``` Below is an example of what the node's latest logs should look like: ```bash ● avalanchego.service - AvalancheGo systemd service Loaded: loaded (/etc/systemd/system/avalanchego.service; enabled; vendor preset: enabled) Active: active (running) since Tue 2021-01-05 10:38:21 UTC; 51s ago Main PID: 2142 (avalanchego) Tasks: 8 (limit: 4495) Memory: 223.0M CGroup: /system.slice/avalanchego.service └─2142 /home/ubuntu/avalanche-node/avalanchego --public-ip-resolution-service=opendns --http-host= Jan 05 10:38:45 ip-172-31-30-64 avalanchego[2142]: INFO [01-05|10:38:45]
avalanchego/vms/platformvm/vm.go#322: initializing last accepted block as 2FUFPVPxbTpKNn39moGSzsmGroYES4NZRdw3mJgNvMkMiMHJ9e
Jan 05 10:38:45 ip-172-31-30-64 avalanchego[2142]: INFO [01-05|10:38:45] avalanchego/snow/engine/snowman/transitive.go#58: initializing consensus engine
Jan 05 10:38:45 ip-172-31-30-64 avalanchego[2142]: INFO [01-05|10:38:45] avalanchego/api/server.go#143: adding route /ext/bc/11111111111111111111111111111111LpoYY
Jan 05 10:38:45 ip-172-31-30-64 avalanchego[2142]: INFO [01-05|10:38:45] avalanchego/api/server.go#88: HTTP API server listening on ":9650"
Jan 05 10:38:58 ip-172-31-30-64 avalanchego[2142]: INFO [01-05|10:38:58] avalanchego/snow/engine/common/bootstrapper.go#185: Bootstrapping started syncing with 1 vertices in the accepted frontier
Jan 05 10:39:02 ip-172-31-30-64 avalanchego[2142]: INFO [01-05|10:39:02] avalanchego/snow/engine/snowman/bootstrap/bootstrapper.go#210: fetched 2500 blocks
Jan 05 10:39:04 ip-172-31-30-64 avalanchego[2142]: INFO [01-05|10:39:04] avalanchego/snow/engine/snowman/bootstrap/bootstrapper.go#210: fetched 5000 blocks
Jan 05 10:39:06 ip-172-31-30-64 avalanchego[2142]: INFO [01-05|10:39:06] avalanchego/snow/engine/snowman/bootstrap/bootstrapper.go#210: fetched 7500 blocks
Jan 05 10:39:09 ip-172-31-30-64 avalanchego[2142]: INFO [01-05|10:39:09] avalanchego/snow/engine/snowman/bootstrap/bootstrapper.go#210: fetched 10000 blocks
Jan 05 10:39:11 ip-172-31-30-64 avalanchego[2142]: INFO [01-05|10:39:11]
avalanchego/snow/engine/snowman/bootstrap/bootstrapper.go#210: fetched 12500 blocks ``` Note the `active (running)` which indicates the service is running OK. You may need to press `q` to return to the command prompt. To find out your NodeID, which is used to identify your node to the network, run the following command: ```bash sudo journalctl -u avalanchego | grep "NodeID" ``` It will produce output like: ```bash Jan 05 10:38:38 ip-172-31-30-64 avalanchego[2142]: INFO [01-05|10:38:38] avalanchego/node/node.go#428: Set node's ID to 6seStrauyCnVV7NEVwRbfaT9B6EnXEzfY ``` Prepend `NodeID-` to the value to get, for example, `NodeID-6seStrauyCnVV7NEVwRbfaT9B6EnXEzfY`. Store that; it will be needed for staking or looking up your node. Your node should be in the process of bootstrapping now. You can monitor the progress by issuing the following command: ```bash sudo journalctl -u avalanchego -f ``` Press `ctrl+C` when you wish to stop reading node output. # Managing AvalancheGo (/docs/nodes/run-a-node/using-install-script/managing-avalanche-go) --- title: Managing AvalancheGo description: Learn how to start, stop and upgrade your AvalancheGo node --- ## Stop Your Node To stop AvalancheGo, run: ```bash sudo systemctl stop avalanchego ``` ## Start Your Node To start your node again, run: ```bash sudo systemctl start avalanchego ``` ## Upgrade Your Node AvalancheGo is an ongoing project and there are regular version upgrades. Most upgrades are recommended but not required. Advance notice will be given for upgrades that are not backwards compatible. When a new version of the node is released, you will notice log lines like: ```bash Jan 08 10:26:45 ip-172-31-16-229 avalanchego[6335]: INFO [01-08|10:26:45] avalanchego/network/peer.go#526: beacon 9CkG9MBNavnw7EVSRsuFr7ws9gascDQy3 attempting to connect with newer version avalanche/1.1.1. 
You may want to update your client
```

It is recommended to always upgrade to the latest version, because new versions bring bug fixes, new features, and improvements. To upgrade your node, just run the installer script again:

```bash
./avalanchego-installer.sh
```

It will detect that you already have AvalancheGo installed:

```bash
AvalancheGo installer
---------------------
Preparing environment...
Found 64bit Intel/AMD architecture...
Found AvalancheGo systemd service already installed, switching to upgrade mode.
Stopping service...
```

It will then upgrade your node to the latest version, and after it's done, start the node back up and print out the information about the latest version:

```bash
Node upgraded, starting service...

New node version:
avalanche/1.1.1 [network=mainnet, database=v1.0.0, commit=f76f1fd5f99736cf468413bbac158d6626f712d2]

Done!
```

# Node Config and Maintenance (/docs/nodes/run-a-node/using-install-script/node-config-maintenance)

---
title: Node Config and Maintenance
description: Advanced options for configuring and maintaining your AvalancheGo node.
---

## Advanced Node Configuration

Without any additional arguments, the script installs the node in the most common configuration. But the script also allows various advanced options to be configured via command line prompts.
Following is a list of the advanced options and their usage:

- `admin` - the [Admin API](/docs/rpcs/other/admin-rpc) will be enabled
- `archival` - disables database pruning and preserves the complete transaction history
- `state-sync` - if `on`, state-sync is used for the C-Chain; if `off`, regular transaction replay is used to bootstrap; state-sync is much faster, but has no historical data
- `db-dir` - use to provide the full path to the location where the database will be stored
- `fuji` - the node will connect to the Fuji testnet instead of Mainnet
- `index` - the [Index API](/docs/rpcs/other/index-rpc) will be enabled
- `ip` - use the `dynamic` or `static` arguments, or enter a desired IP directly to be used as the public IP the node will advertise to the network
- `rpc` - use the `any` or `local` argument to select any or the local network interface to be used to listen for RPC calls
- `version` - install a specific node version, instead of the latest. See [here](#using-a-previous-version) for usage.

Configuring the `index` and `archival` options on an existing node will require a fresh bootstrap to recreate the database. Complete script usage can be displayed by entering:

```bash
./avalanchego-installer.sh --help
```

### Unattended Installation[​](#unattended-installation "Direct link to heading")

If you want to use the script in an automated environment where you cannot enter the data at the prompts, you must provide at least the `rpc` and `ip` options.
For example:

```bash
./avalanchego-installer.sh --ip 1.2.3.4 --rpc local
```

### Usage Examples[​](#usage-examples "Direct link to heading")

- To run a Fuji node with indexing enabled and an autodetected static IP:

```bash
./avalanchego-installer.sh --fuji --ip static --index
```

- To run an archival Mainnet node with a dynamic IP and the database located at `/home/node/db`:

```bash
./avalanchego-installer.sh --archival --ip dynamic --db-dir /home/node/db
```

- To use C-Chain state-sync to quickly bootstrap a Mainnet node, with a dynamic IP and local RPC only:

```bash
./avalanchego-installer.sh --state-sync on --ip dynamic --rpc local
```

- To reinstall the node using node version 1.7.10, a specific IP, and local RPC only:

```bash
./avalanchego-installer.sh --reinstall --ip 1.2.3.4 --version v1.7.10 --rpc local
```

Node Configuration[​](#node-configuration "Direct link to heading")
-------------------------------------------------------------------

The file that configures node operation is `~/.avalanchego/configs/node.json`. You can edit it to add or change configuration options. The documentation of configuration options can be found [here](/docs/nodes/configure/configs-flags). The configuration may look like this:

```json
{
  "public-ip-resolution-service": "opendns",
  "http-host": ""
}
```

Note that the configuration file needs to be a properly formatted `JSON` file, so switches are formatted differently than they would be on the command line. Don't enter options in their command-line form, like `--public-ip-resolution-service=opendns`; use the JSON form shown in the example above. The script also creates an empty C-Chain config file, located at `~/.avalanchego/configs/chains/C/config.json`. By editing that file, you can configure the C-Chain, as described in detail [here](/docs/nodes/configure/configs-flags).
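As an illustration, an archival-style C-Chain setup could disable pruning in that file. This is a sketch, not a complete configuration; `pruning-enabled` is a C-Chain (coreth) option, but check the configuration reference linked above for the authoritative key names before relying on it:

```json
{
  "pruning-enabled": false
}
```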
Using a Previous Version[​](#using-a-previous-version "Direct link to heading") ------------------------------------------------------------------------------- The installer script can also be used to install a version of AvalancheGo other than the latest version. To see a list of available versions for installation, run: ```bash ./avalanchego-installer.sh --list ``` It will print out a list, something like: ```bash AvalancheGo installer --------------------- Available versions: v1.3.2 v1.3.1 v1.3.0 v1.2.4-arm-fix v1.2.4 v1.2.3-signed v1.2.3 v1.2.2 v1.2.1 v1.2.0 ``` To install a specific version, run the script with `--version` followed by the tag of the version. For example: ```bash ./avalanchego-installer.sh --version v1.3.1 ``` Note that not all AvalancheGo versions are compatible. You should generally run the latest version. Running a version other than latest may lead to your node not working properly and, for validators, not receiving a staking reward. Thanks to community member [Jean Zundel](https://github.com/jzu) for the inspiration and help implementing support for installing non-latest node versions. Reinstall and Script Update[​](#reinstall-and-script-update "Direct link to heading") ------------------------------------------------------------------------------------- The installer script gets updated from time to time, with new features and capabilities added. To take advantage of new features or to recover from modifications that made the node fail, you may want to reinstall the node. To do that, fetch the latest version of the script from the web with: ```bash wget -nd -m https://raw.githubusercontent.com/ava-labs/builders-hub/master/scripts/avalanchego-installer.sh ``` After the script has updated, run it again with the `--reinstall` config flag: ```bash ./avalanchego-installer.sh --reinstall ``` This will delete the existing service file, and run the installer from scratch, like it was started for the first time. 
Note that the database and NodeID will be left intact. Removing the Node Installation[​](#removing-the-node-installation "Direct link to heading") ------------------------------------------------------------------------------------------- If you want to remove the node installation from the machine, you can run the script with the `--remove` option, like this: ```bash ./avalanchego-installer.sh --remove ``` This will remove the service, service definition file and node binaries. It will not remove the working directory, node ID definition or the node database. To remove those as well, you can type: Please note that this is irreversible and the database and node ID will be deleted! What Next?[​](#what-next "Direct link to heading") -------------------------------------------------- That's it, you're running an AvalancheGo node! Congratulations! Let us know you did it on our [X](https://x.com/avax), [Telegram](https://t.me/avalancheavax) or [Reddit](https://www.reddit.com/r/Avax/)! If you're on a residential network (dynamic IP), don't forget to set up port forwarding. If you're on a cloud service provider, you're good to go. Now you can [interact with your node](/docs/rpcs/other/guides/issuing-api-calls), [stake your tokens](/docs/primary-network/validate/what-is-staking), or level up your installation by setting up [node monitoring](/docs/nodes/maintain/monitoring) to get a better insight into what your node is doing. Also, you might want to use our [Postman Collection](/docs/tooling/avalanche-postman) to more easily issue commands to your node. Finally, if you haven't already, it is a good idea to [back up](/docs/nodes/maintain/backup-restore) important files in case you ever need to restore your node to a different machine. If you have any questions, or need help, feel free to contact us on our [Discord](https://chat.avalabs.org/) server. 
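As a quick check that your node is alive and syncing, the `info.isBootstrapped` API call can be wrapped in a small shell function. This is a sketch assuming the default RPC endpoint at `127.0.0.1:9650`; adjust the address if you changed the API port:

```shell
# Query info.isBootstrapped on the local RPC and report the result.
# Assumes the default API port 9650.
is_bootstrapped() {
  curl -s --max-time 5 -X POST --data '{"jsonrpc":"2.0","id":1,"method":"info.isBootstrapped","params":{"chain":"'"$1"'"}}' \
    -H 'content-type:application/json;' 127.0.0.1:9650/ext/info \
    | grep -q '"isBootstrapped": *true'
}

if is_bootstrapped X; then
  echo "X-Chain is bootstrapped"
else
  echo "X-Chain is still bootstrapping (or the node is not reachable)"
fi
```

Put the call in a loop with a `sleep` between attempts if you want to wait until bootstrapping finishes.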
# Preparing Your Environment (/docs/nodes/run-a-node/using-install-script/preparing-environment)

---
title: Preparing Your Environment
description: Learn how to prepare your environment before using the install script.
---

We have a shell (bash) script that installs AvalancheGo on your computer. This script sets up a full, running node in a matter of minutes with minimal user input required. The script can also be used for unattended, automated installs. This install script assumes:

- AvalancheGo is not running and not already installed as a service
- The user running the script has superuser privileges (can run `sudo`)

Environment Considerations[​](#environment-considerations "Direct link to heading")
-----------------------------------------------------------------------------------

If you run a different flavor of Linux, the script might not work as intended. It assumes `systemd` is used to run system services. Other Linux flavors might use something else, or might have files in different places than is assumed by the script. It will probably work on any distribution that uses `systemd`, but it has been developed for and tested on Ubuntu. If you have a node already running on the computer, stop it before running the script. The script won't touch the node's working directory, so you won't need to bootstrap the node again.

### Node Running from Terminal[​](#node-running-from-terminal "Direct link to heading")

If your node is running in a terminal, stop it by pressing `ctrl+C`.

### Node Running as a Service[​](#node-running-as-a-service "Direct link to heading")

If your node is already running as a service, then you probably don't need this script. You're good to go.

### Node Running in the Background[​](#node-running-in-the-background "Direct link to heading")

If your node is running in the background (by running with `nohup`, for example), then find the process running the node by running `ps aux | grep avalanche`.
This will produce output like:

```bash
ubuntu    6834  0.0  0.0   2828   676 pts/1    S+   19:54   0:00 grep avalanche
ubuntu    2630 26.1  9.4 2459236 753316 ?      Sl   Dec02 1220:52 /home/ubuntu/build/avalanchego
```

Look for the line that doesn't have `grep` on it. In this example, that is the second line. It shows information about your node. Note the process id, in this case `2630`. Stop the node by running `kill -2 2630`.

### Node Working Files[​](#node-working-files "Direct link to heading")

If you previously ran an AvalancheGo node on this computer, you will have local node files stored in the `$HOME/.avalanchego` directory. Those files will not be disturbed, and the node set up by the script will continue operation with the same identity and state it had before. That being said, for your node's security, back up the `staker.crt` and `staker.key` files, found in `$HOME/.avalanchego/staking`, and store them somewhere secure. You can use those files to recreate your node on a different computer if you ever need to. Check out this [tutorial](/docs/nodes/maintain/backup-restore) for the backup and restore procedure.

Networking Considerations[​](#networking-considerations "Direct link to heading")
---------------------------------------------------------------------------------

To run successfully, AvalancheGo needs to accept connections from the Internet on network port `9651`. Before you proceed with the installation, you need to determine the networking environment your node will run in.

### Running on a Cloud Provider[​](#running-on-a-cloud-provider "Direct link to heading")

If your node is running on a cloud provider computer instance, it will have a static IP. Find out what that static IP is, or set it up if you didn't already. The script will try to find out the IP by itself, but that might not work in all environments, so you will need to check the IP or enter it yourself.
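If you want to double-check the autodetected address, you can ask an external service from the shell. A minimal sketch; `checkip.amazonaws.com` is one of several public "what is my IP" endpoints, and any equivalent service will do:

```shell
# Print this machine's public IP as seen from the internet.
# --max-time keeps the check from hanging when there is no outbound access.
curl -s --max-time 10 https://checkip.amazonaws.com
```

Compare the output with the IP the installer detects before confirming it.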
### Running on a Home Connection[​](#running-on-a-home-connection "Direct link to heading")

If you're running a node on a computer that is on a residential internet connection, you have a dynamic IP; that is, your IP may change periodically. The install script will configure the node appropriately for that situation. But, for a home connection, you will need to set up inbound port forwarding of port `9651` from the internet to the computer the node is installed on. As there are too many router models and configurations to cover, we cannot provide instructions on what exactly to do, but there are online guides to be found (like [this](https://www.noip.com/support/knowledgebase/general-port-forwarding-guide/), or [this](https://www.howtogeek.com/66214/how-to-forward-ports-on-your-router/)), and your service provider's support might help too. Please note that a fully connected Avalanche node maintains and communicates over a couple of thousand live TCP connections. For some low-powered and older home routers that might be too much to handle. If that is the case, you may experience lagging on other computers connected to the same router, the node getting benched, failing to sync, and similar issues.

# Banff Changes (/docs/rpcs/other/guides/banff-changes)

---
title: Banff Changes
description: This document specifies the changes in Avalanche “Banff”, which was released in AvalancheGo v1.9.x.
---

Block Changes[​](#block-changes "Direct link to heading")
---------------------------------------------------------

### Apricot[​](#apricot "Direct link to heading")

Apricot allows the following block types with the following content:

- _Standard Blocks_ may contain multiple transactions of the following types:
  - CreateChainTx
  - CreateSubnetTx
  - ImportTx
  - ExportTx
- _Proposal Blocks_ may contain a single transaction of the following types:
  - AddValidatorTx
  - AddDelegatorTx
  - AddSubnetValidatorTx
  - RewardValidatorTx
  - AdvanceTimeTx
- _Options Blocks_, that is, _Commit Block_ and _Abort Block_, do not contain any transactions.

Each block has a header containing:

- ParentID
- Height

### Banff[​](#banff "Direct link to heading")

Banff allows the following block types with the following content:

- _Standard Blocks_ may contain multiple transactions of the following types:
  - CreateChainTx
  - CreateSubnetTx
  - ImportTx
  - ExportTx
  - AddValidatorTx
  - AddDelegatorTx
  - AddSubnetValidatorTx
  - _RemoveSubnetValidatorTx_
  - _TransformSubnetTx_
  - _AddPermissionlessValidatorTx_
  - _AddPermissionlessDelegatorTx_
- _Proposal Blocks_ may contain a single transaction of the following types:
  - RewardValidatorTx
- _Options blocks_, that is, _Commit Block_ and _Abort Block_, do not contain any transactions.

Note that each block has a header containing:

- ParentID
- Height
- _Time_

So the main differences with respect to Apricot are:

- _AddValidatorTx_, _AddDelegatorTx_, and _AddSubnetValidatorTx_ are included in Standard Blocks rather than Proposal Blocks, so they no longer need to be voted on (that is, followed by a Commit/Abort Block).
- New transaction types _RemoveSubnetValidatorTx_, _TransformSubnetTx_, _AddPermissionlessValidatorTx_, and _AddPermissionlessDelegatorTx_ have been added to Standard Blocks.
- The block timestamp is explicitly serialized into the block header, to allow the chain time to be updated.
### New Transactions[​](#new-transactions "Direct link to heading")

#### RemoveSubnetValidatorTx[​](#removesubnetvalidatortx "Direct link to heading")

```
type RemoveSubnetValidatorTx struct {
	BaseTx `serialize:"true"`

	// The node to remove from the Avalanche L1.
	NodeID ids.NodeID `serialize:"true" json:"nodeID"`

	// The Avalanche L1 to remove the node from.
	Subnet ids.ID `serialize:"true" json:"subnet"`

	// Proves that the issuer has the right to remove the node from the Avalanche L1.
	SubnetAuth verify.Verifiable `serialize:"true" json:"subnetAuthorization"`
}
```

#### TransformSubnetTx[​](#transformsubnettx "Direct link to heading")

```
type TransformSubnetTx struct {
	// Metadata, inputs and outputs
	BaseTx `serialize:"true"`

	// ID of the Subnet to transform
	// Restrictions:
	// - Must not be the Primary Network ID
	Subnet ids.ID `serialize:"true" json:"subnetID"`

	// Asset to use when staking on the Avalanche L1
	// Restrictions:
	// - Must not be the Empty ID
	// - Must not be the AVAX ID
	AssetID ids.ID `serialize:"true" json:"assetID"`

	// Amount to initially specify as the current supply
	// Restrictions:
	// - Must be > 0
	InitialSupply uint64 `serialize:"true" json:"initialSupply"`

	// Amount to specify as the maximum token supply
	// Restrictions:
	// - Must be >= [InitialSupply]
	MaximumSupply uint64 `serialize:"true" json:"maximumSupply"`

	// MinConsumptionRate is the rate to allocate funds if the validator's stake
	// duration is 0
	MinConsumptionRate uint64 `serialize:"true" json:"minConsumptionRate"`

	// MaxConsumptionRate is the rate to allocate funds if the validator's stake
	// duration is equal to the minting period
	// Restrictions:
	// - Must be >= [MinConsumptionRate]
	// - Must be <= [reward.PercentDenominator]
	MaxConsumptionRate uint64 `serialize:"true" json:"maxConsumptionRate"`

	// MinValidatorStake is the minimum amount of funds required to become a
	// validator.
	// Restrictions:
	// - Must be > 0
	// - Must be <= [InitialSupply]
	MinValidatorStake uint64 `serialize:"true" json:"minValidatorStake"`

	// MaxValidatorStake is the maximum amount of funds a single validator can
	// be allocated, including delegated funds.
	// Restrictions:
	// - Must be >= [MinValidatorStake]
	// - Must be <= [MaximumSupply]
	MaxValidatorStake uint64 `serialize:"true" json:"maxValidatorStake"`

	// MinStakeDuration is the minimum number of seconds a staker can stake for.
	// Restrictions:
	// - Must be > 0
	MinStakeDuration uint32 `serialize:"true" json:"minStakeDuration"`

	// MaxStakeDuration is the maximum number of seconds a staker can stake for.
	// Restrictions:
	// - Must be >= [MinStakeDuration]
	// - Must be <= [GlobalMaxStakeDuration]
	MaxStakeDuration uint32 `serialize:"true" json:"maxStakeDuration"`

	// MinDelegationFee is the minimum percentage a validator must charge a
	// delegator for delegating.
	// Restrictions:
	// - Must be <= [reward.PercentDenominator]
	MinDelegationFee uint32 `serialize:"true" json:"minDelegationFee"`

	// MinDelegatorStake is the minimum amount of funds required to become a
	// delegator.
	// Restrictions:
	// - Must be > 0
	MinDelegatorStake uint64 `serialize:"true" json:"minDelegatorStake"`

	// MaxValidatorWeightFactor is the factor which calculates the maximum
	// amount of delegation a validator can receive.
	// Note: a value of 1 effectively disables delegation.
	// Restrictions:
	// - Must be > 0
	MaxValidatorWeightFactor byte `serialize:"true" json:"maxValidatorWeightFactor"`

	// UptimeRequirement is the minimum percentage a validator must be online
	// and responsive to receive a reward.
	// Restrictions:
	// - Must be <= [reward.PercentDenominator]
	UptimeRequirement uint32 `serialize:"true" json:"uptimeRequirement"`

	// Authorizes this transformation
	SubnetAuth verify.Verifiable `serialize:"true" json:"subnetAuthorization"`
}
```

#### AddPermissionlessValidatorTx[​](#addpermissionlessvalidatortx "Direct link to heading")

```
type AddPermissionlessValidatorTx struct {
	// Metadata, inputs and outputs
	BaseTx `serialize:"true"`

	// Describes the validator
	Validator validator.Validator `serialize:"true" json:"validator"`

	// ID of the Avalanche L1 this validator is validating
	Subnet ids.ID `serialize:"true" json:"subnet"`

	// Where to send staked tokens when done validating
	StakeOuts []*avax.TransferableOutput `serialize:"true" json:"stake"`

	// Where to send validation rewards when done validating
	ValidatorRewardsOwner fx.Owner `serialize:"true" json:"validationRewardsOwner"`

	// Where to send delegation rewards when done validating
	DelegatorRewardsOwner fx.Owner `serialize:"true" json:"delegationRewardsOwner"`

	// Fee this validator charges delegators as a percentage, times 10,000
	// For example, if this validator has DelegationShares=300,000 then they
	// take 30% of rewards from delegators
	DelegationShares uint32 `serialize:"true" json:"shares"`
}
```

#### AddPermissionlessDelegatorTx[​](#addpermissionlessdelegatortx "Direct link to heading")

```
type AddPermissionlessDelegatorTx struct {
	// Metadata, inputs and outputs
	BaseTx `serialize:"true"`

	// Describes the validator
	Validator validator.Validator `serialize:"true" json:"validator"`

	// ID of the Avalanche L1 this validator is validating
	Subnet ids.ID `serialize:"true" json:"subnet"`

	// Where to send staked tokens when done validating
	Stake []*avax.TransferableOutput `serialize:"true" json:"stake"`

	// Where to send staking rewards when done validating
	RewardsOwner fx.Owner `serialize:"true" json:"rewardsOwner"`
}
```

#### New TypeIDs[​](#new-typeids "Direct link to heading")

```
ApricotProposalBlock         = 0
ApricotAbortBlock            = 1
ApricotCommitBlock           = 2
ApricotStandardBlock         = 3
ApricotAtomicBlock           = 4
secp256k1fx.TransferInput    = 5
secp256k1fx.MintOutput       = 6
secp256k1fx.TransferOutput   = 7
secp256k1fx.MintOperation    = 8
secp256k1fx.Credential       = 9
secp256k1fx.Input            = 10
secp256k1fx.OutputOwners     = 11
AddValidatorTx               = 12
AddSubnetValidatorTx         = 13
AddDelegatorTx               = 14
CreateChainTx                = 15
CreateSubnetTx               = 16
ImportTx                     = 17
ExportTx                     = 18
AdvanceTimeTx                = 19
RewardValidatorTx            = 20
stakeable.LockIn             = 21
stakeable.LockOut            = 22
RemoveSubnetValidatorTx      = 23
TransformSubnetTx            = 24
AddPermissionlessValidatorTx = 25
AddPermissionlessDelegatorTx = 26
EmptyProofOfPossession       = 27
BLSProofOfPossession         = 28
BanffProposalBlock           = 29
BanffAbortBlock              = 30
BanffCommitBlock             = 31
BanffStandardBlock           = 32
```

# Flow of a Single Blockchain (/docs/rpcs/other/guides/blockchain-flow)

---
title: Flow of a Single Blockchain
---

![](/images/flow1.png)

Intro[​](#intro "Direct link to heading")
-----------------------------------------

The Avalanche network consists of three built-in blockchains: the X-Chain, C-Chain, and P-Chain. The X-Chain is used to manage assets and uses the Avalanche consensus protocol. The C-Chain is used to create and interact with smart contracts and uses the Snowman consensus protocol. The P-Chain is used to coordinate validators and stake, and also uses the Snowman consensus protocol.

At the time of writing, the Avalanche network has ~1200 validators. A set of validators makes up an Avalanche L1. Avalanche L1s can validate one or more chains. It is a common misconception that one Avalanche L1 equals one chain; the Primary Network itself disproves this, as it is a single Avalanche L1 made up of the X-Chain, C-Chain, and P-Chain.

A node in the Avalanche network can be either a validator or a non-validator. A validator stakes AVAX tokens and participates in consensus to earn rewards. A non-validator does not participate in consensus and has no AVAX staked, but can be used as an API server.
Both validators and non-validators need to have their own copy of the chain and need to know the current state of the network. At the time of writing, there are ~1200 validators and ~1800 non-validators.

Each blockchain on Avalanche has several components: the virtual machine, database, consensus engine, sender, and handler. These components help the chain run smoothly. Blockchains also interact with the P2P layer and the chain router to send and receive messages.

Peer-to-Peer (P2P)[​](#peer-to-peer-p2p "Direct link to heading")
-----------------------------------------------------------------

### Outbound Messages[​](#outbound-messages "Direct link to heading")

[The `OutboundMsgBuilder` interface](https://github.com/ava-labs/avalanchego/blob/master/message/outbound_msg_builder.go) specifies methods that build messages of type `OutboundMessage`. Nodes communicate with other nodes by sending `OutboundMessage` messages. All messaging functions in `OutboundMsgBuilder` can be categorized as follows:

- **Handshake** - Nodes need to be on a certain version before they can be accepted into the network.
- **State Sync** - A new node can ask other nodes for the current state of the network. It only syncs the required state for a specific block.
- **Bootstrapping** - Nodes can ask other nodes for blocks to build their own copy of the chain. A node can fetch all blocks from the locally last accepted block to the current last accepted block in the network.
- **Consensus** - Once a node is caught up to the tip of the chain, it can participate in consensus. During consensus, a node conducts polls of several different small random samples of the validator set, and peers communicate whether they have accepted or rejected a block.
- **App** - VMs communicate application-specific messages to other nodes through app messages. A common example is mempool gossiping.

Currently, AvalancheGo implements its own message serialization to communicate.
In the future, AvalancheGo will use protocol buffers to communicate.

### Network[​](#network "Direct link to heading")

[The networking interface](https://github.com/ava-labs/avalanchego/blob/master/network/network.go) is shared across all chains. It implements functions from the `ExternalSender` interface. The two functions it implements are `Send` and `Gossip`. `Send` sends a message of type `OutboundMessage` to a specific set of nodes (specified by an array of `NodeID`s). `Gossip` sends a message of type `OutboundMessage` to a random group of nodes in an Avalanche L1 (which can include validators and non-validators). Gossiping is used to push transactions across the network. The networking protocol uses TLS to pass messages between peers.

Along with sending and gossiping, the networking library is also responsible for making and maintaining connections. Any node, whether a validator or non-validator, will attempt to connect to the Primary Network.

Router[​](#router "Direct link to heading")
-------------------------------------------

[The `ChainRouter`](https://github.com/ava-labs/avalanchego/blob/master/snow/networking/router/chain_router.go) routes all incoming messages to their respective blockchains using the `ChainID`. It does this by pushing the messages onto the respective chain handler's queue. The `ChainRouter` references all existing chains on the network, such as the X-Chain, C-Chain, P-Chain, and any other chains.

The `ChainRouter` handles timeouts as well. When messages are sent on the P2P layer, timeouts are registered on the sender and cleared on the `ChainRouter` side when a response is received. If no response is received, the `ChainRouter` triggers a timeout. Because timeouts are handled on the `ChainRouter` side, the handler stays reliable: even when peers do not respond, the `ChainRouter` still notifies the handler of the failure. The timeout manager within the `ChainRouter` is also adaptive.
If the network is experiencing long latencies, timeouts will be adjusted accordingly.

Handler[​](#handler "Direct link to heading")
---------------------------------------------

The main function of [the `Handler`](https://github.com/ava-labs/avalanchego/blob/master/snow/networking/handler/handler.go) is to pass messages from the network to the consensus engine. It receives these messages from the `ChainRouter`. It passes messages by pushing them onto a sync or async queue (depending on the message type). Messages are then popped from the queue, parsed, and routed to the correct function in the consensus engine. The message can be one of the following:

- **State sync message (sync queue)**
- **Bootstrapping message (sync queue)**
- **Consensus message (sync queue)**
- **App message (async queue)**

Sender[​](#sender "Direct link to heading")
-------------------------------------------

The main role of [the `sender`](https://github.com/ava-labs/avalanchego/blob/master/snow/networking/sender/sender.go) is to build and send outbound messages. It is a thin wrapper around the normal networking code. The main difference is that the sender registers timeouts and tells the router to expect a response message. The timer starts on the sender side. If there is no response, the sender will send a failed response to the router. If a node is repeatedly unresponsive, that node will get benched and the sender will immediately start marking its messages as failed. If a sufficient portion of the network has benched a node, that node might not receive rewards (as a validator).

Consensus Engine[​](#consensus-engine "Direct link to heading")
---------------------------------------------------------------

Consensus is defined as getting a group of distributed systems to agree on an outcome. In the case of the Avalanche network, consensus is achieved when validators are in agreement with the state of the blockchain.
The novel consensus algorithm is documented in the [white paper](https://assets.website-files.com/5d80307810123f5ffbb34d6e/6009805681b416f34dcae012_Avalanche%20Consensus%20Whitepaper.pdf). There are two main consensus algorithms: Avalanche and [Snowman](https://github.com/ava-labs/avalanchego/blob/master/snow/consensus/snowman/consensus.go). The engine is responsible for proposing new blocks to consensus, repeatedly polling the network for decisions (accept/reject), and communicating those decisions to the `Sender`.

Blockchain Creation[​](#blockchain-creation "Direct link to heading")
---------------------------------------------------------------------

[The `Manager`](https://github.com/ava-labs/avalanchego/blob/master/chains/manager.go) is what kick-starts everything in regards to blockchain creation, starting with the P-Chain. Once the P-Chain finishes bootstrapping, it will kick-start the C-Chain, X-Chain, and any other chains. The `Manager`'s job is not done yet: if a create-chain transaction is seen by a validator, the `Manager` will start a whole new process to create that chain. This can happen dynamically, long after the three chains in the Primary Network have been created and bootstrapped.

# Issuing API Calls (/docs/rpcs/other/guides/issuing-api-calls)

---
title: Issuing API Calls
description: This guide explains how to make calls to APIs exposed by Avalanche nodes.
---

Endpoints[​](#endpoints "Direct link to heading")
-------------------------------------------------

An API call is made to an endpoint, which is a URL made up of the base URI (the address and port of the node) and the path of the particular endpoint the API call targets.

### Base URL[​](#base-url "Direct link to heading")

The base of the URL is always:

`[node-ip]:[http-port]`

where

- `node-ip` is the IP address of the node the call is to.
- `http-port` is the port the node listens on for HTTP calls.
This is specified by the [command-line argument](/docs/nodes/configure/configs-flags#http-server) `http-port` (default value `9650`).

For example, if you're making RPC calls on the local node, the base URL might look like this: `127.0.0.1:9650`.

If you're making RPC calls to remote nodes, then instead of `127.0.0.1` you should use the public IP of the server where the node is running. Note that by default a node only accepts API calls on the local interface, so you will need to set the [`http-host`](/docs/nodes/configure/configs-flags#--http-host-string) config flag on the node. You will also need to make sure the firewall and/or security policy allows access to the `http-port` from the internet.

When setting up RPC access to a node, make sure you don't leave the `http-port` accessible to everyone! There are malicious actors that scan for nodes with unrestricted access to their RPC port and then spam those nodes with resource-intensive queries, which can knock the node offline. Only allow access to your node's RPC port from known IP addresses!

### Endpoint Path[​](#endpoint-path "Direct link to heading")

Each API's documentation specifies what endpoint path a user should make calls to in order to access the API's methods. So for the Admin API, the endpoint path is `/ext/admin`, for the Info API it is `/ext/info`, and so on. Note that some APIs have additional path components, most notably the chain RPC endpoints, which include the Avalanche L1 chain RPCs. We'll go over those in detail in the next section.

Combining the base URL and the endpoint path gives the complete URL for making RPC calls.
For example, to make a local RPC call on the Info API, the full URL would be:

```
http://127.0.0.1:9650/ext/info
```

Primary Network and Avalanche L1 RPC Calls[​](#primary-network-and-avalanche-l1-rpc-calls "Direct link to heading")
-------------------------------------------------------------------------------------------------------------------

Besides the APIs that are local to the node, like the Admin or Metrics APIs, nodes also expose endpoints for talking to particular chains that are either part of the Primary Network (the X, P, and C chains) or part of any Avalanche L1s the node might be syncing or validating. In general, chain endpoints are formatted as `/ext/bc/[blockchainID]`.

### Primary Network Endpoints[​](#primary-network-endpoints "Direct link to heading")

The Primary Network consists of three chains: the X, P, and C chains. As those chains are present on every node, there are also convenient aliases defined that can be used instead of the full blockchainIDs: `/ext/bc/X`, `/ext/bc/P`, and `/ext/bc/C`.

### C-Chain and Subnet-EVM Endpoints[​](#c-chain-and-subnet-evm-endpoints "Direct link to heading")

The C-Chain and many Avalanche L1s run a version of the Ethereum Virtual Machine (EVM). The EVM exposes its own endpoints, which are also accessible on the node: JSON-RPC and Websocket.

#### JSON-RPC EVM Endpoints[​](#json-rpc-evm-endpoints "Direct link to heading")

To interact with the C-Chain EVM via JSON-RPC, use the endpoint `/ext/bc/C/rpc`. To interact with Avalanche L1 instances of the EVM via JSON-RPC, use:

```
/ext/bc/[blockchainID]/rpc
```

where `blockchainID` is the ID of the blockchain running the EVM.
So, for example, the RPC URL for the DFK Network (an Avalanche L1 that runs the DeFi Kingdoms: Crystalvale game) running on a local node would be:

```
http://127.0.0.1:9650/ext/bc/q2aTwKuyzgs8pynF7UXBZCU7DejbZbZ6EUyHr3JQzYgwNPUPi/rpc
```

Or for the WAGMI Avalanche L1 on the Fuji testnet:

```
http://127.0.0.1:9650/ext/bc/2ebCneCbwthjQ1rYT41nhd7M76Hc6YmosMAQrTFhBq8qeqh6tt/rpc
```

#### Websocket EVM Endpoints[​](#websocket-evm-endpoints "Direct link to heading")

To interact with the C-Chain via the websocket endpoint, use `/ext/bc/C/ws`. To interact with other instances of the EVM via the websocket endpoint, use `/ext/bc/[blockchainID]/ws`, where `blockchainID` is the ID of the blockchain running the EVM.

For example, to interact with the C-Chain's Ethereum APIs via websocket on localhost, you can use:

```
ws://127.0.0.1:9650/ext/bc/C/ws
```

When using the [Public API](/docs/rpcs) or another host that supports HTTPS, use `https://` or `wss://` instead of `http://` or `ws://`. Also, note that the [public API](/docs/rpcs#using-the-public-api-nodes) only supports C-Chain websocket API calls for API methods that don't exist on the C-Chain's HTTP API.

Making a JSON RPC Request[​](#making-a-json-rpc-request "Direct link to heading")
---------------------------------------------------------------------------------

Most of the built-in APIs use the [JSON RPC 2.0](https://www.jsonrpc.org/specification) format to describe their requests and responses. Such APIs include the Platform API and the X-Chain API.

Suppose we want to call the `getTxStatus` method of the [X-Chain API](/docs/rpcs/x-chain). The X-Chain API documentation tells us that the endpoint for this API is `/ext/bc/X`. That means that the endpoint we send our API call to is:

`[node-ip]:[http-port]/ext/bc/X`

The X-Chain API documentation tells us that the signature of `getTxStatus` is:

[`avm.getTxStatus`](/docs/rpcs/x-chain#avmgettxstatus)`(txID:bytes) -> (status:string)`

where:

- Argument `txID` is the ID of the transaction we're getting the status of.
- Returned value `status` is the status of the transaction in question.

To call this method, then:

```
curl -X POST --data '{
    "jsonrpc":"2.0",
    "id"     :1,
    "method" :"avm.getTxStatus",
    "params" :{
        "txID":"2QouvFWUbjuySRxeX5xMbNCuAaKWfbk5FeEa2JmoF85RKLk2dD"
    }
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/X
```

- `jsonrpc` specifies the version of the JSON RPC protocol. (In practice, it is always 2.0.)
- `method` specifies the service (`avm`) and method (`getTxStatus`) that we want to invoke.
- `params` specifies the arguments to the method.
- `id` is the ID of this request. Request IDs should be unique.

That's it!

### JSON RPC Success Response[​](#json-rpc-success-response "Direct link to heading")

If the call is successful, the response will look like this:

```
{
    "jsonrpc": "2.0",
    "result": {
        "Status": "Accepted"
    },
    "id": 1
}
```

- `id` is the ID of the request that this response corresponds to.
- `result` is the returned values of `getTxStatus`.

### JSON RPC Error Response[​](#json-rpc-error-response "Direct link to heading")

If the API method invoked returns an error, the response will have a field `error` in place of `result`. Additionally, there is an extra field, `data`, which holds additional information about the error that occurred. Such a response would look like:

```
{
    "jsonrpc": "2.0",
    "error": {
        "code": -32600,
        "message": "[Some error message here]",
        "data": [Object with additional information about the error]
    },
    "id": 1
}
```

Other API Formats[​](#other-api-formats "Direct link to heading")
-----------------------------------------------------------------

Some APIs may use a standard other than JSON RPC 2.0 to format their requests and responses. Such extensions should specify how to make calls and parse responses in their documentation.
Sending and Receiving Bytes[​](#sending-and-receiving-bytes "Direct link to heading")
-------------------------------------------------------------------------------------

Unless otherwise noted, when bytes are sent in an API call/response, they are in hex representation. However, transaction IDs (TXIDs), ChainIDs, and subnetIDs are in [CB58](https://support.avalabs.org/en/articles/4587395-what-is-cb58) representation, a base-58 encoding with a checksum.

# Transaction Fees (/docs/rpcs/other/guides/txn-fees)

---
title: Transaction Fees
---

To prevent spam, transactions on Avalanche require the payment of a transaction fee. The fee is paid in AVAX. **The transaction fee is burned (destroyed forever).**

When you issue a transaction through Avalanche's API, the transaction fee is automatically deducted from one of the addresses you control. The [avalanchego wallet](https://github.com/ava-labs/avalanchego/blob/master/wallet/chain) contains example code written in Golang for building and signing transactions on all three mainnet chains.

X-Chain Fees[​](#fee-schedule)
------------------------------

The X-Chain currently operates under a fixed fee mechanism.
This table shows the X-Chain transaction fee schedule:

```
+----------+---------------------------+--------------------------------+
| Chain    | Transaction Type          | Mainnet Transaction Fee (AVAX) |
+----------+---------------------------+--------------------------------+
| X        | Send                      | 0.001                          |
+----------+---------------------------+--------------------------------+
| X        | Create Asset              | 0.01                           |
+----------+---------------------------+--------------------------------+
| X        | Mint Asset                | 0.001                          |
+----------+---------------------------+--------------------------------+
| X        | Import AVAX               | 0.001                          |
+----------+---------------------------+--------------------------------+
| X        | Export AVAX               | 0.001                          |
+----------+---------------------------+--------------------------------+
```

C-Chain Fees[​](#c-chain-fees)
------------------------------

The Avalanche C-Chain uses an algorithm to determine the "base fee" for a transaction. The base fee increases when network utilization is above the target utilization and decreases when network utilization is below the target.

### Dynamic Fee Transactions[​](#dynamic-fee-transactions)

Transaction fees for non-atomic transactions are based on Ethereum's EIP-1559 style Dynamic Fee Transactions, which consist of a gas fee cap and a gas tip cap. The fee cap specifies the maximum price the transaction is willing to pay per unit of gas. The tip cap (also called the priority fee) specifies the maximum amount above the base fee that the transaction is willing to pay per unit of gas. Therefore, the effective gas price paid by a transaction is `min(gasFeeCap, baseFee + gasTipCap)`.

Unlike in Ethereum, where the priority fee is paid to the miner that produces the block, in Avalanche both the base fee and the priority fee are burned. For legacy transactions, which only specify a single gas price, the gas price serves as both the gas fee cap and the gas tip cap.
Use the [`eth_baseFee`](/docs/rpcs/c-chain#eth_basefee) API method to estimate the base fee for the next block. If more blocks are produced between the time you construct your transaction and the time it is included in a block, the base fee could differ from the base fee estimated by the API call, so it is important to treat this value as an estimate.

Next, use the [eth\_maxPriorityFeePerGas](/docs/rpcs/c-chain#eth_maxpriorityfeepergas) API call to estimate the priority fee needed to be included in a block. This API call looks at the most recent blocks to see what tips recent transactions have paid in order to be included. Transactions are ordered by the priority fee, then the timestamp (oldest first).

Based on this information, you can specify the `gasFeeCap` and `gasTipCap` to your liking, depending on how you prioritize getting your transaction included as quickly as possible versus minimizing the price paid per unit of gas.

#### Base Fee[​](#base-fee)

The base fee can go as low as 1 nAVAX (Gwei) and has no upper bound. You can use the [`eth_baseFee`](/docs/rpcs/c-chain#eth_basefee) and [eth\_maxPriorityFeePerGas](/docs/rpcs/c-chain#eth_maxpriorityfeepergas) API methods, or [Snowtrace's C-Chain Gas Tracker](https://snowtrace.io/gastracker), to estimate the gas price to use in your transactions.

### Atomic Transaction Fees[​](#atomic-transaction-fees)

C-Chain atomic transactions (that is, imports and exports from/to other chains) charge dynamic fees based on the amount of gas used by the transaction and the base fee of the block that includes the atomic transaction.
Gas used:

```
+---------------------+-------+
| Item                |   Gas |
+---------------------+-------+
| Unsigned Tx Byte    |     1 |
+---------------------+-------+
| Signature           |  1000 |
+---------------------+-------+
| Per Atomic Tx       | 10000 |
+---------------------+-------+
```

Therefore, the gas used by an atomic transaction is `1 * len(unsignedTxBytes) + 1,000 * numSignatures + 10,000`.

The transaction fee additionally takes the base fee into account. Because atomic transactions use units denominated to 9 decimal places, the base fee must be converted to 9 decimal places before calculating the actual fee paid by the transaction. Therefore, the actual fee is: `gasUsed * baseFee (converted to 9 decimals)`.

P-Chain Fees[​](#p-chain-fees)
------------------------------

The Avalanche P-Chain utilizes a dynamic fee mechanism to optimize transaction costs and network utilization. This system adapts fees based on gas consumption to maintain a target utilization rate.

### Dimensions of Gas Consumption

Gas consumption is measured across four dimensions:

1. **Bandwidth** - The transaction size in bytes.
2. **Reads** - The number of state/database reads.
3. **Writes** - The number of state/database writes.
4. **Compute** - The compute time in microseconds.

The total gas consumed ($G$) by a transaction is:

```math
G = B + 1000R + 1000W + 4C
```

The current fee dimension weights, as well as the parameter configurations of the P-Chain, can be read at any time with the [`platform.getFeeConfig`](/docs/rpcs/p-chain#platformgetfeeconfig) API endpoint.

### Fee Adjustment Mechanism

Fees adjust dynamically based on excess gas consumption: the difference between current gas usage and the target gas rate. The exponential adjustment ensures consistent reactivity regardless of the current gas price. Fee changes scale proportionally with excess gas consumption, maintaining fairness and network stability.
The technical specification of this mechanism is documented in [ACP-103](/docs/acps/103-dynamic-fees#mechanism).

# X-Chain Migration (/docs/rpcs/other/guides/x-chain-migration)

---
title: X-Chain Migration
---

Overview[​](#overview "Direct link to heading")
-----------------------------------------------

This document summarizes the changes made to the X-Chain API to support Avalanche Cortina (v1.10.0), which migrates the X-Chain to run Snowman++. In summary, the core transaction submission and confirmation flow is unchanged; however, there are new APIs that must be called to index all transactions.

Transaction Broadcast and Confirmation[​](#transaction-broadcast-and-confirmation "Direct link to heading")
-----------------------------------------------------------------------------------------------------------

The transaction format on the X-Chain does not change in Cortina. This means that wallets that have already integrated with the X-Chain don't need to change how they sign transactions. Additionally, there is no change to the format of the [avm.issueTx](/docs/rpcs/x-chain#avmissuetx) or the [avm.getTx](/docs/rpcs/x-chain#avmgettx) API.

However, the [avm.getTxStatus](/docs/rpcs/x-chain#avmgettxstatus) endpoint is now deprecated, and its usage should be replaced with [avm.getTx](/docs/rpcs/x-chain#avmgettx) (which only returns accepted transactions for AvalancheGo >= v1.9.12). [avm.getTxStatus](/docs/rpcs/x-chain#avmgettxstatus) will still work up to and after the Cortina activation if you wish to migrate after the network upgrade has occurred.

Vertex -> Block Indexing[​](#vertex---block-indexing "Direct link to heading")
------------------------------------------------------------------------------

Before Cortina, indexing the X-Chain required polling the `/ext/index/X/vtx` endpoint to fetch new vertices.
During the Cortina activation, a “stop vertex” will be produced using a [new codec version](https://github.com/ava-labs/avalanchego/blob/c27721a8da1397b218ce9e9ec69839b8a30f9860/snow/engine/avalanche/vertex/codec.go#L17-L18) that will contain no transactions. This new vertex type will be the [same format](https://github.com/ava-labs/avalanchego/blob/c27721a8da1397b218ce9e9ec69839b8a30f9860/snow/engine/avalanche/vertex/stateless_vertex.go#L95-L102) as previous vertices.

To ensure historical data can still be accessed in Cortina, the `/ext/index/X/vtx` endpoint will remain accessible even though it will no longer be populated with chain data. The indexes for the X-Chain tx and vtx endpoints will never increase again. The index for X-Chain blocks will increase as new blocks are added.

After Cortina activation, you will need to migrate to the new `/ext/index/X/block` endpoint (which shares the same semantics as [/ext/index/P/block](/docs/rpcs/other/index-rpc#p-chain-blocks)) to continue indexing X-Chain activity. Because X-Chain ordering is deterministic in Cortina, X-Chain blocks across all heights will be consistent across all nodes and will include a timestamp.
Here is an example of iterating over these blocks in Golang:

```
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/ava-labs/avalanchego/indexer"
	"github.com/ava-labs/avalanchego/vms/proposervm/block"
	"github.com/ava-labs/avalanchego/wallet/chain/x"
	"github.com/ava-labs/avalanchego/wallet/subnet/primary"
)

func main() {
	var (
		uri       = fmt.Sprintf("%s/ext/index/X/block", primary.LocalAPIURI)
		client    = indexer.NewClient(uri)
		ctx       = context.Background()
		nextIndex uint64
	)
	for {
		log.Printf("polling for next accepted block")
		container, err := client.GetContainerByIndex(ctx, nextIndex)
		if err != nil {
			time.Sleep(time.Second)
			continue
		}

		proposerVMBlock, err := block.Parse(container.Bytes)
		if err != nil {
			log.Fatalf("failed to parse proposervm block: %s\n", err)
		}

		avmBlockBytes := proposerVMBlock.Block()
		avmBlock, err := x.Parser.ParseBlock(avmBlockBytes)
		if err != nil {
			log.Fatalf("failed to parse avm block: %s\n", err)
		}

		acceptedTxs := avmBlock.Txs()
		log.Printf("accepted block %s with %d transactions", avmBlock.ID(), len(acceptedTxs))
		for _, tx := range acceptedTxs {
			log.Printf("accepted transaction %s", tx.ID())
		}

		nextIndex++
	}
}
```

After Cortina activation, it will also be possible to fetch X-Chain blocks directly, without enabling the Index API, using the [avm.getBlock](/docs/rpcs/x-chain#avmgetblock), [avm.getBlockByHeight](/docs/rpcs/x-chain#avmgetblockbyheight), and [avm.getHeight](/docs/rpcs/x-chain#avmgetheight) endpoints. This, again, is similar to the [P-Chain semantics](/docs/rpcs/p-chain#platformgetblock).

Deprecated API Calls[​](#deprecated-api-calls "Direct link to heading")
-----------------------------------------------------------------------

This long-term deprecation effort will better align usage of AvalancheGo with its purpose: to be a minimal and efficient runtime that supports only what is required to validate the Primary Network and Avalanche L1s.
Integrators should make plans to migrate to tools and services that are better optimized for serving queries over Avalanche Network state, and avoid keeping any keys on the node itself. This deprecation ONLY applies to APIs that AvalancheGo exposes over the HTTP port. Transaction types with similar names to these APIs are NOT being deprecated.

- ipcs
  - ipcs.publishBlockchain
  - ipcs.unpublishBlockchain
  - ipcs.getPublishedBlockchains
- keystore
  - keystore.createUser
  - keystore.deleteUser
  - keystore.listUsers
  - keystore.importUser
  - keystore.exportUser
- avm/pubsub
- avm
  - avm.getAddressTxs
  - avm.getBalance
  - avm.getAllBalances
  - avm.createAsset
  - avm.createFixedCapAsset
  - avm.createVariableCapAsset
  - avm.createNFTAsset
  - avm.createAddress
  - avm.listAddresses
  - avm.exportKey
  - avm.importKey
  - avm.mint
  - avm.sendNFT
  - avm.mintNFT
  - avm.import
  - avm.export
  - avm.send
  - avm.sendMultiple
- avm/wallet
  - wallet.issueTx
  - wallet.send
  - wallet.sendMultiple
- platform
  - platform.exportKey
  - platform.importKey
  - platform.getBalance
  - platform.createAddress
  - platform.listAddresses
  - platform.getSubnets
  - platform.addValidator
  - platform.addDelegator
  - platform.addSubnetValidator
  - platform.createSubnet
  - platform.exportAVAX
  - platform.importAVAX
  - platform.createBlockchain
  - platform.getBlockchains
  - platform.getStake
  - platform.getMaxStakeAmount
  - platform.getRewardUTXOs

Cortina FAQ[​](#cortina-faq "Direct link to heading")
-----------------------------------------------------

### Do I Have to Upgrade my Node?[​](#do-i-have-to-upgrade-my-node "Direct link to heading")

If you don't upgrade your validator to `v1.10.0` before the Avalanche Mainnet activation date, your node will be marked as offline and other nodes will report your node as having lower uptime, which may jeopardize your staking rewards.

### Is There any Change in Hardware Requirements?[​](#is-there-any-change-in-hardware-requirements "Direct link to heading")

No.
### Will Updating Decrease my Validator's Uptime?[​](#will-updating-decrease-my-validators-uptime "Direct link to heading")

No. As a reminder, you can check your validator's estimated uptime using the [`info.uptime` API call](/docs/rpcs/other/info-rpc#infouptime).

### I Think Something Is Wrong. What Should I Do?[​](#i-think-something-is-wrong-what-should-i-do "Direct link to heading")

First, make sure that you've read the documentation thoroughly and checked the [FAQs](https://support.avax.network/en/). If you don't see an answer to your question, go to our [Discord](https://discord.com/invite/RwXY7P6) server and search for your question. If it has not already been asked, please post it in the appropriate channel.

# Avalanche Network Protocol (/docs/rpcs/other/standards/avalanche-network-protocol)

---
title: Avalanche Network Protocol
---

Overview[​](#overview "Direct link to heading")
-----------------------------------------------

The Avalanche network protocol defines the core communication format between Avalanche nodes. It uses the [primitive serialization](/docs/rpcs/other/standards/serialization-primitives) format for payload packing. "Containers" are mentioned extensively in this description; a container is simply a generic term for a block.

This document describes the protocol for peer-to-peer communication using Protocol Buffers (proto3). The protocol defines a set of messages exchanged between peers in a peer-to-peer network. Each message is represented by the `Message` proto message, which can encapsulate various types of messages, including network messages, state-sync messages, bootstrapping messages, consensus messages, and application messages.

Message[​](#message "Direct link to heading")
---------------------------------------------

The `Message` proto message is the main container for all peer-to-peer communication. It uses the `oneof` construct to represent different message types. The supported compression algorithms include Gzip and Zstd.
```
message Message {
  oneof message {
    bytes compressed_gzip = 1;
    bytes compressed_zstd = 2;
    // ... (other compression algorithms can be added)

    Ping ping = 11;
    Pong pong = 12;
    Version version = 13;
    PeerList peer_list = 14;
    // ... (other message types)
  }
}
```

### Compression[​](#compression "Direct link to heading")

The `compressed_gzip` and `compressed_zstd` fields are used for Gzip and Zstd compression, respectively, of the encapsulated message. These fields are set only if the message type supports compression.

Network Messages[​](#network-messages "Direct link to heading")
---------------------------------------------------------------

### Ping[​](#ping "Direct link to heading")

The `Ping` message reports a peer's perceived uptime percentage.

```
message Ping {
  uint32 uptime = 1;
  repeated SubnetUptime subnet_uptimes = 2;
}
```

- `uptime`: Uptime percentage on the Primary Network \[0, 100\].
- `subnet_uptimes`: Uptime percentages on Avalanche L1s.

### Pong[​](#pong "Direct link to heading")

The `Pong` message is sent in response to a `Ping` with the perceived uptime of the peer.

```
message Pong {
  uint32 uptime = 1; // Deprecated: uptime is now sent in Ping
  repeated SubnetUptime subnet_uptimes = 2; // Deprecated: uptime is now sent in Ping
}
```

### Version[​](#version "Direct link to heading")

The `Version` message is the first outbound message sent to a peer during the p2p handshake.

```
message Version {
  uint32 network_id = 1;
  uint64 my_time = 2;
  bytes ip_addr = 3;
  uint32 ip_port = 4;
  string my_version = 5;
  uint64 my_version_time = 6;
  bytes sig = 7;
  repeated bytes tracked_subnets = 8;
}
```

- `network_id`: Network identifier (e.g., local, testnet, Mainnet).
- `my_time`: Unix timestamp when the `Version` message was created.
- `ip_addr`: IP address of the peer.
- `ip_port`: IP port of the peer.
- `my_version`: Avalanche client version.
- `my_version_time`: Timestamp of the IP.
- `sig`: Signature of the peer IP-port pair at the provided timestamp.
- `tracked_subnets`: Avalanche L1s the peer is tracking.

### PeerList[​](#peerlist "Direct link to heading")

The `PeerList` message contains network-level metadata for a set of validators.

```
message PeerList {
  repeated ClaimedIpPort claimed_ip_ports = 1;
}
```

- `claimed_ip_ports`: List of claimed IP and port pairs.

### PeerListAck[​](#peerlistack "Direct link to heading")

The `PeerListAck` message is sent in response to `PeerList` to acknowledge the subset of peers that the peer will attempt to connect to.

```
message PeerListAck {
  reserved 1; // deprecated; used to be tx_ids
  repeated PeerAck peer_acks = 2;
}
```

- `peer_acks`: List of acknowledged peers.

State-Sync Messages[​](#state-sync-messages "Direct link to heading")
---------------------------------------------------------------------

### GetStateSummaryFrontier[​](#getstatesummaryfrontier "Direct link to heading")

The `GetStateSummaryFrontier` message requests a peer's most recently accepted state summary.

```
message GetStateSummaryFrontier {
  bytes chain_id = 1;
  uint32 request_id = 2;
  uint64 deadline = 3;
}
```

- `chain_id`: Chain being requested from.
- `request_id`: Unique identifier for this request.
- `deadline`: Timeout (ns) for this request.

### StateSummaryFrontier[​](#statesummaryfrontier "Direct link to heading")

The `StateSummaryFrontier` message is sent in response to a `GetStateSummaryFrontier` request.

```
message StateSummaryFrontier {
  bytes chain_id = 1;
  uint32 request_id = 2;
  bytes summary = 3;
}
```

- `chain_id`: Chain being responded from.
- `request_id`: Request ID of the original `GetStateSummaryFrontier` request.
- `summary`: The requested state summary.

### GetAcceptedStateSummary[​](#getacceptedstatesummary "Direct link to heading")

The `GetAcceptedStateSummary` message requests a set of state summaries at specified block heights.
```
message GetAcceptedStateSummary {
  bytes chain_id = 1;
  uint32 request_id = 2;
  uint64 deadline = 3;
  repeated uint64 heights = 4;
}
```

- `chain_id`: Chain being requested from.
- `request_id`: Unique identifier for this request.
- `deadline`: Timeout (ns) for this request.
- `heights`: Heights being requested.

### AcceptedStateSummary[​](#acceptedstatesummary "Direct link to heading")

The `AcceptedStateSummary` message is sent in response to `GetAcceptedStateSummary`.

```
message AcceptedStateSummary {
  bytes chain_id = 1;
  uint32 request_id = 2;
  repeated bytes summary_ids = 3;
}
```

- `chain_id`: Chain being responded from.
- `request_id`: Request ID of the original `GetAcceptedStateSummary` request.
- `summary_ids`: State summary IDs.

Bootstrapping Messages[​](#bootstrapping-messages "Direct link to heading")
---------------------------------------------------------------------------

### GetAcceptedFrontier[​](#getacceptedfrontier "Direct link to heading")

The `GetAcceptedFrontier` message requests the accepted frontier from a peer.

```
message GetAcceptedFrontier {
  bytes chain_id = 1;
  uint32 request_id = 2;
  uint64 deadline = 3;
  EngineType engine_type = 4;
}
```

- `chain_id`: Chain being requested from.
- `request_id`: Unique identifier for this request.
- `deadline`: Timeout (ns) for this request.
- `engine_type`: Consensus type the remote peer should use to handle this message.

### AcceptedFrontier[​](#acceptedfrontier "Direct link to heading")

The `AcceptedFrontier` message contains the remote peer's last accepted frontier.

```
message AcceptedFrontier {
  reserved 4; // Until Cortina upgrade is activated
  bytes chain_id = 1;
  uint32 request_id = 2;
  bytes container_id = 3;
}
```

- `chain_id`: Chain being responded from.
- `request_id`: Request ID of the original `GetAcceptedFrontier` request.
- `container_id`: The ID of the last accepted frontier.
### GetAccepted[​](#getaccepted "Direct link to heading")

The `GetAccepted` message sends a request with the sender's accepted frontier to a remote peer.

```
message GetAccepted {
  bytes chain_id = 1;
  uint32 request_id = 2;
  uint64 deadline = 3;
  repeated bytes container_ids = 4;
  EngineType engine_type = 5;
}
```

- `chain_id`: Chain being requested from.
- `request_id`: Unique identifier for this message.
- `deadline`: Timeout (ns) for this request.
- `container_ids`: The sender's accepted frontier.
- `engine_type`: Consensus type to handle this message.

### Accepted[​](#accepted "Direct link to heading")

The `Accepted` message is sent in response to `GetAccepted`.

```
message Accepted {
  reserved 4; // Until Cortina upgrade is activated
  bytes chain_id = 1;
  uint32 request_id = 2;
  repeated bytes container_ids = 3;
}
```

- `chain_id`: Chain being responded from.
- `request_id`: Request ID of the original `GetAccepted` request.
- `container_ids`: Subset of container IDs from the `GetAccepted` request that the sender has accepted.

### GetAncestors[​](#getancestors "Direct link to heading")

The `GetAncestors` message requests the ancestors for a given container.

```
message GetAncestors {
  bytes chain_id = 1;
  uint32 request_id = 2;
  uint64 deadline = 3;
  bytes container_id = 4;
  EngineType engine_type = 5;
}
```

- `chain_id`: Chain being requested from.
- `request_id`: Unique identifier for this request.
- `deadline`: Timeout (ns) for this request.
- `container_id`: Container for which ancestors are being requested.
- `engine_type`: Consensus type to handle this message.

### Ancestors[​](#ancestors "Direct link to heading")

The `Ancestors` message is sent in response to `GetAncestors`.

```
message Ancestors {
  reserved 4; // Until Cortina upgrade is activated
  bytes chain_id = 1;
  uint32 request_id = 2;
  repeated bytes containers = 3;
}
```

- `chain_id`: Chain being responded from.
- `request_id`: Request ID of the original `GetAncestors` request.
- `containers`: Ancestry for the requested container.

Consensus Messages[​](#consensus-messages "Direct link to heading")
-------------------------------------------------------------------

### Get[​](#get "Direct link to heading")

The `Get` message requests a container from a remote peer.

```
message Get {
  bytes chain_id = 1;
  uint32 request_id = 2;
  uint64 deadline = 3;
  bytes container_id = 4;
  EngineType engine_type = 5;
}
```

- `chain_id`: Chain being requested from.
- `request_id`: Unique identifier for this request.
- `deadline`: Timeout (ns) for this request.
- `container_id`: Container being requested.
- `engine_type`: Consensus type to handle this message.

### Put[​](#put "Direct link to heading")

The `Put` message is sent in response to `Get` with the requested block.

```
message Put {
  bytes chain_id = 1;
  uint32 request_id = 2;
  bytes container = 3;
  EngineType engine_type = 4;
}
```

- `chain_id`: Chain being responded from.
- `request_id`: Request ID of the original `Get` request.
- `container`: Requested container.
- `engine_type`: Consensus type to handle this message.

### PushQuery[​](#pushquery "Direct link to heading")

The `PushQuery` message requests the preferences of a remote peer given a container.

```
message PushQuery {
  bytes chain_id = 1;
  uint32 request_id = 2;
  uint64 deadline = 3;
  bytes container = 4;
  EngineType engine_type = 5;
  uint64 requested_height = 6;
}
```

- `chain_id`: Chain being requested from.
- `request_id`: Unique identifier for this request.
- `deadline`: Timeout (ns) for this request.
- `container`: Container being gossiped.
- `engine_type`: Consensus type to handle this message.
- `requested_height`: Requesting peer's last accepted height.

### PullQuery[​](#pullquery "Direct link to heading")

The `PullQuery` message requests the preferences of a remote peer given a container ID.
```
message PullQuery {
  bytes chain_id = 1;
  uint32 request_id = 2;
  uint64 deadline = 3;
  bytes container_id = 4;
  EngineType engine_type = 5;
  uint64 requested_height = 6;
}
```

- `chain_id`: Chain being requested from.
- `request_id`: Unique identifier for this request.
- `deadline`: Timeout (ns) for this request.
- `container_id`: Container ID being gossiped.
- `engine_type`: Consensus type to handle this message.
- `requested_height`: Requesting peer's last accepted height.

### Chits[​](#chits "Direct link to heading")

The `Chits` message contains the preferences of a peer, in response to a `PushQuery` or `PullQuery` message.

```
message Chits {
  bytes chain_id = 1;
  uint32 request_id = 2;
  bytes preferred_id = 3;
  bytes accepted_id = 4;
  bytes preferred_id_at_height = 5;
}
```

- `chain_id`: Chain being responded from.
- `request_id`: Request ID of the original `PushQuery`/`PullQuery` request.
- `preferred_id`: Currently preferred block.
- `accepted_id`: Last accepted block.
- `preferred_id_at_height`: Currently preferred block at the requested height.

Application Messages[​](#application-messages "Direct link to heading")
-----------------------------------------------------------------------

### AppRequest[​](#apprequest "Direct link to heading")

The `AppRequest` message is a VM-defined request.

```
message AppRequest {
  bytes chain_id = 1;
  uint32 request_id = 2;
  uint64 deadline = 3;
  bytes app_bytes = 4;
}
```

- `chain_id`: Chain being requested from.
- `request_id`: Unique identifier for this request.
- `deadline`: Timeout (ns) for this request.
- `app_bytes`: Request body.

### AppResponse[​](#appresponse "Direct link to heading")

The `AppResponse` message is a VM-defined response sent in response to `AppRequest`.

```
message AppResponse {
  bytes chain_id = 1;
  uint32 request_id = 2;
  bytes app_bytes = 3;
}
```

- `chain_id`: Chain being responded from.
- `request_id`: Request ID of the original `AppRequest`.
- `app_bytes`: Response body.
### AppGossip[​](#appgossip "Direct link to heading")

The `AppGossip` message is a VM-defined gossip message.

```
message AppGossip {
  bytes chain_id = 1;
  bytes app_bytes = 2;
}
```

- `chain_id`: Chain the message is for.
- `app_bytes`: Message body.

# Cryptographic Primitives (/docs/rpcs/other/standards/cryptographic-primitives)

---
title: Cryptographic Primitives
---

Avalanche uses a variety of cryptographic primitives for its different functions. This document summarizes the cryptography used at the network and blockchain layers.

## Cryptography in the Network Layer

Avalanche uses Transport Layer Security (TLS) to protect node-to-node communications from eavesdroppers. TLS combines the practicality of public-key cryptography with the efficiency of symmetric-key cryptography, which has made it the standard for internet communication. Whereas most classical consensus protocols employ public-key cryptography to prove receipt of messages to third parties, the novel Snow\* consensus family does not require such proofs. This enables Avalanche to employ TLS in authenticating stakers and eliminates the need for costly public-key cryptography for signing network messages.

### TLS Certificates

Avalanche does not rely on any centralized third parties; in particular, it does not use certificates issued by third-party authenticators. All certificates used within the network layer to identify endpoints are self-signed, thus creating a self-sovereign identity layer. No third parties are ever involved.

### TLS Addresses

To avoid posting the full TLS certificate to the P-Chain, the certificate is first hashed. For consistency, Avalanche employs the same hashing mechanism for TLS certificates as is used in Bitcoin: the DER representation of the certificate is hashed with sha256, and the result is then hashed with ripemd160 to yield a 20-byte identifier for stakers.
This 20-byte identifier is represented by "NodeID-" followed by the data's [CB58](https://support.avalabs.org/en/articles/4587395-what-is-cb58) encoded string.

## Cryptography in the Avalanche Virtual Machine

The Avalanche Virtual Machine uses elliptic curve cryptography, specifically `secp256k1`, for its signatures on the blockchain. A private key is a 32-byte value represented by "PrivateKey-" followed by the data's [CB58](https://support.avalabs.org/en/articles/4587395-what-is-cb58) encoded string.

### Secp256k1 Addresses

Avalanche is not prescriptive about addressing schemes, choosing instead to leave addressing up to each blockchain. The addressing scheme of the X-Chain and the P-Chain relies on secp256k1. Avalanche follows a similar approach as Bitcoin and hashes the ECDSA public key: the 33-byte compressed representation of the public key is hashed with sha256 **once**, and the result is then hashed with ripemd160 to yield a 20-byte address.

Avalanche uses the convention `chainID-address` to specify which chain an address exists on; `chainID` may be replaced with an alias of the chain. When transmitting information through external applications, the CB58 convention is required.

### Bech32

Addresses on the X-Chain and P-Chain use the [Bech32](http://support.avalabs.org/en/articles/4587392-what-is-bech32) standard outlined in [BIP 0173](https://en.bitcoin.it/wiki/BIP_0173). A Bech32 address has four parts. In order of appearance:

- A human-readable part (HRP). On Mainnet this is `avax`.
- The number `1`, which separates the HRP from the address and error correction code.
- A base-32 encoded string representing the 20-byte address.
- A 6-character base-32 encoded error correction code.

Additionally, an Avalanche address is prefixed with the alias of the chain it exists on, followed by a dash. For example, X-Chain addresses are prefixed with `X-`.
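The four-part structure above can be seen by splitting an address mechanically. A minimal Go sketch (the `splitAvalancheAddress` helper is illustrative; it slices at the final `1` separator and does not verify the BIP 0173 checksum):

```go
package main

import (
	"fmt"
	"strings"
)

// splitAvalancheAddress breaks a chain-prefixed Bech32 address into its
// chain alias, human-readable part, data, and checksum. Illustration
// only: no checksum verification is performed.
func splitAvalancheAddress(addr string) (chain, hrp, data, checksum string) {
	// The chain alias is everything before the first dash.
	chain, bech, _ := strings.Cut(addr, "-")
	// Per BIP 0173, the separator is the LAST '1' in the string (the
	// Bech32 data charset itself excludes '1').
	sep := strings.LastIndex(bech, "1")
	hrp = bech[:sep]
	rest := bech[sep+1:]
	data = rest[:len(rest)-6]     // base-32 encoded 20-byte address
	checksum = rest[len(rest)-6:] // 6-character error correction code
	return
}

func main() {
	chain, hrp, data, checksum := splitAvalancheAddress("X-avax19rknw8l0grnfunjrzwxlxync6zrlu33y2jxhrg")
	fmt.Println(chain, hrp, data, checksum)
	// → X avax 9rknw8l0grnfunjrzwxlxync6zrlu33y 2jxhrg
}
```

For this Mainnet address, the split yields the chain alias `X`, the HRP `avax`, and the checksum `2jxhrg`.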
The following regular expression matches addresses on the X-Chain, P-Chain and C-Chain for Mainnet, Fuji and localhost. Note that all valid Avalanche addresses will match this regular expression, but some strings that are not valid Avalanche addresses may also match it.

```
^([XPC]|[a-km-zA-HJ-NP-Z1-9]{36,72})-[a-zA-Z]{1,83}1[qpzry9x8gf2tvdw0s3jn54khce6mua7l]{38}$
```

Read more about Avalanche's [addressing scheme](https://support.avalabs.org/en/articles/4596397-what-is-an-address).

For example, the Bech32 address `X-avax19rknw8l0grnfunjrzwxlxync6zrlu33y2jxhrg` is composed like so:

1. HRP: `avax`
2. Separator: `1`
3. Address: `9rknw8l0grnfunjrzwxlxync6zrlu33y`
4. Checksum: `2jxhrg`

Depending on the `networkID`, the encoded addresses will have a distinctive HRP per network:

- 0 - X-`custom`19rknw8l0grnfunjrzwxlxync6zrlu33yeg5dya
- 1 - X-`avax`19rknw8l0grnfunjrzwxlxync6zrlu33y2jxhrg
- 2 - X-`cascade`19rknw8l0grnfunjrzwxlxync6zrlu33ypmtvnh
- 3 - X-`denali`19rknw8l0grnfunjrzwxlxync6zrlu33yhc357h
- 4 - X-`everest`19rknw8l0grnfunjrzwxlxync6zrlu33yn44wty
- 5 - X-`fuji`19rknw8l0grnfunjrzwxlxync6zrlu33yxqzg0h
- 1337 - X-`custom`19rknw8l0grnfunjrzwxlxync6zrlu33yeg5dya
- 12345 - X-`local`19rknw8l0grnfunjrzwxlxync6zrlu33ynpm3qq

Here's the mapping of `networkID` to Bech32 HRP:

```
0: "custom",
1: "avax",
2: "cascade",
3: "denali",
4: "everest",
5: "fuji",
1337: "custom",
12345: "local"
```

### Secp256k1 Recoverable Signatures

Recoverable signatures are stored as the 65-byte **`[R || S || V]`**, where **`V`** is 0 or 1 to allow quick public key recovery. **`S`** must be in the lower half of the possible range to prevent signature malleability. Before signing, the message is hashed using sha256.

### Secp256k1 Example

Suppose Rick and Morty are setting up a secure communication channel. Morty creates a new public-private key pair.
Private Key: `0x98cb077f972feb0481f1d894f272c6a1e3c15e272a1658ff716444f465200070`

Public Key (33-byte compressed): `0x02b33c917f2f6103448d7feb42614037d05928433cb25e78f01a825aa829bb3c27`

Because of Rick's infinite wisdom, he doesn't trust himself with carrying around Morty's public key, so he only asks for Morty's address. Morty follows the instructions, SHA256s his public key, and then ripemd160s that result to produce an address.

SHA256(Public Key): `0x28d7670d71667e93ff586f664937f52828e6290068fa2a37782045bffa7b0d2f`

Address: `0xe8777f38c88ca153a6fdc25942176d2bf5491b89`

Morty is quite confused, because a public key should be safe to be public knowledge. Rick belches and explains that hashing the public key protects the private key owner from potential future security flaws in elliptic curve cryptography: in the event that cryptography is broken and a private key can be derived from a public key, users can transfer their funds to an address that has never signed a transaction before, preventing their funds from being compromised by an attacker. This enables coin owners to be protected while the cryptography is upgraded across the clients.

Later, once Morty has learned more about Rick's backstory, Morty attempts to send Rick a message. Morty knows that Rick will only read the message if he can verify it was from him, so he signs the message with his private key.

Message: `0x68656c702049276d207472617070656420696e206120636f6d7075746572`

Message Hash: `0x912800c29d554fb9cdce579c0abba991165bbbc8bfec9622481d01e0b3e4b7da`

Message Signature: `0xb52aa0535c5c48268d843bd65395623d2462016325a86f09420c81f142578e121d11bd368b88ca6de4179a007e6abe0e8d0be1a6a4485def8f9e02957d3d72da01`

Morty was never seen again.

### Signed Messages

A standard for interoperable generic signed messages based on the Bitcoin Script format and Ethereum format.
```
sign(sha256(length(prefix) + prefix + length(message) + message))
```

The prefix is simply the string `\x1AAvalanche Signed Message:\n`, where `0x1A` is the length of the prefix text, and `length(message)` is an [integer](/docs/rpcs/other/standards/serialization-primitives#integer) of the message size.

### Gantt Pre-Image Specification

```
+---------------+-----------+------------------------------+
| prefix        : [26]byte  |                     26 bytes |
+---------------+-----------+------------------------------+
| messageLength : int       |                      4 bytes |
+---------------+-----------+------------------------------+
| message       : []byte    |          size(message) bytes |
+---------------+-----------+------------------------------+
                            |      26 + 4 + size(message)  |
                            +------------------------------+
```

### Example

As an example, we will sign the message "Through consensus to the stars".

```
// prefix size: 26 bytes
0x1a
// prefix: Avalanche Signed Message:\n
0x41 0x76 0x61 0x6c 0x61 0x6e 0x63 0x68 0x65 0x20 0x53 0x69 0x67 0x6e 0x65 0x64 0x20 0x4d 0x65 0x73 0x73 0x61 0x67 0x65 0x3a 0x0a
// msg size: 30 bytes
0x00 0x00 0x00 0x1e
// msg: Through consensus to the stars
0x54 0x68 0x72 0x6f 0x75 0x67 0x68 0x20 0x63 0x6f 0x6e 0x73 0x65 0x6e 0x73 0x75 0x73 0x20 0x74 0x6f 0x20 0x74 0x68 0x65 0x20 0x73 0x74 0x61 0x72 0x73
```

After hashing the pre-image with `sha256` and signing it, we return the value [cb58](https://support.avalabs.org/en/articles/4587395-what-is-cb58) encoded: `4Eb2zAHF4JjZFJmp4usSokTGqq9mEGwVMY2WZzzCmu657SNFZhndsiS8TvL32n3bexd8emUwiXs8XqKjhqzvoRFvghnvSN`.

Here's an example using [Core web](https://core.app/tools/signing-tools/sign/). A full guide on how to sign messages with Core web can be found [here](https://support.avax.network/en/articles/7206948-core-web-how-do-i-use-the-signing-tools).

![Sign message](/images/cryptography1.png)

## Cryptography in the Ethereum Virtual Machine

Avalanche nodes support the full Ethereum Virtual Machine (EVM) and precisely duplicate all of the cryptographic constructs used in Ethereum.
This includes the Keccak hash function and the other mechanisms used for cryptographic security in the EVM.

## Cryptography in Other Virtual Machines

Since Avalanche is an extensible platform, we expect that people will add additional cryptographic primitives to the system over time.

# Serialization Primitives (/docs/rpcs/other/standards/serialization-primitives)

---
title: Serialization Primitives
---

Avalanche uses a simple, uniform, and elegant representation for all internal data. This document describes how primitive types are encoded on the Avalanche platform. Transactions are encoded in terms of these basic primitive types.

Byte[​](#byte "Direct link to heading")
---------------------------------------

Bytes are packed as-is into the message payload.

Example:

```
Packing:
0x01
Results in:
[0x01]
```

Short[​](#short "Direct link to heading")
-----------------------------------------

Shorts are 16-bit values packed in BigEndian format into the message payload.

Example:

```
Packing:
0x0102
Results in:
[0x01, 0x02]
```

Integer[​](#integer "Direct link to heading")
---------------------------------------------

Integers are 32-bit values packed in BigEndian format into the message payload.

Example:

```
Packing:
0x01020304
Results in:
[0x01, 0x02, 0x03, 0x04]
```

Long Integers[​](#long-integers "Direct link to heading")
---------------------------------------------------------

Long integers are 64-bit values packed in BigEndian format into the message payload.

Example:

```
Packing:
0x0102030405060708
Results in:
[0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08]
```

IP Addresses[​](#ip-addresses "Direct link to heading")
-------------------------------------------------------

IP addresses are represented as a 16-byte IPv6-format address, with the port appended to the message payload as a Short. IPv4 addresses are mapped into this form with 10 bytes of leading 0x00 followed by 0xff 0xff and the 4-byte IPv4 address, as shown in the example below.
IPv4 example:

```
Packing:
"127.0.0.1:9650"
Results in:
[
  0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
  0x00, 0x00, 0xff, 0xff, 0x7f, 0x00, 0x00, 0x01,
  0x25, 0xb2,
]
```

IPv6 example:

```
Packing:
"[2001:0db8:ac10:fe01::]:12345"
Results in:
[
  0x20, 0x01, 0x0d, 0xb8, 0xac, 0x10, 0xfe, 0x01,
  0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
  0x30, 0x39,
]
```

Fixed-Length Array[​](#fixed-length-array "Direct link to heading")
-------------------------------------------------------------------

Fixed-length arrays, whose length is known ahead of time and by context, are packed in order.

Byte array example:

```
Packing:
[0x01, 0x02]
Results in:
[0x01, 0x02]
```

Integer array example:

```
Packing:
[0x03040506]
Results in:
[0x03, 0x04, 0x05, 0x06]
```

Variable Length Array[​](#variable-length-array "Direct link to heading")
-------------------------------------------------------------------------

The length of the array is prefixed in Integer format, followed by the packing of the array contents in Fixed-Length Array format.

Byte array example:

```
Packing:
[0x01, 0x02]
Results in:
[0x00, 0x00, 0x00, 0x02, 0x01, 0x02]
```

Int array example:

```
Packing:
[0x03040506]
Results in:
[0x00, 0x00, 0x00, 0x01, 0x03, 0x04, 0x05, 0x06]
```

String[​](#string "Direct link to heading")
-------------------------------------------

A String is packed similarly to a variable-length byte array; however, the length prefix is a Short rather than an Integer. Strings are encoded in UTF-8 format.

Example:

```
Packing:
"Avax"
Results in:
[0x00, 0x04, 0x41, 0x76, 0x61, 0x78]
```

# platform.getBalance (/docs/rpcs/p-chain/balances-&-utxos/platform_getBalance)

---
title: platform.getBalance
full: true
_openapi:
  method: POST
  route: /ext/bc/P#platform.getBalance
  toc: []
  structuredData:
    headings: []
    contents:
      - content: GetBalance gets the balance of an address
---

{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again.
*/}

GetBalance gets the balance of an address

# platform.getUTXOs (/docs/rpcs/p-chain/balances-&-utxos/platform_getUTXOs)

---
title: platform.getUTXOs
full: true
_openapi:
  method: POST
  route: /ext/bc/P#platform.getUTXOs
  toc: []
  structuredData:
    headings: []
    contents:
      - content: GetUTXOs returns the UTXOs controlled by the given addresses
---

{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}

GetUTXOs returns the UTXOs controlled by the given addresses

# platform.getBlockchainStatus (/docs/rpcs/p-chain/blockchains/platform_getBlockchainStatus)

---
title: platform.getBlockchainStatus
full: true
_openapi:
  method: POST
  route: /ext/bc/P#platform.getBlockchainStatus
  toc: []
  structuredData:
    headings: []
    contents:
      - content: >-
          GetBlockchainStatus gets the status of a blockchain with the ID
          [args.BlockchainID].
---

{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */}

GetBlockchainStatus gets the status of a blockchain with the ID [args.BlockchainID].

# platform.getBlockchains (/docs/rpcs/p-chain/blockchains/platform_getBlockchains)

---
title: platform.getBlockchains
full: true
_openapi:
  method: POST
  route: /ext/bc/P#platform.getBlockchains
  toc: []
  structuredData:
    headings: []
    contents:
      - content: GetBlockchains returns all of the blockchains that exist
---

{/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again.
*/} GetBlockchains returns all of the blockchains that exist # platform.validatedBy (/docs/rpcs/p-chain/blockchains/platform_validatedBy) --- title: platform.validatedBy full: true _openapi: method: POST route: /ext/bc/P#platform.validatedBy toc: [] structuredData: headings: [] contents: - content: >- ValidatedBy returns the ID of the Subnet that validates [args.BlockchainID] --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} ValidatedBy returns the ID of the Subnet that validates [args.BlockchainID] # platform.validates (/docs/rpcs/p-chain/blockchains/platform_validates) --- title: platform.validates full: true _openapi: method: POST route: /ext/bc/P#platform.validates toc: [] structuredData: headings: [] contents: - content: >- Validates returns the IDs of the blockchains validated by [args.SubnetID] --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Validates returns the IDs of the blockchains validated by [args.SubnetID] # platform.getBlock (/docs/rpcs/p-chain/blocks/platform_getBlock) --- title: platform.getBlock full: true _openapi: method: POST route: /ext/bc/P#platform.getBlock toc: [] structuredData: headings: [] contents: - content: Calls the platform.getBlock method --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Calls the platform.getBlock method # platform.getBlockByHeight (/docs/rpcs/p-chain/blocks/platform_getBlockByHeight) --- title: platform.getBlockByHeight full: true _openapi: method: POST route: /ext/bc/P#platform.getBlockByHeight toc: [] structuredData: headings: [] contents: - content: GetBlockByHeight returns the block at the given height. --- {/* This file was generated by Fumadocs. Do not edit this file directly. 
Any changes should be made by running the generation command again. */} GetBlockByHeight returns the block at the given height. # platform.getCurrentSupply (/docs/rpcs/p-chain/chain-info/platform_getCurrentSupply) --- title: platform.getCurrentSupply full: true _openapi: method: POST route: /ext/bc/P#platform.getCurrentSupply toc: [] structuredData: headings: [] contents: - content: >- GetCurrentSupply returns an upper bound on the supply of AVAX in the system --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} GetCurrentSupply returns an upper bound on the supply of AVAX in the system # platform.getHeight (/docs/rpcs/p-chain/chain-info/platform_getHeight) --- title: platform.getHeight full: true _openapi: method: POST route: /ext/bc/P#platform.getHeight toc: [] structuredData: headings: [] contents: - content: GetHeight returns the height of the last accepted block --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} GetHeight returns the height of the last accepted block # platform.getProposedHeight (/docs/rpcs/p-chain/chain-info/platform_getProposedHeight) --- title: platform.getProposedHeight full: true _openapi: method: POST route: /ext/bc/P#platform.getProposedHeight toc: [] structuredData: headings: [] contents: - content: GetProposedHeight returns the current ProposerVM height --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} GetProposedHeight returns the current ProposerVM height # platform.getTimestamp (/docs/rpcs/p-chain/chain-info/platform_getTimestamp) --- title: platform.getTimestamp full: true _openapi: method: POST route: /ext/bc/P#platform.getTimestamp toc: [] structuredData: headings: [] contents: - content: GetTimestamp returns the current timestamp on chain. 
--- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} GetTimestamp returns the current timestamp on chain. # platform.getFeeConfig (/docs/rpcs/p-chain/fees/platform_getFeeConfig) --- title: platform.getFeeConfig full: true _openapi: method: POST route: /ext/bc/P#platform.getFeeConfig toc: [] structuredData: headings: [] contents: - content: GetFeeConfig returns the dynamic fee config of the chain. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} GetFeeConfig returns the dynamic fee config of the chain. # platform.getFeeState (/docs/rpcs/p-chain/fees/platform_getFeeState) --- title: platform.getFeeState full: true _openapi: method: POST route: /ext/bc/P#platform.getFeeState toc: [] structuredData: headings: [] contents: - content: GetFeeState returns the current fee state of the chain. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} GetFeeState returns the current fee state of the chain. # platform.getValidatorFeeConfig (/docs/rpcs/p-chain/fees/platform_getValidatorFeeConfig) --- title: platform.getValidatorFeeConfig full: true _openapi: method: POST route: /ext/bc/P#platform.getValidatorFeeConfig toc: [] structuredData: headings: [] contents: - content: GetValidatorFeeConfig returns the validator fee config of the chain. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} GetValidatorFeeConfig returns the validator fee config of the chain. 
# platform.getValidatorFeeState (/docs/rpcs/p-chain/fees/platform_getValidatorFeeState) --- title: platform.getValidatorFeeState full: true _openapi: method: POST route: /ext/bc/P#platform.getValidatorFeeState toc: [] structuredData: headings: [] contents: - content: >- GetValidatorFeeState returns the current validator fee state of the chain. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} GetValidatorFeeState returns the current validator fee state of the chain. # platform.getRewardUTXOs (/docs/rpcs/p-chain/rewards/platform_getRewardUTXOs) --- title: platform.getRewardUTXOs full: true _openapi: method: POST route: /ext/bc/P#platform.getRewardUTXOs toc: [] structuredData: headings: [] contents: - content: >- GetRewardUTXOs returns the UTXOs that were rewarded after the provided transaction's staking period ended. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} GetRewardUTXOs returns the UTXOs that were rewarded after the provided transaction's staking period ended. # platform.getMinStake (/docs/rpcs/p-chain/staking/platform_getMinStake) --- title: platform.getMinStake full: true _openapi: method: POST route: /ext/bc/P#platform.getMinStake toc: [] structuredData: headings: [] contents: - content: GetMinStake returns the minimum staking amount in nAVAX. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} GetMinStake returns the minimum staking amount in nAVAX. 
# platform.getStake (/docs/rpcs/p-chain/staking/platform_getStake) --- title: platform.getStake full: true _openapi: method: POST route: /ext/bc/P#platform.getStake toc: [] structuredData: headings: [] contents: - content: >- GetStake returns the amount of nAVAX that [args.Addresses] have cumulatively staked on the Primary Network. This method assumes that each stake output has only one owner. This method assumes only AVAX can be staked. This method only concerns itself with the Primary Network, not Subnets. TODO: Improve the performance of this method by maintaining this data in a data structure rather than re-calculating it by iterating over stakers --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} GetStake returns the amount of nAVAX that [args.Addresses] have cumulatively staked on the Primary Network. This method assumes that each stake output has only one owner. This method assumes only AVAX can be staked. This method only concerns itself with the Primary Network, not Subnets. TODO: Improve the performance of this method by maintaining this data in a data structure rather than re-calculating it by iterating over stakers # platform.getTotalStake (/docs/rpcs/p-chain/staking/platform_getTotalStake) --- title: platform.getTotalStake full: true _openapi: method: POST route: /ext/bc/P#platform.getTotalStake toc: [] structuredData: headings: [] contents: - content: GetTotalStake returns the total amount staked on the Primary Network --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. 
*/} GetTotalStake returns the total amount staked on the Primary Network # platform.getStakingAssetID (/docs/rpcs/p-chain/subnets/platform_getStakingAssetID) --- title: platform.getStakingAssetID full: true _openapi: method: POST route: /ext/bc/P#platform.getStakingAssetID toc: [] structuredData: headings: [] contents: - content: >- GetStakingAssetID returns the assetID of the token used to stake on the provided subnet --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} GetStakingAssetID returns the assetID of the token used to stake on the provided subnet # platform.getSubnet (/docs/rpcs/p-chain/subnets/platform_getSubnet) --- title: platform.getSubnet full: true _openapi: method: POST route: /ext/bc/P#platform.getSubnet toc: [] structuredData: headings: [] contents: - content: Calls the platform.getSubnet method --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Calls the platform.getSubnet method # platform.getSubnets (/docs/rpcs/p-chain/subnets/platform_getSubnets) --- title: platform.getSubnets full: true _openapi: method: POST route: /ext/bc/P#platform.getSubnets toc: [] structuredData: headings: [] contents: - content: >- GetSubnets returns the subnets whose IDs are in [args.IDs]. The response will include the Primary Network. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} GetSubnets returns the subnets whose IDs are in [args.IDs]. The response will include the Primary Network. # platform.getTx (/docs/rpcs/p-chain/transactions/platform_getTx) --- title: platform.getTx full: true _openapi: method: POST route: /ext/bc/P#platform.getTx toc: [] structuredData: headings: [] contents: - content: Calls the platform.getTx method --- {/* This file was generated by Fumadocs. 
Do not edit this file directly. Any changes should be made by running the generation command again. */} Calls the platform.getTx method # platform.getTxStatus (/docs/rpcs/p-chain/transactions/platform_getTxStatus) --- title: platform.getTxStatus full: true _openapi: method: POST route: /ext/bc/P#platform.getTxStatus toc: [] structuredData: headings: [] contents: - content: GetTxStatus gets a tx's status --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} GetTxStatus gets a tx's status # platform.issueTx (/docs/rpcs/p-chain/transactions/platform_issueTx) --- title: platform.issueTx full: true _openapi: method: POST route: /ext/bc/P#platform.issueTx toc: [] structuredData: headings: [] contents: - content: Calls the platform.issueTx method --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Calls the platform.issueTx method # platform.getCurrentValidators (/docs/rpcs/p-chain/validators/platform_getCurrentValidators) --- title: platform.getCurrentValidators full: true _openapi: method: POST route: /ext/bc/P#platform.getCurrentValidators toc: [] structuredData: headings: [] contents: - content: >- GetCurrentValidators returns the current validators. If a single nodeID is provided, full delegator information is also returned. Otherwise, only the delegators' count and total weight are returned. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} GetCurrentValidators returns the current validators. If a single nodeID is provided, full delegator information is also returned. Otherwise, only the delegators' count and total weight are returned. 
# platform.getL1Validator (/docs/rpcs/p-chain/validators/platform_getL1Validator) --- title: platform.getL1Validator full: true _openapi: method: POST route: /ext/bc/P#platform.getL1Validator toc: [] structuredData: headings: [] contents: - content: GetL1Validator returns the L1 validator if it exists --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} GetL1Validator returns the L1 validator if it exists # platform.getValidatorsAt (/docs/rpcs/p-chain/validators/platform_getValidatorsAt) --- title: platform.getValidatorsAt full: true _openapi: method: POST route: /ext/bc/P#platform.getValidatorsAt toc: [] structuredData: headings: [] contents: - content: >- GetValidatorsAt returns the weights of the validator set of a provided subnet at the specified height. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} GetValidatorsAt returns the weights of the validator set of a provided subnet at the specified height. # platform.sampleValidators (/docs/rpcs/p-chain/validators/platform_sampleValidators) --- title: platform.sampleValidators full: true _openapi: method: POST route: /ext/bc/P#platform.sampleValidators toc: [] structuredData: headings: [] contents: - content: SampleValidators returns a sampling of the list of current validators --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} SampleValidators returns a sampling of the list of current validators # avax.getAtomicTx (/docs/rpcs/c-chain/avalanche/avax_getAtomicTx) --- title: avax.getAtomicTx full: true _openapi: method: POST route: /ext/bc/C/avax#avax.getAtomicTx toc: [] structuredData: headings: [] contents: - content: Returns the specified atomic transaction. --- {/* This file was generated by Fumadocs. 
Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns the specified atomic transaction. # avax.getAtomicTxStatus (/docs/rpcs/c-chain/avalanche/avax_getAtomicTxStatus) --- title: avax.getAtomicTxStatus full: true _openapi: method: POST route: /ext/bc/C/avax#avax.getAtomicTxStatus toc: [] structuredData: headings: [] contents: - content: Returns the status of the specified atomic transaction. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns the status of the specified atomic transaction. # avax.getUTXOs (/docs/rpcs/c-chain/avalanche/avax_getUTXOs) --- title: avax.getUTXOs full: true _openapi: method: POST route: /ext/bc/C/avax#avax.getUTXOs toc: [] structuredData: headings: [] contents: - content: Gets all UTXOs for the specified addresses. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Gets all UTXOs for the specified addresses. # avax.issueTx (/docs/rpcs/c-chain/avalanche/avax_issueTx) --- title: avax.issueTx full: true _openapi: method: POST route: /ext/bc/C/avax#avax.issueTx toc: [] structuredData: headings: [] contents: - content: Issues a transaction to the network. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Issues a transaction to the network. # eth_suggestPriceOptions (/docs/rpcs/c-chain/avalanche/eth_suggestPriceOptions) --- title: eth_suggestPriceOptions full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_suggestPriceOptions toc: [] structuredData: headings: [] contents: - content: Returns suggested fee options (Coreth extension). --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. 
*/} Returns suggested fee options (Coreth extension). # debug_getRawBlock (/docs/rpcs/c-chain/debug/debug_getRawBlock) --- title: debug_getRawBlock full: true _openapi: method: POST route: /ext/bc/C/rpc#debug_getRawBlock toc: [] structuredData: headings: [] contents: - content: > Returns RLP-encoded bytes of a single block by number or hash. **⚠️ Note:** This method is NOT available on public RPC endpoints. You must run your own node or use a dedicated node service to access debug methods. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns RLP-encoded bytes of a single block by number or hash. **⚠️ Note:** This method is NOT available on public RPC endpoints. You must run your own node or use a dedicated node service to access debug methods. # debug_getRawTransaction (/docs/rpcs/c-chain/debug/debug_getRawTransaction) --- title: debug_getRawTransaction full: true _openapi: method: POST route: /ext/bc/C/rpc#debug_getRawTransaction toc: [] structuredData: headings: [] contents: - content: > Returns the bytes of the transaction for the given hash. **⚠️ Note:** This method is NOT available on public RPC endpoints. You must run your own node or use a dedicated node service to access debug methods. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns the bytes of the transaction for the given hash. **⚠️ Note:** This method is NOT available on public RPC endpoints. You must run your own node or use a dedicated node service to access debug methods. # eth_accounts (/docs/rpcs/c-chain/eth/eth_accounts) --- title: eth_accounts full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_accounts toc: [] structuredData: headings: [] contents: - content: > Returns a list of addresses owned by the client. 
**Note:** Public RPC endpoints will return an empty array as they don't manage any accounts. This method only returns accounts when using a node you control with unlocked accounts. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns a list of addresses owned by the client. **Note:** Public RPC endpoints will return an empty array as they don't manage any accounts. This method only returns accounts when using a node you control with unlocked accounts. # eth_blockNumber (/docs/rpcs/c-chain/eth/eth_blockNumber) --- title: eth_blockNumber full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_blockNumber toc: [] structuredData: headings: [] contents: - content: Returns the block number of the chain head. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns the block number of the chain head. # eth_call (/docs/rpcs/c-chain/eth/eth_call) --- title: eth_call full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_call toc: [] structuredData: headings: [] contents: - content: Executes a call without sending a transaction. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Executes a call without sending a transaction. # eth_chainId (/docs/rpcs/c-chain/eth/eth_chainId) --- title: eth_chainId full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_chainId toc: [] structuredData: headings: [] contents: - content: Returns the chain ID of the current network. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns the chain ID of the current network. 
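The `eth_chainId` result is a hex-encoded quantity. On the Avalanche C-Chain this is `0xa86a` (43114) for mainnet and `0xa869` (43113) for the Fuji testnet. A decoding sketch:

```python
def decode_chain_id(hex_result: str) -> int:
    """Decode the hex quantity returned by eth_chainId."""
    return int(hex_result, 16)

assert decode_chain_id("0xa86a") == 43114  # C-Chain mainnet
assert decode_chain_id("0xa869") == 43113  # Fuji testnet
```

Wallets and clients typically compare this value against their configured network before signing, so checking it is a cheap sanity test that you are talking to the chain you expect.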
# eth_coinbase (/docs/rpcs/c-chain/eth/eth_coinbase) --- title: eth_coinbase full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_coinbase toc: [] structuredData: headings: [] contents: - content: > Returns the client coinbase address. **Note:** On public RPC endpoints, this returns the node operator's coinbase address, not yours. This method is mainly useful when running your own node. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns the client coinbase address. **Note:** On public RPC endpoints, this returns the node operator's coinbase address, not yours. This method is mainly useful when running your own node. # eth_estimateGas (/docs/rpcs/c-chain/eth/eth_estimateGas) --- title: eth_estimateGas full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_estimateGas toc: [] structuredData: headings: [] contents: - content: Returns the lowest gas limit for a transaction to succeed. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns the lowest gas limit for a transaction to succeed. # eth_feeHistory (/docs/rpcs/c-chain/eth/eth_feeHistory) --- title: eth_feeHistory full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_feeHistory toc: [] structuredData: headings: [] contents: - content: Returns the fee market history. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns the fee market history. # eth_gasPrice (/docs/rpcs/c-chain/eth/eth_gasPrice) --- title: eth_gasPrice full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_gasPrice toc: [] structuredData: headings: [] contents: - content: Returns a suggestion for a legacy gas price. --- {/* This file was generated by Fumadocs. Do not edit this file directly. 
Any changes should be made by running the generation command again. */} Returns a suggestion for a legacy gas price. # eth_getBalance (/docs/rpcs/c-chain/eth/eth_getBalance) --- title: eth_getBalance full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_getBalance toc: [] structuredData: headings: [] contents: - content: Returns the balance of the account of given address. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns the balance of the account of given address. # eth_getBlockByHash (/docs/rpcs/c-chain/eth/eth_getBlockByHash) --- title: eth_getBlockByHash full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_getBlockByHash toc: [] structuredData: headings: [] contents: - content: >- Returns the block for a given hash. Second param selects full transactions. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns the block for a given hash. Second param selects full transactions. # eth_getBlockByNumber (/docs/rpcs/c-chain/eth/eth_getBlockByNumber) --- title: eth_getBlockByNumber full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_getBlockByNumber toc: [] structuredData: headings: [] contents: - content: >- Returns the block for a given number. Second param selects full transactions. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns the block for a given number. Second param selects full transactions. 
# eth_getBlockTransactionCountByHash (/docs/rpcs/c-chain/eth/eth_getBlockTransactionCountByHash) --- title: eth_getBlockTransactionCountByHash full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_getBlockTransactionCountByHash toc: [] structuredData: headings: [] contents: - content: >- Returns the number of transactions in a block from a block matching the given block hash. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns the number of transactions in a block from a block matching the given block hash. # eth_getBlockTransactionCountByNumber (/docs/rpcs/c-chain/eth/eth_getBlockTransactionCountByNumber) --- title: eth_getBlockTransactionCountByNumber full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_getBlockTransactionCountByNumber toc: [] structuredData: headings: [] contents: - content: >- Returns the number of transactions in a block matching the given block number. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns the number of transactions in a block matching the given block number. # eth_getCode (/docs/rpcs/c-chain/eth/eth_getCode) --- title: eth_getCode full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_getCode toc: [] structuredData: headings: [] contents: - content: Returns code at a given address. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns code at a given address. # eth_getFilterChanges (/docs/rpcs/c-chain/eth/eth_getFilterChanges) --- title: eth_getFilterChanges full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_getFilterChanges toc: [] structuredData: headings: [] contents: - content: >- Polling method for a filter, which returns an array of logs which occurred since last poll. 
--- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Polling method for a filter, which returns an array of logs which occurred since last poll. # eth_getFilterLogs (/docs/rpcs/c-chain/eth/eth_getFilterLogs) --- title: eth_getFilterLogs full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_getFilterLogs toc: [] structuredData: headings: [] contents: - content: Returns an array of all logs matching filter with given id. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns an array of all logs matching filter with given id. # eth_getLogs (/docs/rpcs/c-chain/eth/eth_getLogs) --- title: eth_getLogs full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_getLogs toc: [] structuredData: headings: [] contents: - content: Returns logs matching the given filter object. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns logs matching the given filter object. # eth_getStorageAt (/docs/rpcs/c-chain/eth/eth_getStorageAt) --- title: eth_getStorageAt full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_getStorageAt toc: [] structuredData: headings: [] contents: - content: Returns the value from a storage position at a given address. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns the value from a storage position at a given address. 
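`eth_getStorageAt` likewise takes positional params: the contract address, a hex-encoded storage position, and a block tag. A sketch (the zero address is a placeholder):

```python
def storage_at_payload(address: str, slot: int, block: str = "latest") -> dict:
    """Build an eth_getStorageAt request; the slot is hex-encoded on the wire."""
    return {"jsonrpc": "2.0", "id": 1,
            "method": "eth_getStorageAt",
            "params": [address, hex(slot), block]}

# Slot 0 holds the first declared variable in a simple Solidity storage layout;
# mapping and dynamic-array slots require keccak-based derivation, not shown here.
print(storage_at_payload("0x0000000000000000000000000000000000000000", 0)["params"])
```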
# eth_getTransactionByBlockHashAndIndex (/docs/rpcs/c-chain/eth/eth_getTransactionByBlockHashAndIndex) --- title: eth_getTransactionByBlockHashAndIndex full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_getTransactionByBlockHashAndIndex toc: [] structuredData: headings: [] contents: - content: >- Returns information about a transaction by block hash and transaction index position. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns information about a transaction by block hash and transaction index position. # eth_getTransactionByBlockNumberAndIndex (/docs/rpcs/c-chain/eth/eth_getTransactionByBlockNumberAndIndex) --- title: eth_getTransactionByBlockNumberAndIndex full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_getTransactionByBlockNumberAndIndex toc: [] structuredData: headings: [] contents: - content: >- Returns information about a transaction by block number and transaction index position. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns information about a transaction by block number and transaction index position. # eth_getTransactionByHash (/docs/rpcs/c-chain/eth/eth_getTransactionByHash) --- title: eth_getTransactionByHash full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_getTransactionByHash toc: [] structuredData: headings: [] contents: - content: >- Returns the information about a transaction requested by transaction hash. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns the information about a transaction requested by transaction hash. 
# eth_getTransactionCount (/docs/rpcs/c-chain/eth/eth_getTransactionCount) --- title: eth_getTransactionCount full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_getTransactionCount toc: [] structuredData: headings: [] contents: - content: Returns the number of transactions sent from an address. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns the number of transactions sent from an address. # eth_getTransactionReceipt (/docs/rpcs/c-chain/eth/eth_getTransactionReceipt) --- title: eth_getTransactionReceipt full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_getTransactionReceipt toc: [] structuredData: headings: [] contents: - content: Returns the receipt of a transaction by transaction hash. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns the receipt of a transaction by transaction hash. # eth_maxPriorityFeePerGas (/docs/rpcs/c-chain/eth/eth_maxPriorityFeePerGas) --- title: eth_maxPriorityFeePerGas full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_maxPriorityFeePerGas toc: [] structuredData: headings: [] contents: - content: Returns a suggestion for a tip cap for dynamic fee transactions. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns a suggestion for a tip cap for dynamic fee transactions. # eth_newBlockFilter (/docs/rpcs/c-chain/eth/eth_newBlockFilter) --- title: eth_newBlockFilter full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_newBlockFilter toc: [] structuredData: headings: [] contents: - content: > Creates a filter in the node, to notify when a new block arrives. 
**⚠️ Note:** Filter methods may not be available or may have limited functionality on public RPC endpoints as they require server-side state management. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Creates a filter in the node, to notify when a new block arrives. **⚠️ Note:** Filter methods may not be available or may have limited functionality on public RPC endpoints as they require server-side state management. # eth_newFilter (/docs/rpcs/c-chain/eth/eth_newFilter) --- title: eth_newFilter full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_newFilter toc: [] structuredData: headings: [] contents: - content: > Creates a filter object, based on filter options, to notify when the state changes (logs). **⚠️ Note:** Filter methods may not be available or may have limited functionality on public RPC endpoints as they require server-side state management. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Creates a filter object, based on filter options, to notify when the state changes (logs). **⚠️ Note:** Filter methods may not be available or may have limited functionality on public RPC endpoints as they require server-side state management. # eth_newPendingTransactionFilter (/docs/rpcs/c-chain/eth/eth_newPendingTransactionFilter) --- title: eth_newPendingTransactionFilter full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_newPendingTransactionFilter toc: [] structuredData: headings: [] contents: - content: > Creates a filter in the node, to notify when new pending transactions arrive. **⚠️ Note:** Filter methods may not be available or may have limited functionality on public RPC endpoints as they require server-side state management. --- {/* This file was generated by Fumadocs. Do not edit this file directly. 
Any changes should be made by running the generation command again. */} Creates a filter in the node, to notify when new pending transactions arrive. **⚠️ Note:** Filter methods may not be available or may have limited functionality on public RPC endpoints as they require server-side state management. # eth_protocolVersion (/docs/rpcs/c-chain/eth/eth_protocolVersion) --- title: eth_protocolVersion full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_protocolVersion toc: [] structuredData: headings: [] contents: - content: Returns the current ethereum protocol version. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns the current ethereum protocol version. # eth_sendRawTransaction (/docs/rpcs/c-chain/eth/eth_sendRawTransaction) --- title: eth_sendRawTransaction full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_sendRawTransaction toc: [] structuredData: headings: [] contents: - content: Submits a signed raw transaction. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Submits a signed raw transaction. # eth_sign (/docs/rpcs/c-chain/eth/eth_sign) --- title: eth_sign full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_sign toc: [] structuredData: headings: [] contents: - content: > Signs data with a given address. **⚠️ Security Note:** This method is typically disabled on public RPC endpoints for security reasons. It requires access to private keys and should only be used on nodes you control. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Signs data with a given address. **⚠️ Security Note:** This method is typically disabled on public RPC endpoints for security reasons. 
It requires access to private keys and should only be used on nodes you control. # eth_uninstallFilter (/docs/rpcs/c-chain/eth/eth_uninstallFilter) --- title: eth_uninstallFilter full: true _openapi: method: POST route: /ext/bc/C/rpc#eth_uninstallFilter toc: [] structuredData: headings: [] contents: - content: Uninstalls a filter with given id. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Uninstalls a filter with given id. # net_version (/docs/rpcs/c-chain/net/net_version) --- title: net_version full: true _openapi: method: POST route: /ext/bc/C/rpc#net_version toc: [] structuredData: headings: [] contents: - content: Returns the current network ID as a string. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns the current network ID as a string. # personal_newAccount (/docs/rpcs/c-chain/personal/personal_newAccount) --- title: personal_newAccount full: true _openapi: method: POST route: /ext/bc/C/rpc#personal_newAccount toc: [] structuredData: headings: [] contents: - content: > Creates a new account with the given password. **⚠️ Security Note:** This method is NOT available on public RPC endpoints for security reasons. Personal methods manage private keys and must only be used on nodes you control. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Creates a new account with the given password. **⚠️ Security Note:** This method is NOT available on public RPC endpoints for security reasons. Personal methods manage private keys and must only be used on nodes you control. 
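The C-Chain methods documented on these pages all share the same JSON-RPC 2.0 envelope, POSTed to the `/ext/bc/C/rpc` route. As a hedged sketch (the node address and the zero address below are placeholder assumptions, not values from this reference), this is how a request body for one of the public methods can be assembled:

```python
import json

# Sketch: build a JSON-RPC 2.0 request body for a C-Chain method.
# The endpoint host and the zero address are illustrative placeholders.
RPC_URL = "http://127.0.0.1:9650/ext/bc/C/rpc"  # assumed local node

def make_request(method, params, request_id=1):
    """Assemble a JSON-RPC 2.0 request body."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": method,
        "params": params,
    }

body = make_request(
    "eth_getTransactionCount",
    ["0x0000000000000000000000000000000000000000", "latest"],
)
print(json.dumps(body))
# POST this body to RPC_URL with Content-Type: application/json.
# Methods flagged above (personal_*, eth_sign, filters) additionally
# require a node you control rather than a public endpoint.
```

The same envelope works for every method listed here; only `method` and `params` change.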
# personal_unlockAccount (/docs/rpcs/c-chain/personal/personal_unlockAccount) --- title: personal_unlockAccount full: true _openapi: method: POST route: /ext/bc/C/rpc#personal_unlockAccount toc: [] structuredData: headings: [] contents: - content: > Unlocks an account with the given password and optional duration. **⚠️ Security Note:** This method is NOT available on public RPC endpoints for security reasons. Personal methods manage private keys and must only be used on nodes you control. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Unlocks an account with the given password and optional duration. **⚠️ Security Note:** This method is NOT available on public RPC endpoints for security reasons. Personal methods manage private keys and must only be used on nodes you control. # txpool_status (/docs/rpcs/c-chain/txpool/txpool_status) --- title: txpool_status full: true _openapi: method: POST route: /ext/bc/C/rpc#txpool_status toc: [] structuredData: headings: [] contents: - content: > Returns transaction pool status. **⚠️ Note:** This method may be restricted or rate-limited on public RPC endpoints. Consider running your own node for unrestricted access. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns transaction pool status. **⚠️ Note:** This method may be restricted or rate-limited on public RPC endpoints. Consider running your own node for unrestricted access. # web3_clientVersion (/docs/rpcs/c-chain/web3/web3_clientVersion) --- title: web3_clientVersion full: true _openapi: method: POST route: /ext/bc/C/rpc#web3_clientVersion toc: [] structuredData: headings: [] contents: - content: Returns the client version. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. 
*/} Returns the client version. # Deploy Custom VM (/docs/tooling/avalanche-cli/create-avalanche-nodes/deploy-custom-vm) --- title: Deploy Custom VM description: This page demonstrates how to deploy a custom VM into cloud-based validators using Avalanche-CLI. --- Currently, only Fuji network and Devnets are supported. ALPHA WARNING: This command is currently in experimental mode. Proceed at your own risk. ## Prerequisites Before we begin, you will need to have: - Created a cloud server node as described [here](/docs/tooling/avalanche-cli/create-avalanche-nodes/run-validators-aws) - Created a Custom VM, as described [here](/docs/primary-network/virtual-machines). - (Ignore for Devnet) Set up a key to be able to pay for transaction fees, as described [here](/docs/tooling/avalanche-cli/create-deploy-avalanche-l1s/deploy-on-fuji-testnet). Currently, only AWS & GCP cloud services are supported. Deploying the VM[​](#deploying-the-vm "Direct link to heading") --------------------------------------------------------------- We will be deploying the [MorpheusVM](https://github.com/ava-labs/hypersdk/tree/main/examples/morpheusvm) example built with the HyperSDK. The following settings will be used: - Repo URL: `https://github.com/ava-labs/hypersdk/` - Branch Name: `vryx-poc` - Build Script: `examples/morpheusvm/scripts/build.sh` The CLI needs a public repo URL in order to download and install the custom VM on the cloud servers. ### Genesis File[​](#genesis-file "Direct link to heading") The following contents will serve as the chain genesis. They were generated using `morpheus-cli` as shown [here](https://github.com/ava-labs/hypersdk/blob/main/examples/morpheusvm/scripts/run.sh).
Save it into a file with path `` (for example `~/morpheusvm_genesis.json`):

```json
{
  "stateBranchFactor": 16,
  "minBlockGap": 1000,
  "minUnitPrice": [1, 1, 1, 1, 1],
  "maxChunkUnits": [1800000, 18446744073709551615, 18446744073709551615, 18446744073709551615, 18446744073709551615],
  "epochDuration": 60000,
  "validityWindow": 59000,
  "partitions": 8,
  "baseUnits": 1,
  "baseWarpUnits": 1024,
  "warpUnitsPerSigner": 128,
  "outgoingWarpComputeUnits": 1024,
  "storageKeyReadUnits": 5,
  "storageValueReadUnits": 2,
  "storageKeyAllocateUnits": 20,
  "storageValueAllocateUnits": 5,
  "storageKeyWriteUnits": 10,
  "storageValueWriteUnits": 3,
  "customAllocation": [
    {"address": "morpheus1qrzvk4zlwj9zsacqgtufx7zvapd3quufqpxk5rsdd4633m4wz2fdjk97rwu", "balance": 3000000000000000000},
    {"address": "morpheus1qryyvfut6td0l2vwn8jwae0pmmev7eqxs2vw0fxpd2c4lr37jj7wvrj4vc3", "balance": 3000000000000000000},
    {"address": "morpheus1qp52zjc3ul85309xn9stldfpwkseuth5ytdluyl7c5mvsv7a4fc76g6c4w4", "balance": 3000000000000000000},
    {"address": "morpheus1qzqjp943t0tudpw06jnvakdc0y8w790tzk7suc92aehjw0epvj93s0uzasn", "balance": 3000000000000000000},
    {"address": "morpheus1qz97wx3vl3upjuquvkulp56nk20l3jumm3y4yva7v6nlz5rf8ukty8fh27r", "balance": 3000000000000000000}
  ]
}
```

Create the Avalanche L1[​](#create-the-avalanche-l1 "Direct link to heading")
-----------------------------------------------------------------

Let's create an Avalanche L1 called ``, with a custom VM binary and genesis.

```bash
avalanche blockchain create
```

Choose `Custom`:

```bash
Use the arrow keys to navigate: ↓ ↑ → ←
? Choose your VM:
    Subnet-EVM
  ▸ Custom
```

Provide the path to the genesis:

```bash
✗ Enter path to custom genesis:
```

Provide the source code repo URL:

```bash
✗ Source code repository URL: https://github.com/ava-labs/hypersdk/
```

Set the branch and finally set the build script:

```bash
✗ Build script: examples/morpheusvm/scripts/build.sh
```

The CLI will generate a locally compiled binary and then create the Avalanche L1.

```bash
Cloning into ...
Successfully created subnet configuration
```

## Deploy Avalanche L1

For this example, we will deploy the Avalanche L1 and blockchain on Fuji. Run:

```bash
avalanche blockchain deploy
```

Choose Fuji:

```bash
Use the arrow keys to navigate: ↓ ↑ → ←
? Choose a network to deploy on:
    Local Network
  ▸ Fuji
    Mainnet
```

Use the stored key:

```bash
Use the arrow keys to navigate: ↓ ↑ → ←
? Which key source should be used to pay transaction fees?:
  ▸ Use stored key
    Use ledger
```

Choose `` as the key to use to pay the fees:

```bash
Use the arrow keys to navigate: ↓ ↑ → ←
? Which stored key should be used to pay transaction fees?:
  ▸
```

Use the same key as the control key for the Avalanche L1:

```bash
Use the arrow keys to navigate: ↓ ↑ → ←
? How would you like to set your control keys?:
  ▸ Use fee-paying key
    Use all stored keys
    Custom list
```

The successful creation of our Avalanche L1 and blockchain is confirmed by the following output:

```bash
Your Subnet's control keys: [P-fuji1dlwux652lkflgz79g3nsphjzvl6t35xhmunfk1]
Your subnet auth keys for chain creation: [P-fuji1dlwux652lkflgz79g3nsphjzvl6t35xhmunfk1]
Subnet has been created with ID: RU72cWmBmcXber6ZBPT7R5scFFuVSoFRudcS3vayf3L535ZE3
Now creating blockchain...
+--------------------+----------------------------------------------------+
| DEPLOYMENT RESULTS |                                                    |
+--------------------+----------------------------------------------------+
| Chain Name         | blockchainName                                     |
+--------------------+----------------------------------------------------+
| Subnet ID          | RU72cWmBmcXber6ZBPT7R5scFFuVSoFRudcS3vayf3L535ZE3  |
+--------------------+----------------------------------------------------+
| VM ID              | srEXiWaHq58RK6uZMmUNaMF2FzG7vPzREsiXsptAHk9gsZNvN  |
+--------------------+----------------------------------------------------+
| Blockchain ID      | 2aDgZRYcSBsNoLCsC8qQH6iw3kUSF5DbRHM4sGEqVKwMSfBDRf |
+--------------------+----------------------------------------------------+
| P-Chain TXID       |                                                    |
+--------------------+----------------------------------------------------+
```

Set the Config Files[​](#set-the-config-files "Direct link to heading")
-----------------------------------------------------------------------

Avalanche-CLI supports uploading the full set of configuration files for a blockchain:

- Genesis File
- Blockchain Config
- Avalanche L1 Config
- Network Upgrades
- AvalancheGo Config

The following example uses all of them, but you can also provide just a subset.
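Before uploading these files, it can help to sanity-check that each one parses as JSON and that the genesis allocations add up. A minimal Python sketch, illustrative only — it repeats just two of the five allocations from the example genesis above, and the key list it checks is taken from that example rather than a formal schema:

```python
import json

# Illustrative sketch: sanity-check a MorpheusVM-style genesis file.
# Only two of the example's five allocations are repeated here for brevity.
genesis_text = """
{
  "stateBranchFactor": 16,
  "minBlockGap": 1000,
  "customAllocation": [
    {"address": "morpheus1qrzvk4zlwj9zsacqgtufx7zvapd3quufqpxk5rsdd4633m4wz2fdjk97rwu",
     "balance": 3000000000000000000},
    {"address": "morpheus1qryyvfut6td0l2vwn8jwae0pmmev7eqxs2vw0fxpd2c4lr37jj7wvrj4vc3",
     "balance": 3000000000000000000}
  ]
}
"""

genesis = json.loads(genesis_text)

# Check a few fields the example genesis defines (not an exhaustive schema).
for key in ("stateBranchFactor", "minBlockGap", "customAllocation"):
    assert key in genesis, f"genesis is missing {key}"

total = sum(entry["balance"] for entry in genesis["customAllocation"])
print(len(genesis["customAllocation"]), "allocations, total balance", total)
```

The same `json.loads` round-trip is a cheap check for `node-config.json`, `chain.json`, and `subnet.json` before handing them to `avalanche blockchain configure`.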
### AvalancheGo Flags[​](#avalanchego-flags "Direct link to heading") Save the following content (as defined [here](https://github.com/ava-labs/hypersdk/blob/vryx-poc/examples/morpheusvm/tests/e2e/e2e_test.go)) into a file with path `` (for example `~/morpheusvm_avago.json`): ```json { "log-level":"INFO", "log-display-level":"INFO", "proposervm-use-current-height":true, "throttler-inbound-validator-alloc-size":"10737418240", "throttler-inbound-at-large-alloc-size":"10737418240", "throttler-inbound-node-max-processing-msgs":"1000000", "throttler-inbound-node-max-at-large-bytes":"10737418240", "throttler-inbound-bandwidth-refill-rate":"1073741824", "throttler-inbound-bandwidth-max-burst-size":"1073741824", "throttler-inbound-cpu-validator-alloc":"100000", "throttler-inbound-cpu-max-non-validator-usage":"100000", "throttler-inbound-cpu-max-non-validator-node-usage":"100000", "throttler-inbound-disk-validator-alloc":"10737418240000", "throttler-outbound-validator-alloc-size":"10737418240", "throttler-outbound-at-large-alloc-size":"10737418240", "throttler-outbound-node-max-at-large-bytes":"10737418240", "consensus-on-accept-gossip-validator-size":"10", "consensus-on-accept-gossip-peer-size":"10", "network-compression-type":"zstd", "consensus-app-concurrency":"128", "profile-continuous-enabled":true, "profile-continuous-freq":"1m", "http-host":"", "http-allowed-origins": "*", "http-allowed-hosts": "*" } ``` Then set the Avalanche L1 to use it by executing: ```bash avalanche blockchain configure blockchainName ``` Select node-config.json: ```bash Use the arrow keys to navigate: ↓ ↑ → ← ? Which configuration file would you like to provide?: ▸ node-config.json chain.json subnet.json per-node-chain.json ``` Provide the path to the AvalancheGo config file: ```bash ✗ Enter the path to your configuration file: ``` Finally, choose no: ```bash Use the arrow keys to navigate: ↓ ↑ → ← ? 
Would you like to provide the chain.json file as well?:
  ▸ No
    Yes
File ~/.avalanche-cli/subnets/blockchainName/node-config.json successfully written
```

### Blockchain Config[​](#blockchain-config "Direct link to heading")

Save the following content (generated by this [script](https://github.com/ava-labs/hypersdk/blob/vryx-poc/examples/morpheusvm/scripts/run.sh)) in a known file path (for example `~/morpheusvm_chain.json`):

```json
{
  "chunkBuildFrequency": 250,
  "targetChunkBuildDuration": 250,
  "blockBuildFrequency": 100,
  "mempoolSize": 2147483648,
  "mempoolSponsorSize": 10000000,
  "authExecutionCores": 16,
  "precheckCores": 16,
  "actionExecutionCores": 8,
  "missingChunkFetchers": 48,
  "verifyAuth": true,
  "authRPCCores": 48,
  "authRPCBacklog": 10000000,
  "authGossipCores": 16,
  "authGossipBacklog": 10000000,
  "chunkStorageCores": 16,
  "chunkStorageBacklog": 10000000,
  "streamingBacklogSize": 10000000,
  "continuousProfilerDir": "/home/ubuntu/morpheusvm-profiles",
  "logLevel": "INFO"
}
```

Then set the Avalanche L1 to use it by executing:

```bash
avalanche blockchain configure blockchainName
```

Select `chain.json`:

```bash
Use the arrow keys to navigate: ↓ ↑ → ←
? Which configuration file would you like to provide?:
    node-config.json
  ▸ chain.json
    subnet.json
    per-node-chain.json
```

Provide the path to the blockchain config file:

```bash
✗ Enter the path to your configuration file: ~/morpheusvm_chain.json
```

Finally, choose no:

```bash
Use the arrow keys to navigate: ↓ ↑ → ← ?
Would you like to provide the subnet.json file as well?:
  ▸ No
    Yes
File ~/.avalanche-cli/subnets/blockchainName/chain.json successfully written
```

### Avalanche L1 Config[​](#avalanche-l1-config "Direct link to heading")

Save the following content (generated by this [script](https://github.com/ava-labs/hypersdk/blob/vryx-poc/examples/morpheusvm/scripts/run.sh)) in a known path (for example `~/morpheusvm_subnet.json`):

```json
{
  "proposerMinBlockDelay": 0,
  "proposerNumHistoricalBlocks": 512
}
```

Then set the Avalanche L1 to use it by executing:

```bash
avalanche blockchain configure blockchainName
```

Select `subnet.json`:

```bash
Use the arrow keys to navigate: ↓ ↑ → ←
? Which configuration file would you like to provide?:
    node-config.json
    chain.json
  ▸ subnet.json
    per-node-chain.json
```

Provide the path to the Avalanche L1 config file:

```bash
✗ Enter the path to your configuration file: ~/morpheusvm_subnet.json
```

Choose no:

```bash
Use the arrow keys to navigate: ↓ ↑ → ←
? Would you like to provide the chain.json file as well?:
  ▸ No
    Yes
File ~/.avalanche-cli/subnets/blockchainName/subnet.json successfully written
```

### Network Upgrades[​](#network-upgrades "Direct link to heading")

Create a network upgrades file (currently containing no upgrades) in a known path (for example `~/morpheusvm_upgrades.json`). Then set the Avalanche L1 to use it by executing:

```bash
avalanche blockchain upgrade import blockchainName
```

Provide the path to the network upgrades file:

```bash
✗ Provide the path to the upgrade file to import: ~/morpheusvm_upgrades.json
```

Deploy Our Custom VM[​](#deploy-our-custom-vm "Direct link to heading")
-----------------------------------------------------------------------

To deploy our Custom VM, run:

```bash
avalanche node sync
```

```bash
Node(s) successfully started syncing with Subnet!
```

Your custom VM is successfully deployed!
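After `avalanche node sync` reports that the nodes started syncing, per-chain progress can be polled through AvalancheGo's `info.isBootstrapped` API. A hedged sketch of the request body — the node address below is an assumed placeholder, not taken from this page; substitute one of your cloud servers:

```python
import json

# Sketch: request body for AvalancheGo's info.isBootstrapped API.
# The host below is an assumed placeholder -- substitute a cloud node's IP.
INFO_URL = "http://127.0.0.1:9650/ext/info"

def is_bootstrapped_request(chain):
    """Build the JSON-RPC body asking whether `chain` finished bootstrapping."""
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "info.isBootstrapped",
        "params": {"chain": chain},
    }

# Every validator must sync the P-Chain, so that is a natural first check.
print(json.dumps(is_bootstrapped_request("P")))
# POST this to INFO_URL with Content-Type: application/json; once the chain
# is synced, the reply's result contains "isBootstrapped": true.
```

The same body with your blockchain's ID or alias as `chain` checks the custom VM's chain itself.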
You can also use `avalanche node update blockchain ` to reinstall the binary when the branch is updated, or to update the config files. # Execute SSH Command (/docs/tooling/avalanche-cli/create-avalanche-nodes/execute-ssh-commands) --- title: Execute SSH Command description: This page demonstrates how to execute an SSH command on a Cluster or Node managed by Avalanche-CLI --- ALPHA WARNING: This command is currently in experimental mode. Proceed at your own risk. ## Prerequisites Before we begin, you will need to have a cluster managed by CLI, either a [Fuji Cluster using AWS](/docs/tooling/avalanche-cli/create-avalanche-nodes/run-validators-aws), a [Fuji Cluster using GCP](/docs/tooling/avalanche-cli/create-avalanche-nodes/run-validators-gcp), or a [Devnet](/docs/tooling/avalanche-cli/create-avalanche-nodes/setup-devnet). SSH Warning[​](#ssh-warning "Direct link to heading") ----------------------------------------------------- Note: an expected warning may be seen when executing a command on a given cluster for the first time: ```bash Warning: Permanently added 'IP' (ED25519) to the list of known hosts.
``` Get SSH Connection Instructions for All Clusters[​](#get-ssh-connection-instructions-for-all-clusters "Direct link to heading") ------------------------------------------------------------------------------------------------------------------------------- Just execute `node ssh`: ```bash avalanche node ssh Cluster "" (Devnet) [i-0cf58a280bf3ef9a1] ssh -o IdentitiesOnly=yes -o StrictHostKeyChecking=no [email protected] -i /home/fm/.ssh/fm-us-east-1-avalanche-cli-us-east-1-kp.pem [i-0e2abd71a586e56b4] ssh -o IdentitiesOnly=yes -o StrictHostKeyChecking=no [email protected] -i /home/fm/.ssh/fm-us-east-1-avalanche-cli-us-east-1-kp.pem [i-027417a4f2ca0a478] ssh -o IdentitiesOnly=yes -o StrictHostKeyChecking=no [email protected] -i /home/fm/.ssh/fm-us-east-1-avalanche-cli-us-east-1-kp.pem [i-0360a867aa295d8a4] ssh -o IdentitiesOnly=yes -o StrictHostKeyChecking=no [email protected] -i /home/fm/.ssh/fm-us-east-1-avalanche-cli-us-east-1-kp.pem [i-0759b102acfd5b585] ssh -o IdentitiesOnly=yes -o StrictHostKeyChecking=no [email protected] -i /home/fm/.ssh/fm-us-east-1-avalanche-cli-us-east-1-kp.pem ``` Get the AvalancheGo PID for All Nodes in ``[​](#get-the-avalanchego-pid-for-all-nodes-in-clustername "Direct link to heading") ------------------------------------------------------------------------------------------------------------------------------------------- ```bash avalanche node ssh pgrep avalanchego [i-0cf58a280bf3ef9a1] ssh -o IdentitiesOnly=yes -o StrictHostKeyChecking=no [email protected] -i /home/fm/.ssh/fm-us-east-1-avalanche-cli-us-east-1-kp.pem pgrep avalanchego 14508 [i-0e2abd71a586e56b4] ssh -o IdentitiesOnly=yes -o StrictHostKeyChecking=no [email protected] -i /home/fm/.ssh/fm-us-east-1-avalanche-cli-us-east-1-kp.pem pgrep avalanchego 14555 [i-027417a4f2ca0a478] ssh -o IdentitiesOnly=yes -o StrictHostKeyChecking=no [email protected] -i /home/fm/.ssh/fm-us-east-1-avalanche-cli-us-east-1-kp.pem pgrep avalanchego 14545 [i-0360a867aa295d8a4] ssh -o 
IdentitiesOnly=yes -o StrictHostKeyChecking=no [email protected] -i /home/fm/.ssh/fm-us-east-1-avalanche-cli-us-east-1-kp.pem pgrep avalanchego 14531 [i-0759b102acfd5b585] ssh -o IdentitiesOnly=yes -o StrictHostKeyChecking=no [email protected] -i /home/fm/.ssh/fm-us-east-1-avalanche-cli-us-east-1-kp.pem pgrep avalanchego 14555 ``` Please note that commands executed via `ssh` on a cluster run sequentially by default. You can run a command on all nodes at the same time by using the `--parallel=true` flag. Get the AvalancheGo Configuration for All Nodes in ``[​](#get-the-avalanchego-configuration-for-all-nodes-in-clustername "Direct link to heading") --------------------------------------------------------------------------------------------------------------------------------------------------------------- ```bash avalanche node ssh cat /home/ubuntu/.avalanchego/configs/node.json [i-0cf58a280bf3ef9a1] ssh -o IdentitiesOnly=yes -o StrictHostKeyChecking=no [email protected] -i /home/fm/.ssh/fm-us-east-1-avalanche-cli-us-east-1-kp.pem cat /home/ubuntu/.avalanchego/configs/node.json { "bootstrap-ids": "", "bootstrap-ips": "", "genesis-file": "/home/ubuntu/.avalanchego/configs/genesis.json", "http-allowed-hosts": "*", "http-allowed-origins": "*", "http-host": "", "log-display-level": "info", "log-level": "info", "network-id": "network-1338", "public-ip": "44.219.113.190", "track-subnets": "giY8tswWgZmcAWzPkoNrmjjrykited7GJ9799SsFzTiq5a1ML" } [i-0e2abd71a586e56b4] ssh -o IdentitiesOnly=yes -o StrictHostKeyChecking=no [email protected] -i /home/fm/.ssh/fm-us-east-1-avalanche-cli-us-east-1-kp.pem cat /home/ubuntu/.avalanchego/configs/node.json { "bootstrap-ids": "NodeID-EzxsrhoumLsQSWxsohfMFrM1rJcaiaBK8", "bootstrap-ips": "44.219.113.190:9651", "genesis-file": "/home/ubuntu/.avalanchego/configs/genesis.json", "http-allowed-hosts": "*", "http-allowed-origins": "*", "http-host": "", "log-display-level": "info", "log-level": "info", "network-id": "network-1338", "public-ip":
"3.212.206.161", "track-subnets": "giY8tswWgZmcAWzPkoNrmjjrykited7GJ9799SsFzTiq5a1ML" } [i-027417a4f2ca0a478] ssh -o IdentitiesOnly=yes -o StrictHostKeyChecking=no [email protected] -i /home/fm/.ssh/fm-us-east-1-avalanche-cli-us-east-1-kp.pem cat /home/ubuntu/.avalanchego/configs/node.json { "bootstrap-ids": "NodeID-EzxsrhoumLsQSWxsohfMFrM1rJcaiaBK8,NodeID-6veKG5dAz1uJvKc7qm7v6wAPDod8hctb9", "bootstrap-ips": "44.219.113.190:9651,3.212.206.161:9651", "genesis-file": "/home/ubuntu/.avalanchego/configs/genesis.json", "http-allowed-hosts": "*", "http-allowed-origins": "*", "http-host": "", "log-display-level": "info", "log-level": "info", "network-id": "network-1338", "public-ip": "54.87.168.26", "track-subnets": "giY8tswWgZmcAWzPkoNrmjjrykited7GJ9799SsFzTiq5a1ML" } [i-0360a867aa295d8a4] ssh -o IdentitiesOnly=yes -o StrictHostKeyChecking=no [email protected] -i /home/fm/.ssh/fm-us-east-1-avalanche-cli-us-east-1-kp.pem cat /home/ubuntu/.avalanchego/configs/node.json { "bootstrap-ids": "NodeID-EzxsrhoumLsQSWxsohfMFrM1rJcaiaBK8,NodeID-6veKG5dAz1uJvKc7qm7v6wAPDod8hctb9,NodeID-ASseyUweBT82XquiGpmUFjd9QfkUjxiAY", "bootstrap-ips": "44.219.113.190:9651,3.212.206.161:9651,54.87.168.26:9651", "genesis-file": "/home/ubuntu/.avalanchego/configs/genesis.json", "http-allowed-hosts": "*", "http-allowed-origins": "*", "http-host": "", "log-display-level": "info", "log-level": "info", "network-id": "network-1338", "public-ip": "3.225.42.57", "track-subnets": "giY8tswWgZmcAWzPkoNrmjjrykited7GJ9799SsFzTiq5a1ML" } [i-0759b102acfd5b585] ssh -o IdentitiesOnly=yes -o StrictHostKeyChecking=no [email protected] -i /home/fm/.ssh/fm-us-east-1-avalanche-cli-us-east-1-kp.pem cat /home/ubuntu/.avalanchego/configs/node.json { "bootstrap-ids": "NodeID-EzxsrhoumLsQSWxsohfMFrM1rJcaiaBK8,NodeID-6veKG5dAz1uJvKc7qm7v6wAPDod8hctb9,NodeID-ASseyUweBT82XquiGpmUFjd9QfkUjxiAY,NodeID-LfwbUp9dkhmWTSGffer9kNWNzqUQc2TEJ", "bootstrap-ips": "44.219.113.190:9651,3.212.206.161:9651,54.87.168.26:9651,3.225.42.57:9651", 
"genesis-file": "/home/ubuntu/.avalanchego/configs/genesis.json", "http-allowed-hosts": "*", "http-allowed-origins": "*", "http-host": "", "log-display-level": "info", "log-level": "info", "network-id": "network-1338", "public-ip": "107.21.158.224", "track-subnets": "giY8tswWgZmcAWzPkoNrmjjrykited7GJ9799SsFzTiq5a1ML" } ``` Executing Command on a Single Node[​](#executing-command-on-a-single-node "Direct link to heading") --------------------------------------------------------------------------------------------------- As we all know command can be executed on single node similar to the examples above To execute ssh command on a single node, use ``, `` or `` instead of `` as an argument. For example: ```bash avalanche node ssh i-0225fc39626b1edd3 [or] avalanche node ssh NodeID-9wdKQ3KJU3GqvgFTc4CUYvmefEFe8t6ka [or] avalanche node ssh 54.159.59.123 ``` In this case `--parallel=true` flag will be ignored Opening SSH Shell for ``[​](#opening-ssh-shell-for-nodeid "Direct link to heading") ------------------------------------------------------------------------------------------- If no command is provided, Avalanche-CLI will open an interactive session for the specified node. For example: ```bash avalanche node ssh i-0225fc39626b1edd3 [or] avalanche node ssh NodeID-9wdKQ3KJU3GqvgFTc4CUYvmefEFe8t6ka [or] avalanche node ssh 54.159.59.123 ``` Please use `exit` shell command or Ctrl+D to end this session. # Run Load Test (/docs/tooling/avalanche-cli/create-avalanche-nodes/run-loadtest) --- title: Run Load Test description: This page demonstrates how to run load test on an Avalanche L1 deployed on a cluster of cloud-based validators using Avalanche-CLI. 
--- ## Prerequisites Before we begin, you will need to have: - Created an AWS account and have an updated AWS `credentials` file in your home directory with a [default] profile, or set up your GCP account as described [here](/docs/tooling/avalanche-cli/create-avalanche-nodes/run-validators-gcp) - Created a cluster of cloud servers with monitoring enabled - Deployed an Avalanche L1 into the cluster - Added the cloud servers as validator nodes in the Avalanche L1 ## Run Load Test When the load test command is run, a new cloud server will be created to run the load test. The created cloud server is referred to by the name `` and you can use any name of your choice. To start the load test, run: ```bash avalanche node loadtest start ``` Next, you will need to provide the load test Git repository URL, the load test Git branch, the command to build the load test binary, and the command to run the load test binary. We will use an example of running a load test on an Avalanche L1 running the custom VM MorpheusVM, built with [HyperSDK](https://github.com/ava-labs/hypersdk/tree/main/examples/morpheusvm).
The following settings will be used: - Load Test Repo URL: `https://github.com/ava-labs/hypersdk/` - Load Test Branch: `vryx-poc` - Load Test Build Script: `cd /home/ubuntu/hypersdk/examples/morpheusvm; CGO_CFLAGS=\"-O -D__BLST_PORTABLE__\" go build -o ~/simulator ./cmd/morpheus-cli` - Load Test Run Script: `/home/ubuntu/simulator spam run ed25519 --accounts=10000000 --txs-per-second=100000 --min-capacity=15000 --step-size=1000 --s-zipf=1.0001 --v-zipf=2.7 --conns-per-host=10 --cluster-info=/home/ubuntu/clusterInfo.yaml --private-key=323b1d8f4eed5f0da9da93071b034f2dce9d2d22692c172f3cb252a64ddfafd01b057de320297c29ad0c1f589ea216869cf1938d88c9fbd70d6748323dbf2fa7` Once the command is run, you will be able to see the logs from the load test at the cluster's Grafana URL, as in the example below: ![Centralized Logs](/images/centralized-logs.png) ## Stop Load Test To stop the load test process on the load test instance `` and terminate the load test instance, run: ```bash avalanche node loadtest stop ``` # Run Validator on AWS (/docs/tooling/avalanche-cli/create-avalanche-nodes/run-validators-aws) --- title: Run Validator on AWS description: This page demonstrates how to deploy Avalanche validators on AWS using just one Avalanche-CLI command. --- This page demonstrates how to deploy Avalanche validators on AWS using just one Avalanche-CLI command. Currently, only Fuji network and Devnets are supported. ALPHA WARNING: This command is currently in experimental mode. Proceed at your own risk. ## Prerequisites Before we begin, you will need to create an AWS account and have an AWS `credentials` file in your home directory with the \[default\] profile set.
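That file uses the standard AWS shared credentials file format; a minimal sketch, where both key values are placeholders you must replace with your own IAM credentials:

```ini
# ~/.aws/credentials -- placeholder values, substitute your own keys
[default]
aws_access_key_id = <YOUR_ACCESS_KEY_ID>
aws_secret_access_key = <YOUR_SECRET_ACCESS_KEY>
```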
More info can be found [here](https://docs.aws.amazon.com/sdkref/latest/guide/file-format.html#file-format-creds). ## Create Validators To create Avalanche validators, run: ```bash avalanche node create ``` The created nodes will be part of cluster `clusterName` and all avalanche node commands applied to cluster `clusterName` will apply to all nodes in the cluster. Please note that running a validator on AWS will incur costs. Ava Labs is not responsible for the cost incurred from running an Avalanche validator on cloud services via Avalanche-CLI. Currently, we have set the following specs of the AWS cloud server to a fixed value, but we plan to enable customization in the near future: - OS Image: `Ubuntu 20.04 LTS (HVM), SSD Volume Type` - Storage: `1 TB` The instance type can be specified via the `--node-type` parameter or via the interactive menu. `c5.2xlarge` is the default (recommended) instance size. The command will ask which region you want to set up your cloud server in: ```bash Which AWS region do you want to set up your node in?: ▸ us-east-1 us-east-2 us-west-1 us-west-2 Choose custom region (list of regions available at https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html) ``` The command will next ask whether you want to set up monitoring for your nodes. ```bash Do you want to set up a separate instance to host monitoring? (This enables you to monitor all your set up instances in one dashboard): ▸ Yes No ``` Setting up monitoring on a separate AWS instance gives you centralized Grafana logs and dashboards for all nodes in a cluster, as seen below: ![Centralized Logs](/images/centralized-logs.png) ![Main Dashboard](/images/run-validators1.png) The separate monitoring AWS instance will have similar specs to the default AWS cloud server, except for its storage, which will be set to 50 GB.
Please note that setting up monitoring on a separate AWS instance will incur the additional cost of running one more AWS cloud server. The command will then ask which AvalancheGo version you would like to install on the cloud server. You can choose `default` (which will install the latest version) or you can enter the name of an Avalanche L1 created with CLI that you plan to validate with this node (we will get the latest version that is compatible with the deployed Avalanche L1's RPC version). Once the command has successfully completed, Avalanche-CLI outputs all the created cloud server node IDs as well as the public IP that each node can be reached at. Avalanche-CLI also outputs the command that you can use to ssh into each cloud server node. Finally, if monitoring is set up, Avalanche-CLI will also output the Grafana link where the centralized dashboards and logs can be accessed. By the end of a successful run of the `create` command, Avalanche-CLI would have: - Installed AvalancheGo on the cloud server - Installed Avalanche-CLI on the cloud server - Downloaded the `.pem` private key file to access the cloud server into your local `.ssh` directory. Back up this private key file as you will not be able to ssh into the cloud server node without it (unless `ssh-agent` is used). - Downloaded `staker.crt` and `staker.key` files to your local `.avalanche-cli` directory so that you can back up your node. More info about node backup can be found [here](/docs/nodes/maintain/backup-restore) - Started the process of bootstrapping your new Avalanche node to the Primary Network (for non-Devnet only). Please note that Avalanche-CLI can be configured to use `ssh-agent` for SSH communication. In this case, the public key will be read from the agent and the cloud server will be accessible with it. A Yubikey can also be used to store the private SSH key.
Please see the official YubiKey documentation, for example [https://developers.yubico.com/PGP/SSH_authentication/](https://developers.yubico.com/PGP/SSH_authentication/), for more details. Check Bootstrap Status --------------------------------------------------------------------------- (Ignore this section for Devnets.) Please note that you will have to wait until the nodes have finished bootstrapping before they can become Primary Network or Avalanche L1 validators. To check whether all the nodes in a cluster have finished bootstrapping, run `avalanche node status <clusterName>`. # Run Validator on GCP (/docs/tooling/avalanche-cli/create-avalanche-nodes/run-validators-gcp) --- title: Run Validator on GCP description: This page demonstrates how to deploy Avalanche validators on GCP using just one Avalanche-CLI command. --- This page demonstrates how to deploy Avalanche validators on GCP using just one Avalanche-CLI command. Currently, only the Fuji network and Devnets are supported. ALPHA WARNING: This command is currently in experimental mode. Proceed at your own risk. ## Prerequisites Before we begin, you will need to: - Create a GCP account [here](https://console.cloud.google.com/freetrial) and create a new project - Enable the Compute Engine API [here](https://console.cloud.google.com/apis/api/compute.googleapis.com) - Download the JSON key for the automatically created service account as shown [here](https://cloud.google.com/iam/docs/keys-create-delete#creating) ## Create Validators To create Avalanche validators, run: ```bash avalanche node create <clusterName> ``` The created nodes will be part of cluster `clusterName`, and all `avalanche node` commands applied to cluster `clusterName` will apply to all nodes in the cluster. Please note that running a validator on GCP will incur costs. Ava Labs is not responsible for costs incurred from running an Avalanche validator on cloud services via Avalanche-CLI. 
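The service-account key downloaded in the prerequisites is a JSON file whose `type` field is `service_account` (the standard GCP key format). A quick sketch to verify a key file before pointing the CLI at it; the helper name and the `key.json` filename are illustrative, not part of Avalanche-CLI:

```bash
# Check that a downloaded GCP service-account key file has the expected
# shape: a JSON object whose "type" field is "service_account".
is_gcp_sa_key() {
  grep -q '"type": *"service_account"' "$1"
}

# Usage: is_gcp_sa_key key.json && echo "key OK"
```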
Currently, the following specs of the GCP cloud server are set to fixed values, but we plan to enable customization in the near future: - OS Image: `Ubuntu 20.04 LTS` - Storage: `1 TB` The instance type can be specified via the `--node-type` parameter or via the interactive menu; `e2-standard-8` is the default (recommended) instance type. The command will ask which region you want to set up your cloud server in: ```bash Which Google Region do you want to set up your node(s) in?: ▸ us-east1 us-central1 us-west1 Choose custom Google Region (list of Google Regions available at https://cloud.google.com/compute/docs/regions-zones/) ``` The command will next ask whether you want to set up monitoring for your nodes. ```bash Do you want to set up a separate instance to host monitoring? (This enables you to monitor all your set up instances in one dashboard): ▸ Yes No ``` Setting up monitoring on a separate GCP instance enables you to have a unified Grafana dashboard for all nodes in a cluster, as seen below: ![Centralized Logs](/images/centralized-logs.png) ![Main Dashboard](/images/gcp1.png) The separate monitoring GCP instance will have similar specs to the default GCP cloud server, except for its storage, which will be set to 50 GB. Please note that setting up monitoring on a separate GCP instance will incur the additional cost of an extra GCP cloud server. The command will then ask which AvalancheGo version you would like to install on the cloud server. You can choose `default` (which installs the latest version) or enter the name of an Avalanche L1 created with the CLI that you plan to have this node validate (the CLI will install the latest version compatible with the deployed Avalanche L1's RPC version). Once the command has successfully completed, Avalanche-CLI outputs all the created cloud server node IDs as well as the public IP that each node can be reached at. Avalanche-CLI also outputs the command that you can use to ssh into each cloud server node. 
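The ssh command printed by the CLI passes the downloaded `.pem` key directly. As a convenience, you can wire the key into `~/.ssh/config` so that a plain `ssh` works without flags. A hedged sketch: the `avalanche-node` alias is ours, the `ubuntu` login user is an assumption based on the Ubuntu OS image, and the placeholders must be replaced with the IP and key filename that `avalanche node create` printed:

```
Host avalanche-node
    HostName <node-public-ip>
    User ubuntu
    IdentityFile ~/.ssh/<cluster-key>.pem
    IdentitiesOnly yes
```

With this entry in place, `ssh avalanche-node` connects directly; if you use `ssh-agent` instead, add the key with `ssh-add` and the `IdentityFile` line can be dropped.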
Finally, if monitoring is set up, Avalanche-CLI will also output the Grafana link where the centralized dashboards and logs can be accessed. By the end of a successful run of the `create` command, Avalanche-CLI will have: - Installed AvalancheGo on the cloud server - Installed Avalanche-CLI on the cloud server - Downloaded the `.pem` private key file to access the cloud server into your local `.ssh` directory. Back up this private key file, as you will not be able to ssh into the cloud server node without it (unless `ssh-agent` is used). - Downloaded the `staker.crt` and `staker.key` files to your local `.avalanche-cli` directory so that you can back up your node. More info about node backup can be found [here](/docs/nodes/maintain/backup-restore) - Started the process of bootstrapping your new Avalanche node to the Primary Network Please note that Avalanche-CLI can be configured to use `ssh-agent` for ssh access to the cloud server. A YubiKey hardware token can also be used to store the private ssh key. Please see the official YubiKey documentation, for example [https://developers.yubico.com/PGP/SSH_authentication/](https://developers.yubico.com/PGP/SSH_authentication/), for more details. Check Bootstrap Status --------------------------------------------------------------------------- (Ignore this section for Devnets.) Please note that you will have to wait until the nodes have finished bootstrapping before they can become Primary Network or Avalanche L1 validators. To check whether all the nodes in a cluster have finished bootstrapping, run `avalanche node status <clusterName>`. # Setup a Devnet (/docs/tooling/avalanche-cli/create-avalanche-nodes/setup-devnet) --- title: Setup a Devnet description: This page demonstrates how to setup a Devnet of cloud-based validators using Avalanche-CLI, and deploy a VM into it. --- Devnets (Developer Networks) are isolated Avalanche networks deployed on the cloud. 
They are similar to local networks in terms of configuration and usage, but run on remote cloud nodes. Think of Devnets as an intermediate step in the developer testing process, after a local network and before the Fuji network. ALPHA WARNING: This command is currently in experimental mode. Proceed at your own risk. ## Prerequisites Before we begin, you will need to create an AWS account and have an updated AWS `credentials` file in your home directory with a `[default]` profile, or set up your GCP account as described [here](/docs/tooling/avalanche-cli/create-avalanche-nodes/run-validators-gcp). Note: this tutorial uses AWS hosts, but Devnets can also be created and operated in other supported cloud providers, such as GCP. ## Setting up a Devnet Setting up a Devnet consists of: - Creating a cluster of cloud servers - Deploying an Avalanche L1 into the cluster - Adding the cloud servers as validator nodes in the Avalanche L1 To execute all of the steps above in one command, run: ```bash avalanche node devnet wiz ``` Command line flags can be used instead of interacting with the prompts. The complete command line flags for the `devnet wiz` command can be found [here](/docs/tooling/cli-commands#node-devnet-wiz). Let's go through several examples with the full command (with flags) provided. ### Create a Devnet and Deploy Subnet-EVM Based Avalanche L1 into the Devnet For example, to spin up a Devnet with 5 validator nodes and 1 API node in each of five regions (us-west-2, us-east-1, ap-south-1, ap-northeast-1, eu-west-1) on AWS, with each node using the c7g.8xlarge AWS EC2 instance type and io2 volume type, and with an Avalanche L1 deployed into the Devnet, run: ```bash avalanche node devnet wiz --authorize-access --aws --num-apis 1,1,1,1,1 --num-validators 5,5,5,5,5 --region us-west-2,us-east-1,ap-south-1,ap-northeast-1,eu-west-1 --default-validator-params --node-type c7g.8xlarge --aws-volume-type=io2 Creating the devnet ... Waiting for node(s) in cluster to be healthy... ... 
Nodes healthy after 33 seconds Deploying the subnet ... Setting the nodes as subnet trackers ... Waiting for node(s) in cluster to be healthy... Nodes healthy after 33 seconds ... Waiting for node(s) in cluster to be syncing subnet ... Nodes Syncing after 5 seconds Adding nodes as subnet validators ... Waiting for node(s) in cluster to be validating subnet ... Nodes Validating after 23 seconds Devnet has been created and is validating subnet ! ``` ### Create a Devnet and Deploy a Custom VM based Avalanche L1 into the Devnet For this example, we will be using the custom VM [MorpheusVM](https://github.com/ava-labs/hypersdk/tree/main/examples/morpheusvm) built with [HyperSDK](https://github.com/ava-labs/hypersdk). The following settings will be used: - Custom VM repository: `https://github.com/ava-labs/hypersdk/` - Branch: `vryx-poc` - Build script: `examples/morpheusvm/scripts/build.sh` - Genesis: [Genesis File](/docs/tooling/avalanche-cli/create-avalanche-nodes/deploy-custom-vm#genesis-file) - Blockchain config: [Blockchain Config](/docs/tooling/avalanche-cli/create-avalanche-nodes/deploy-custom-vm#blockchain-config) - Avalanche L1 config: [Avalanche L1 Config](/docs/tooling/avalanche-cli/create-avalanche-nodes/deploy-custom-vm#avalanche-l1-config) - AvalancheGo config: [AvalancheGo Config](/docs/tooling/avalanche-cli/create-avalanche-nodes/deploy-custom-vm#avalanchego-flags) To spin up a Devnet with 5 validator nodes and 1 API node in each of five regions (us-west-2, us-east-1, ap-south-1, ap-northeast-1, eu-west-1) on AWS, with each node using the c7g.8xlarge AWS EC2 instance type and io2 volume type, and with the custom VM based Avalanche L1 deployed into the Devnet, run (the `<...>` placeholders stand for the paths to the files listed above): ```bash avalanche node devnet wiz --custom-subnet \ --subnet-genesis <genesisFilePath> --custom-vm-repo-url https://github.com/ava-labs/hypersdk/ \ --custom-vm-branch vryx-poc --custom-vm-build-script examples/morpheusvm/scripts/build.sh \ --chain-config <chainConfigPath> --subnet-config <subnetConfigPath> \ --node-config <nodeConfigPath> --authorize-access --aws --num-apis 1,1,1,1,1 \ --num-validators 5,5,5,5,5 --region us-west-2,us-east-1,ap-south-1,ap-northeast-1,eu-west-1 \ --default-validator-params --node-type default Creating the 
subnet ... Creating the devnet ... Waiting for node(s) in cluster to be healthy... ... Nodes healthy after 33 seconds Deploying the subnet ... Setting the nodes as subnet trackers ... Waiting for node(s) in cluster to be healthy... Nodes healthy after 33 seconds ... Waiting for node(s) in cluster to be syncing subnet ... Nodes Syncing after 5 seconds Adding nodes as subnet validators ... Waiting for node(s) in cluster to be validating subnet ... Nodes Validating after 23 seconds Devnet has been created and is validating subnet ! ``` # Terminate All Nodes (/docs/tooling/avalanche-cli/create-avalanche-nodes/stop-node) --- title: Terminate All Nodes description: This page provides instructions for terminating cloud server nodes created by Avalanche-CLI. --- ALPHA WARNING: This command is currently in experimental mode. Proceed at your own risk. Terminating All Nodes ------------------------------------------------------------------------- To terminate all nodes in a cluster, run: ```bash avalanche node destroy <clusterName> ``` ALPHA WARNING: This command will delete all files associated with the cloud servers in the cluster. This includes the downloaded `staker.crt` and `staker.key` files in your local `.avalanche-cli` directory (which are used to back up your node). More info about node backup can be found [here](/docs/nodes/maintain/backup-restore). Once completed, the instances set up on AWS / GCP will have been terminated and the static public IPs associated with them will have been released. # Validate the Primary Network (/docs/tooling/avalanche-cli/create-avalanche-nodes/validate-primary-network) --- title: Validate the Primary Network description: This page demonstrates how to configure nodes to validate the Primary Network. Validation via Avalanche-CLI is currently only supported on Fuji. --- ALPHA WARNING: This command is currently in experimental mode. Proceed at your own risk. 
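Several of the steps below require the nodes to have finished bootstrapping. Besides `avalanche node status`, bootstrap progress can also be queried directly from a node's standard AvalancheGo Info API (`info.isBootstrapped`). A sketch that builds the JSON-RPC payload; the node IP and the default API port 9650 are assumptions about your setup:

```bash
# Build the JSON-RPC payload for AvalancheGo's info.isBootstrapped call.
# "$1" is the chain alias to check, e.g. P, X, or C.
bootstrap_payload() {
  printf '{"jsonrpc":"2.0","id":1,"method":"info.isBootstrapped","params":{"chain":"%s"}}' "$1"
}

# Usage (against a node's public IP; the node must be reachable):
# curl -s -X POST -H 'content-type: application/json' \
#   --data "$(bootstrap_payload P)" "http://<node-ip>:9650/ext/info"
```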
## Prerequisites Before we begin, you will need to have: - Created a cloud server node as described for [AWS](/docs/tooling/avalanche-cli/create-avalanche-nodes/run-validators-aws) or [GCP](/docs/tooling/avalanche-cli/create-avalanche-nodes/run-validators-gcp) - A node bootstrapped to the Primary Network (run `avalanche node status <clusterName>` to check bootstrap status as described [here](/docs/tooling/avalanche-cli/create-avalanche-nodes/run-validators-aws#check-bootstrap-status)) - A stored key or Ledger with AVAX to pay for the transaction fees associated with adding the node as a Primary Network validator. Instructions on how to fund a stored key on Fuji can be found [here](/docs/tooling/avalanche-cli/create-deploy-avalanche-l1s/deploy-on-fuji-testnet#funding-the-key). Be a Primary Network Validator ------------------------------------------------------------------------------------------- Once all nodes in a cluster are bootstrapped to the Primary Network, they can become Primary Network validators. To have all nodes in cluster `clusterName` become Primary Network validators, run: ```bash avalanche node validate primary <clusterName> ``` The nodes will start validating the Primary Network 20 seconds after the command is run. The wizard will ask us how we want to pay for the transaction fee. Choose `Use stored key` for Fuji: ```bash Which key source should be used to pay transaction fees?: ▸ Use stored key Use ledger ``` Once you have selected the key to pay with, choose how much AVAX you would like to stake in the validator. The default is the minimum amount of AVAX that can be staked by a Fuji network validator. More info regarding minimum staking amounts on different networks can be found [here](/docs/primary-network/validate/how-to-stake#fuji-testnet). 
```bash What stake weight would you like to assign to the validator?: ▸ Default (1.00 AVAX) Custom ``` Next, choose how long the node will be validating for: ```bash How long should your validator validate for?: ▸ Minimum staking duration on primary network Custom ``` Once all the inputs are complete, you will see transaction IDs indicating that all the nodes in the cluster will become Primary Network validators once the start time has elapsed. # On Local Network (/docs/tooling/avalanche-cli/create-deploy-avalanche-l1s/deploy-locally) --- title: On Local Network description: This guide shows you how to deploy an Avalanche L1 to a local Avalanche network. --- This how-to guide focuses on taking an already created Avalanche L1 configuration and deploying it to a local Avalanche network. ## Prerequisites - [Avalanche-CLI installed](/docs/tooling/avalanche-cli) - You have [created an Avalanche L1 configuration](/docs/tooling/avalanche-cli#create-your-avalanche-l1-configuration) Deploying Avalanche L1s Locally --------------------------------------------------------------------------------- In the following commands, make sure to substitute the name of your Avalanche L1 configuration for `<blockchainName>`. To deploy your Avalanche L1, run: ```bash avalanche blockchain deploy <blockchainName> ``` and select `Local Network` to deploy on. Alternatively, you can bypass this prompt by providing the `--local` flag. For example: ```bash avalanche blockchain deploy <blockchainName> --local ``` The command may take a couple of minutes to run. Note: If you run `bash` on your shell and are running Avalanche-CLI on ARM64 on Mac, you will need Rosetta 2 to deploy Avalanche L1s locally. You can download Rosetta 2 using `softwareupdate --install-rosetta`. 
### Results If all works as expected, the command output should look something like this: ```bash > avalanche blockchain deploy myblockchain ✔ Local Network Deploying [myblockchain] to Local Network AvalancheGo path: /Users/felipe.madero/.avalanche-cli/bin/avalanchego/avalanchego-v1.13.0/avalanchego Booting Network. Wait until healthy... Node logs directory: /Users/felipe.madero/.avalanche-cli/runs/network_20250410_104205//logs Network ready to use. Using [P-custom18jma8ppw3nhx5r4ap8clazz0dps7rv5u9xde7p] to be set as a change owner for leftover AVAX AvalancheGo path: /Users/felipe.madero/.avalanche-cli/bin/avalanchego/avalanchego-v1.13.0/avalanchego ✓ Local cluster myblockchain-local-node-local-network not found. Creating... Starting local avalanchego node using root: /Users/felipe.madero/.avalanche-cli/local/myblockchain-local-node-local-network ... ✓ Booting Network. Wait until healthy... ✓ Avalanchego started and ready to use from /Users/felipe.madero/.avalanche-cli/local/myblockchain-local-node-local-network Node logs directory: /Users/felipe.madero/.avalanche-cli/local/myblockchain-local-node-local-network//logs Network ready to use. URI: http://127.0.0.1:60172 NodeID: NodeID-NuQc8BQ8mV9TVksgMtpyc57VnWzU2J6aN Your blockchain control keys: [P-custom18jma8ppw3nhx5r4ap8clazz0dps7rv5u9xde7p] Your blockchain auth keys for chain creation: [P-custom18jma8ppw3nhx5r4ap8clazz0dps7rv5u9xde7p] CreateSubnetTx fee: 0.000010278 AVAX Blockchain has been created with ID: 2W9boARgCWL25z6pMFNtkCfNA5v28VGg9PmBgUJfuKndEdhrvw Now creating blockchain... 
CreateChainTx fee: 0.000129564 AVAX +--------------------------------------------------------------------+ | DEPLOYMENT RESULTS | +---------------+----------------------------------------------------+ | Chain Name | myblockchain | +---------------+----------------------------------------------------+ | Subnet ID | 2W9boARgCWL25z6pMFNtkCfNA5v28VGg9PmBgUJfuKndEdhrvw | +---------------+----------------------------------------------------+ | VM ID | qDNV9vtxZYYNqm7TN1mYBuaaknLdefDbFK8bFmMLTJQJKaWjV | +---------------+----------------------------------------------------+ | Blockchain ID | Yt9d8RRW9JcoqfvyefqJJMX14HawtBc28J9CQspQKPkdonp1y | +---------------+ | | P-Chain TXID | | +---------------+----------------------------------------------------+ Now calling ConvertSubnetToL1Tx... ConvertSubnetToL1Tx fee: 0.000036992 AVAX ConvertSubnetToL1Tx ID: 2d2EE7AorEhfKLBtnDGnAtcDYMGfPbWnHYDpNDm3SopYg6VtpV Waiting for the Subnet to be converted into a sovereign L1 ... 100% [===============] Validator Manager Protocol: ACP99 Restarting node NodeID-NuQc8BQ8mV9TVksgMtpyc57VnWzU2J6aN to track newly deployed subnet/s Waiting for blockchain Yt9d8RRW9JcoqfvyefqJJMX14HawtBc28J9CQspQKPkdonp1y to be bootstrapped ✓ Local Network successfully tracking myblockchain ✓ Checking if node is healthy... ✓ Node is healthy after 0 seconds Initializing Proof of Authority Validator Manager contract on blockchain myblockchain ... ✓ Proof of Authority Validator Manager contract successfully initialized on blockchain myblockchain Your L1 is ready for on-chain interactions. 
RPC Endpoint: http://127.0.0.1:60172/ext/bc/Yt9d8RRW9JcoqfvyefqJJMX14HawtBc28J9CQspQKPkdonp1y/rpc ICM Messenger successfully deployed to myblockchain (0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf) ICM Registry successfully deployed to myblockchain (0xEc7018552DC7E197Af85f157515f5976b1A15B12) ICM Messenger successfully deployed to c-chain (0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf) ICM Registry successfully deployed to c-chain (0x17aB05351fC94a1a67Bf3f56DdbB941aE6c63E25) ✓ ICM is successfully deployed Generating relayer config file at /Users/felipe.madero/.avalanche-cli/runs/network_20250410_104205/icm-relayer-config.json Relayer version icm-relayer-v1.6.2 Executing Relayer ✓ Relayer is successfully deployed +--------------------------------------------------------------------------------------------------------------------------------+ | MYBLOCKCHAIN | +---------------+----------------------------------------------------------------------------------------------------------------+ | Name | myblockchain | +---------------+----------------------------------------------------------------------------------------------------------------+ | VM ID | qDNV9vtxZYYNqm7TN1mYBuaaknLdefDbFK8bFmMLTJQJKaWjV | +---------------+----------------------------------------------------------------------------------------------------------------+ | VM Version | v0.7.3 | +---------------+----------------------------------------------------------------------------------------------------------------+ | Validation | Proof Of Authority | +---------------+--------------------------+-------------------------------------------------------------------------------------+ | Local Network | ChainID | 888 | | +--------------------------+-------------------------------------------------------------------------------------+ | | SubnetID | 2W9boARgCWL25z6pMFNtkCfNA5v28VGg9PmBgUJfuKndEdhrvw | | 
+--------------------------+-------------------------------------------------------------------------------------+ | | Owners (Threhold=1) | P-custom18jma8ppw3nhx5r4ap8clazz0dps7rv5u9xde7p | | +--------------------------+-------------------------------------------------------------------------------------+ | | BlockchainID (CB58) | Yt9d8RRW9JcoqfvyefqJJMX14HawtBc28J9CQspQKPkdonp1y | | +--------------------------+-------------------------------------------------------------------------------------+ | | BlockchainID (HEX) | 0x48644613a5ef255fa171bf4773df668b57ea0ea9593df8927a6d9f32376a9c6f | | +--------------------------+-------------------------------------------------------------------------------------+ | | RPC Endpoint | http://127.0.0.1:60172/ext/bc/Yt9d8RRW9JcoqfvyefqJJMX14HawtBc28J9CQspQKPkdonp1y/rpc | +---------------+--------------------------+-------------------------------------------------------------------------------------+ +------------------------------------------------------------------------------------+ | ICM | +---------------+-----------------------+--------------------------------------------+ | Local Network | ICM Messenger Address | 0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf | | +-----------------------+--------------------------------------------+ | | ICM Registry Address | 0xEc7018552DC7E197Af85f157515f5976b1A15B12 | +---------------+-----------------------+--------------------------------------------+ +--------------------------+ | TOKEN | +--------------+-----------+ | Token Name | TST Token | +--------------+-----------+ | Token Symbol | TST | +--------------+-----------+ +---------------------------------------------------------------------------------------------------------------------------------------+ | INITIAL TOKEN ALLOCATION | +-------------------------+------------------------------------------------------------------+--------------+---------------------------+ | DESCRIPTION | ADDRESS AND PRIVATE KEY | AMOUNT (TST) | AMOUNT 
(WEI) | +-------------------------+------------------------------------------------------------------+--------------+---------------------------+ | Main funded account | 0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC | 1000000 | 1000000000000000000000000 | | ewoq | 56289e99c94b6912bfc12adc093c9b51124f0dc54ac7a766b2bc5ccf558d8027 | | | +-------------------------+------------------------------------------------------------------+--------------+---------------------------+ | Used by ICM | 0xf34408C05e3B339B1c89d15163d4B9D96845597A | 600 | 600000000000000000000 | | cli-teleporter-deployer | 30d57c7b6e7e393e2e4ce8166768b497cc37930361a15b1c647d6e665d88afff | | | +-------------------------+------------------------------------------------------------------+--------------+---------------------------+ +----------------------------------------------------------------------------------------------------------------------------------+ | SMART CONTRACTS | +----------------------------------------+--------------------------------------------+--------------------------------------------+ | DESCRIPTION | ADDRESS | DEPLOYER | +----------------------------------------+--------------------------------------------+--------------------------------------------+ | Validator Messages Lib | 0x9C00629cE712B0255b17A4a657171Acd15720B8C | | +----------------------------------------+--------------------------------------------+--------------------------------------------+ | Proxy Admin | 0xC0fFEE1234567890aBCdeF1234567890abcDef34 | 0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC | +----------------------------------------+--------------------------------------------+--------------------------------------------+ | ACP99 Compatible PoA Validator Manager | 0x0C0DEbA5E0000000000000000000000000000000 | | +----------------------------------------+--------------------------------------------+--------------------------------------------+ | Transparent Proxy | 0x0Feedc0de0000000000000000000000000000000 | | 
+----------------------------------------+--------------------------------------------+--------------------------------------------+ +----------------------------------------------------------------------+ | INITIAL PRECOMPILE CONFIGS | +------------+-----------------+-------------------+-------------------+ | PRECOMPILE | ADMIN ADDRESSES | MANAGER ADDRESSES | ENABLED ADDRESSES | +------------+-----------------+-------------------+-------------------+ | Warp | n/a | n/a | n/a | +------------+-----------------+-------------------+-------------------+ +-------------------------------------------------------------------------------------------------+ | MYBLOCKCHAIN RPC URLS | +-----------+-------------------------------------------------------------------------------------+ | Localhost | http://127.0.0.1:60172/ext/bc/Yt9d8RRW9JcoqfvyefqJJMX14HawtBc28J9CQspQKPkdonp1y/rpc | +-----------+-------------------------------------------------------------------------------------+ +------------------------------------------------------------------+ | PRIMARY NODES | +------------------------------------------+-----------------------+ | NODE ID | LOCALHOST ENDPOINT | +------------------------------------------+-----------------------+ | NodeID-7Xhw2mDxuDS44j42TCB6U5579esbSt3Lg | http://127.0.0.1:9650 | +------------------------------------------+-----------------------+ | NodeID-MFrZFVCXPv5iCn6M9K6XduxGTYp891xXZ | http://127.0.0.1:9652 | +------------------------------------------+-----------------------+ +----------------------------------------------------------------------------------+ | L1 NODES | +------------------------------------------+------------------------+--------------+ | NODE ID | LOCALHOST ENDPOINT | L1 | +------------------------------------------+------------------------+--------------+ | NodeID-NuQc8BQ8mV9TVksgMtpyc57VnWzU2J6aN | http://127.0.0.1:60172 | myblockchain | +------------------------------------------+------------------------+--------------+ 
+-------------------------------------------------------------------------------------------------------+ | WALLET CONNECTION | +-----------------+-------------------------------------------------------------------------------------+ | Network RPC URL | http://127.0.0.1:60172/ext/bc/Yt9d8RRW9JcoqfvyefqJJMX14HawtBc28J9CQspQKPkdonp1y/rpc | +-----------------+-------------------------------------------------------------------------------------+ | Network Name | myblockchain | +-----------------+-------------------------------------------------------------------------------------+ | Chain ID | 888 | +-----------------+-------------------------------------------------------------------------------------+ | Token Symbol | TST | +-----------------+-------------------------------------------------------------------------------------+ | Token Name | TST Token | +-----------------+-------------------------------------------------------------------------------------+ ✓ L1 is successfully deployed on Local Network ``` You can use the deployment details to connect to and interact with your Avalanche L1. Under the hood, this command booted a three node Avalanche network on your machine: two nodes act as primary validators for the local network, validating the local P-Chain and C-Chain (unrelated to testnet/mainnet), and one node acts as the sovereign validator for the new L1 deployed into the local network. The command also downloads the latest versions of AvalancheGo and Subnet-EVM if they are not already present. 
+----------------------------------------+--------------------------------------------+--------------------------------------------+ +----------------------------------------------------------------------+ | INITIAL PRECOMPILE CONFIGS | +------------+-----------------+-------------------+-------------------+ | PRECOMPILE | ADMIN ADDRESSES | MANAGER ADDRESSES | ENABLED ADDRESSES | +------------+-----------------+-------------------+-------------------+ | Warp | n/a | n/a | n/a | +------------+-----------------+-------------------+-------------------+ +-------------------------------------------------------------------------------------------------+ | MYBLOCKCHAIN RPC URLS | +-----------+-------------------------------------------------------------------------------------+ | Localhost | http://127.0.0.1:60172/ext/bc/Yt9d8RRW9JcoqfvyefqJJMX14HawtBc28J9CQspQKPkdonp1y/rpc | +-----------+-------------------------------------------------------------------------------------+ +------------------------------------------------------------------+ | PRIMARY NODES | +------------------------------------------+-----------------------+ | NODE ID | LOCALHOST ENDPOINT | +------------------------------------------+-----------------------+ | NodeID-7Xhw2mDxuDS44j42TCB6U5579esbSt3Lg | http://127.0.0.1:9650 | +------------------------------------------+-----------------------+ | NodeID-MFrZFVCXPv5iCn6M9K6XduxGTYp891xXZ | http://127.0.0.1:9652 | +------------------------------------------+-----------------------+ +----------------------------------------------------------------------------------+ | L1 NODES | +------------------------------------------+------------------------+--------------+ | NODE ID | LOCALHOST ENDPOINT | L1 | +------------------------------------------+------------------------+--------------+ | NodeID-NuQc8BQ8mV9TVksgMtpyc57VnWzU2J6aN | http://127.0.0.1:60172 | myblockchain | +------------------------------------------+------------------------+--------------+ 
+-------------------------------------------------------------------------------------------------------+ | WALLET CONNECTION | +-----------------+-------------------------------------------------------------------------------------+ | Network RPC URL | http://127.0.0.1:60172/ext/bc/Yt9d8RRW9JcoqfvyefqJJMX14HawtBc28J9CQspQKPkdonp1y/rpc | +-----------------+-------------------------------------------------------------------------------------+ | Network Name | myblockchain | +-----------------+-------------------------------------------------------------------------------------+ | Chain ID | 888 | +-----------------+-------------------------------------------------------------------------------------+ | Token Symbol | TST | +-----------------+-------------------------------------------------------------------------------------+ | Token Name | TST Token | +-----------------+-------------------------------------------------------------------------------------+ ✓ L1 is successfully deployed on Local Network ``` To manage the newly deployed local Avalanche network, see [the `avalanche network` command tree](/docs/tooling/cli-commands#avalanche-network). You can use the deployment details to connect to and interact with your Avalanche L1. ## Interacting with Your Avalanche L1 You can use the value provided by `Browser Extension connection details` to connect to your Avalanche L1 with Core, MetaMask, or any other wallet. To allow API calls from other machines, use `--http-host=0.0.0.0` in the config. ```bash Browser Extension connection details (any node URL from above works): RPC URL: http://127.0.0.1:9650/ext/bc/2BK8CKA4Vfvi69TBTc5GW94JQ9nPiL8xPpPNeeckb9UFSPYedD/rpc Funded address: 0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC with 1000000 (10^18) - private key: 56289e99c94b6912bfc12adc093c9b51124f0dc54ac7a766b2bc5ccf558d8027 Network name: myblockchain Chain ID: 888 Currency Symbol: TST ``` This tutorial uses Core.
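Beyond a wallet, you can also talk to the RPC endpoint directly. The sketch below builds an `eth_chainId` JSON-RPC request for the sample endpoint above and decodes the hex result; actually sending the request assumes your local node is running, so only the payload construction and decoding run offline.

```python
import json

# Sample RPC endpoint from the deploy output above; a running local node is
# assumed if you actually POST this payload to it.
RPC_URL = "http://127.0.0.1:9650/ext/bc/2BK8CKA4Vfvi69TBTc5GW94JQ9nPiL8xPpPNeeckb9UFSPYedD/rpc"

def chain_id_payload() -> str:
    """Request body for the standard eth_chainId JSON-RPC method."""
    return json.dumps({"jsonrpc": "2.0", "id": 1, "method": "eth_chainId", "params": []})

def decode_chain_id(result_hex: str) -> int:
    """eth_chainId returns the chain ID as a 0x-prefixed hex quantity."""
    return int(result_hex, 16)

# A node configured with Chain ID 888 answers with "0x378".
print(decode_chain_id("0x378"))  # 888
```

Against a live node, you would POST `chain_id_payload()` with `Content-Type: application/json` (for example via `curl`) to `RPC_URL`.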
### Importing the Test Private Key[​](#importing-the-test-private-key) This address derives from a well-known private key. Anyone can steal funds sent to this address. Only use it on development networks that only you have access to. If you send production funds to this address, attackers may steal them instantly. First, you need to import your airdrop private key into Core. In the Accounts screen, select the `Imported` tab. Click on `Import private key`. ![](/images/deploy-subnet1.png) Here, enter the well-known private key `0x56289e99c94b6912bfc12adc093c9b51124f0dc54ac7a766b2bc5ccf558d8027`. ![](/images/deploy-subnet2.png) Next, rename the Core account. On the `Imported` tab, click on the pen icon next to your account. Rename the account `DO NOT USE -- Public test key` to prevent confusion with any personal wallets. ![Rename Account](/images/deploy-subnet3.png) ![Rename Account](/images/deploy-subnet4.png) ### Connect to the Avalanche L1[​](#connect-to-the-avalanche-l1) Next, you need to add your Avalanche L1 to Core's networks. In the Core Extension, click `See All Networks` and then select the `+` icon in the top right. ![Add network](/images/deploy-subnet5.png) Enter your Avalanche L1's details, found in the output of your `avalanche blockchain deploy` [command](#deploying-avalanche-l1s-locally), into the form and click `Save`. ![Add network 2](/images/deploy-subnet6.png) If all worked as expected, your balance should read 1 million tokens. Your Avalanche L1 is ready for action. You might want to try to [Deploy a Smart Contract on Your Subnet-EVM Using Remix and Core](/docs/avalanche-l1s/add-utility/deploy-smart-contract).
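The 1 million token balance is the genesis allocation from the deploy output, stored on chain as `1000000000000000000000000` wei. A small sketch of the conversion, assuming the standard 18-decimal EVM denomination:

```python
# Subnet-EVM balances, like Ethereum's, are denominated in wei
# (10**18 wei per whole token is assumed here).
DECIMALS = 18

def wei_to_tokens(wei: int) -> int:
    """Whole-token part of a wei balance."""
    return wei // 10**DECIMALS

# The genesis allocation shown in the deploy output:
print(wei_to_tokens(1_000_000_000_000_000_000_000_000))  # 1000000
```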
![Avalanche L1 in Core](/images/deploy-subnet7.png) ## Deploying Multiple Avalanche L1s[​](#deploying-multiple-avalanche-l1s "Direct link to heading") You may deploy multiple Avalanche L1s concurrently, but you can't deploy the same Avalanche L1 multiple times without resetting all deployed Avalanche L1 state. ## Redeploying the Avalanche L1 To redeploy the Avalanche L1, you first need to wipe the Avalanche L1 state. This permanently deletes all data from all locally deployed Avalanche L1s. To do so, run ```bash avalanche network clean ``` You are now free to redeploy your Avalanche L1 with ```bash avalanche blockchain deploy --local ``` ## Stopping the Local Network To gracefully stop a running local network while preserving state, run: ```bash avalanche network stop # output Network stopped successfully. ``` When restarted, all of your deployed Avalanche L1s resume where they left off. ### Resuming the Local Network To resume a stopped network, run: ```bash avalanche network start # output Starting previously deployed and stopped snapshot Booting Network. Wait until healthy... ............... Network ready to use. 
Local network node endpoints: +-------+----------+------------------------------------------------------------------------------------+ | NODE | VM | URL | +-------+----------+------------------------------------------------------------------------------------+ | node5 | myblockchain | http://127.0.0.1:9658/ext/bc/SPqou41AALqxDquEycNYuTJmRvZYbfoV9DYApDJVXKXuwVFPz/rpc | +-------+----------+------------------------------------------------------------------------------------+ | node1 | myblockchain | http://127.0.0.1:9650/ext/bc/SPqou41AALqxDquEycNYuTJmRvZYbfoV9DYApDJVXKXuwVFPz/rpc | +-------+----------+------------------------------------------------------------------------------------+ | node2 | myblockchain | http://127.0.0.1:9652/ext/bc/SPqou41AALqxDquEycNYuTJmRvZYbfoV9DYApDJVXKXuwVFPz/rpc | +-------+----------+------------------------------------------------------------------------------------+ | node3 | myblockchain | http://127.0.0.1:9654/ext/bc/SPqou41AALqxDquEycNYuTJmRvZYbfoV9DYApDJVXKXuwVFPz/rpc | +-------+----------+------------------------------------------------------------------------------------+ | node4 | myblockchain | http://127.0.0.1:9656/ext/bc/SPqou41AALqxDquEycNYuTJmRvZYbfoV9DYApDJVXKXuwVFPz/rpc | +-------+----------+------------------------------------------------------------------------------------+ ``` The network resumes with the same state it paused with. ## Next Steps After you feel comfortable with this deployment flow, try deploying smart contracts on your chain with [Remix](https://remix.ethereum.org/), [Hardhat](https://hardhat.org/), or [Foundry](https://github.com/foundry-rs/foundry). You can also experiment with customizing your Avalanche L1 by adding precompiles or adjusting the airdrop. Once you've developed a stable Avalanche L1 you like, see [Create an EVM Avalanche L1 on Fuji Testnet](/docs/avalanche-l1s/evm-configuration/customize-avalanche-l1) to take your Avalanche L1 one step closer to production.
## FAQ **How is the Avalanche L1 ID (SubnetID) determined upon creation?** The Avalanche L1 ID (SubnetID) is the hash of the transaction that created the Avalanche L1. # On Fuji Testnet (/docs/tooling/avalanche-cli/create-deploy-avalanche-l1s/deploy-on-fuji-testnet) --- title: On Fuji Testnet description: This tutorial shows how to deploy an Avalanche L1 on Fuji Testnet. --- This document describes how to use Avalanche-CLI to deploy an Avalanche L1 on `Fuji`. After trying out an Avalanche L1 on a local box by following [this tutorial](/docs/tooling/avalanche-cli/create-deploy-avalanche-l1s/deploy-locally), the next step is to try it out on `Fuji` Testnet. This article shows how to do the following on `Fuji` Testnet: - Create an Avalanche L1. - Deploy a virtual machine based on Subnet-EVM. - Join a node to the newly created Avalanche L1. - Add a node as a validator to the Avalanche L1. All IDs in this article are for illustration purposes. They can be different in your own run-through of this tutorial. ## Prerequisites - [`Avalanche-CLI`](https://github.com/ava-labs/avalanche-cli) installed Virtual Machine[​](#virtual-machine "Direct link to heading") ------------------------------------------------------------- Avalanche can run multiple blockchains. Each blockchain is an instance of a [Virtual Machine](/docs/primary-network/virtual-machines), much like an object in an object-oriented language is an instance of a class. That is, the VM defines the behavior of the blockchain. [Subnet-EVM](https://github.com/ava-labs/subnet-evm) is the VM that defines the Avalanche L1 Contract Chains. Subnet-EVM is a simplified version of [Avalanche C-Chain](https://github.com/ava-labs/avalanchego/tree/master/graft/coreth). This chain implements the Ethereum Virtual Machine and supports Solidity smart contracts as well as most other Ethereum client features.
Avalanche-CLI[​](#avalanche-cli "Direct link to heading") --------------------------------------------------------- If not yet installed, install `Avalanche-CLI` following the tutorial at [Avalanche-CLI installation](/docs/tooling/avalanche-cli). ### Private Key[​](#private-key "Direct link to heading") All commands which issue a transaction require either a private key loaded into the tool, or a connected Ledger device. This tutorial focuses on stored key usage and leaves Ledger operation details to the `Mainnet` deployment tutorial, as `Mainnet` operations require a Ledger, while for `Fuji` it's optional. `Avalanche-CLI` supports the following key operations: - create - delete - export - list You should only use the private key created for this tutorial for testing operations on `Fuji` or other testnets. Don't use this key on `Mainnet`. The CLI stores the key on your file system. Whoever gets access to that key has access to all funds secured by it. Run `create` if you don't have any private key available yet. You can create multiple named keys. Each command requiring a key therefore asks for the name of the key you want to use. ```bash avalanche key create mytestkey ``` This generates a new key named `mytestkey` and prints the addresses associated with it: ```bash Generating new key...
Key created +-----------+-------------------------------+-------------------------------------------------+---------------+ | KEY NAME | CHAIN | ADDRESS | NETWORK | +-----------+-------------------------------+-------------------------------------------------+---------------+ | mytestkey | C-Chain (Ethereum hex format) | 0x86BB07a534ADF43786ECA5Dd34A97e3F96927e4F | All | + +-------------------------------+-------------------------------------------------+---------------+ | | P-Chain (Bech32 format) | P-custom1a3azftqvygc4tlqsdvd82wks2u7nx85rg7v8ta | Local Network | + + +-------------------------------------------------+---------------+ | | | P-fuji1a3azftqvygc4tlqsdvd82wks2u7nx85rhk6zqh | Fuji | +-----------+-------------------------------+-------------------------------------------------+---------------+ ``` You may use the C-Chain address (`0x86BB07a534ADF43786ECA5Dd34A97e3F96927e4F`) to fund your key: - **Recommended:** Create a [Builder Hub account](https://build.avax.network/login) and connect your wallet to receive testnet tokens automatically - **Alternative:** Use the [external faucet](https://core.app/tools/testnet-faucet/) The command also prints P-Chain addresses for both the default local network and `Fuji`. The latter (`P-fuji1a3azftqvygc4tlqsdvd82wks2u7nx85rhk6zqh`) is the one needed for this tutorial. The `delete` command, of course, deletes a private key: ```bash avalanche key delete mytestkey ``` Be careful, though, to always keep a key available for commands involving transactions. The `export` command is going to **print your private key** in hex format to stdout. ```bash avalanche key export mytestkey 21940fbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb5f0b ``` This key is intentionally modified. You can also **import** a key by using the `--file` flag with a path argument and providing a name for the key: ```bash avalanche key create othertest --file /tmp/test.pk Loading user key...
Key loaded ``` Finally, the `list` command lists all the keys on your system and their associated addresses (the CLI stores the keys in a special directory on your file system; tampering with that directory will result in malfunction of the tool). ```bash avalanche key list +-----------+-------------------------------+-------------------------------------------------+---------------+ | KEY NAME | CHAIN | ADDRESS | NETWORK | +-----------+-------------------------------+-------------------------------------------------+---------------+ | othertest | C-Chain (Ethereum hex format) | 0x36c83263e33f9e87BB98D3fEb54a01E35a3Fa735 | All | + +-------------------------------+-------------------------------------------------+---------------+ | | P-Chain (Bech32 format) | P-custom1n5n4h99j3nx8hdrv50v8ll7aldm383nap6rh42 | Local Network | + + +-------------------------------------------------+---------------+ | | | P-fuji1n5n4h99j3nx8hdrv50v8ll7aldm383na7j4j7q | Fuji | +-----------+-------------------------------+-------------------------------------------------+---------------+ | mytestkey | C-Chain (Ethereum hex format) | 0x86BB07a534ADF43786ECA5Dd34A97e3F96927e4F | All | + +-------------------------------+-------------------------------------------------+---------------+ | | P-Chain (Bech32 format) | P-custom1a3azftqvygc4tlqsdvd82wks2u7nx85rg7v8ta | Local Network | + + +-------------------------------------------------+---------------+ | | | P-fuji1a3azftqvygc4tlqsdvd82wks2u7nx85rhk6zqh | Fuji | +-----------+-------------------------------+-------------------------------------------------+---------------+ ``` #### Funding the Key[​](#funding-the-key "Direct link to heading") Follow these steps only to fund `Fuji` addresses for this tutorial. To access the wallet for `Mainnet`, the use of a Ledger device is strongly recommended. 1. A newly created key has no funds on it.
You have several options to fund it: - **Recommended:** Create a [Builder Hub account](https://build.avax.network/login) and connect your wallet to receive testnet tokens automatically on C-Chain and P-Chain - **Alternative:** Use the [external faucet](https://core.app/tools/testnet-faucet/) with your C-Chain address. If you have an AVAX balance on Mainnet, you can request tokens directly. Otherwise, request a faucet coupon on [Guild](https://guild.xyz/avalanche) or contact admins/mods on [Discord](https://discord.com/invite/RwXY7P6) 2. Export your key via the `avalanche key export` command. The output is your private key, which will help you [import](https://support.avax.network/en/articles/6821877-core-extension-how-can-i-import-an-account) your account into the Core extension. 3. Connect Core extension to [Core web](https://core.app/), and move the test funds from C-Chain to the P-Chain by clicking Stake, then Cross-Chain Transfer (find more details on [this tutorial](https://support.avax.network/en/articles/8133713-core-web-how-do-i-make-cross-chain-transfers-in-core-stake)). After following these 3 steps, your test key should now have a balance on the P-Chain on `Fuji` Testnet. Create an EVM Avalanche L1[​](#create-an-evm-avalanche-l1 "Direct link to heading") ----------------------------------------------------------------------- Creating an Avalanche L1 with `Avalanche-CLI` for `Fuji` works the same way as with a local network. In fact, the `create` command only creates a specification of your Avalanche L1 on the local file system. Afterwards, the Avalanche L1 needs to be _deployed_. This allows you to reuse configs: create the config once with the `create` command, deploy it first to a local network, then to `Fuji`, and eventually to `Mainnet`.
To create an EVM Avalanche L1, run the `blockchain create` command with a name of your choice: ```bash avalanche blockchain create testblockchain ``` This is going to start a series of prompts to customize your EVM Avalanche L1 to your needs. Most prompts have some validation to reduce issues due to invalid input. The first prompt asks for the type of the virtual machine (see [Virtual Machine](#virtual-machine)). ```bash Use the arrow keys to navigate: ↓ ↑ → ← ? Choose your VM: ▸ SubnetEVM Custom ``` As you want to create an EVM Avalanche L1, just accept the default `SubnetEVM`. Choose either Proof of Authority (PoA) or Proof of Stake (PoS) as your consensus mechanism. ```bash ? Which validator management type would you like to use in your blockchain?: ▸ Proof Of Authority Proof Of Stake Explain the difference ``` For this tutorial, select `Proof Of Authority`. For more info, reference the [Validator Management Contracts](/docs/avalanche-l1s/validator-manager/contract). ```bash Which address do you want to enable as controller of ValidatorManager contract?: ▸ Get address from an existing stored key (created from avalanche key create or avalanche key import) Custom ``` This address will be able to add and remove validators from your Avalanche L1. You can either use an existing key or create a new one. In addition to being the PoA owner, this address will also be the owner of the `ProxyAdmin` contract of the Validator Manager's `TransparentUpgradeableProxy`. This address will be able to upgrade (PoA -> PoS) the Validator Manager implementation by updating the proxy. Next, CLI will ask for blockchain configuration values. Since we are deploying to Fuji, select `I want to use defaults for a production environment`. ```bash ?
Do you want to use default values for the Blockchain configuration?: ▸ I want to use defaults for a test environment I want to use defaults for a production environment I don't want to use default values Explain the difference ``` The default values for a production environment: - Use latest Subnet-EVM release - Allocate 1 million tokens to: 1. **a newly created key (production)**: the name of this key will be in the format of `subnet_blockchainName_airdrop` 2. **ewoq address (test)**: 0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC - Supply of the native token will be hard-capped - Set gas fee config as low throughput (12 million gas per block) - Use constant gas prices - Disable further adjustments in transaction fee configuration - Transaction fees are burned - Enable interoperability with other blockchains - Allow any user to deploy smart contracts, send transactions, and interact with your blockchain. Next, CLI asks for the ChainID. You should provide your own ID. Check [chainlist.org](https://chainlist.org/) to see if the value you'd like is already in use. ```bash ✔ Subnet-EVM creating Avalanche L1 test blockchain Enter your Avalanche L1's ChainId. It can be any positive integer. ChainId: 3333 ``` Now, provide a symbol of your choice for the token of this EVM: ```bash Select a symbol for your Avalanche L1's native token Token symbol: TST ``` It's possible to end the process with Ctrl-C at any time. At this point, CLI creates the specification of the new Avalanche L1 on disk, but the L1 isn't deployed yet.
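To make the default numbers above concrete, here is a quick sketch. The 18-decimal denomination and the 21,000-gas cost of a plain transfer are standard EVM assumptions, not values printed by the CLI:

```python
# The production defaults quoted above, restated numerically.
DECIMALS = 18                  # standard EVM token decimals (assumption)
ALLOCATION_TOKENS = 1_000_000  # genesis allocation per the defaults
GAS_PER_BLOCK = 12_000_000     # "low throughput" fee config
TRANSFER_GAS = 21_000          # gas for a plain value transfer (assumption)

allocation_wei = ALLOCATION_TOKENS * 10**DECIMALS
print(allocation_wei)                 # 1000000000000000000000000
# Rough upper bound on plain transfers that fit in one block:
print(GAS_PER_BLOCK // TRANSFER_GAS)  # 571
```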
Review the specification by running the `describe` command: ```bash avalanche blockchain describe testblockchain ``` ```bash +------------------------------------------------------------------+ | TESTBLOCKCHAIN | +------------+-----------------------------------------------------+ | Name | testblockchain | +------------+-----------------------------------------------------+ | VM ID | tGBrM94jbkesczgqsL1UaxjrdxRQQobs3MZTNQ4GrfhzvpiE8 | +------------+-----------------------------------------------------+ | VM Version | v0.6.12 | +------------+-----------------------------------------------------+ | Validation | Proof Of Authority | +------------+-----------------------------------------------------+ +--------------------------+ | TOKEN | +--------------+-----------+ | Token Name | TST Token | +--------------+-----------+ | Token Symbol | TST | +--------------+-----------+ +-----------------------------------------------------------------------------------------------------------------------------------+ | INITIAL TOKEN ALLOCATION | +---------------------+------------------------------------------------------------------+--------------+---------------------------+ | DESCRIPTION | ADDRESS AND PRIVATE KEY | AMOUNT (TST) | AMOUNT (WEI) | +---------------------+------------------------------------------------------------------+--------------+---------------------------+ | Main funded account | 0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC | 1000000 | 1000000000000000000000000 | | ewoq | 56289e99c94b6912bfc12adc093c9b51124f0dc54ac7a766b2bc5ccf558d8027 | | | +---------------------+------------------------------------------------------------------+--------------+---------------------------+ +-----------------------------------------------------------------------------------------------------------------+ | SMART CONTRACTS | +-----------------------+--------------------------------------------+--------------------------------------------+ | DESCRIPTION | ADDRESS | DEPLOYER | +-----------------------+--------------------------------------------+--------------------------------------------+ | Proxy Admin | 0xC0fFEE1234567890aBCdeF1234567890abcDef34 | 0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC | +-----------------------+--------------------------------------------+--------------------------------------------+ | PoA Validator Manager | 0x0C0DEbA5E0000000000000000000000000000000 | | +-----------------------+--------------------------------------------+--------------------------------------------+ | Transparent Proxy | 0x0Feedc0de0000000000000000000000000000000 | | +-----------------------+--------------------------------------------+--------------------------------------------+ +----------------------------------------------------------------------+ | INITIAL PRECOMPILE CONFIGS | +------------+-----------------+-------------------+-------------------+ | PRECOMPILE | ADMIN ADDRESSES | MANAGER ADDRESSES | ENABLED ADDRESSES | +------------+-----------------+-------------------+-------------------+ | Warp | n/a | n/a | n/a | +------------+-----------------+-------------------+-------------------+ ``` Deploy the Avalanche L1[​](#deploy-the-avalanche-l1 "Direct link to heading") ----------------------------------------------------------------- To deploy the Avalanche L1, you will need some testnet AVAX on the P-Chain. To deploy the new Avalanche L1, run: ```bash avalanche blockchain deploy testblockchain ``` This is going to start a new prompt series. ```bash Use the arrow keys to navigate: ↓ ↑ → ← ? Choose a network to deploy on: ▸ Local Network Fuji Mainnet ``` This tutorial is about deploying to `Fuji`, so navigate with the arrow keys to `Fuji` and hit enter. You are then asked which private key to use for the deployment. Select a key that has P-Chain AVAX to pay for transaction fees. Also, this tutorial assumes that a node is up and running, fully bootstrapped on `Fuji`, on the **same** box.
```bash ✔ Fuji Deploying [testblockchain] to Fuji Use the arrow keys to navigate: ↓ ↑ → ← ? Which private key should be used to issue the transaction?: test ▸ mytestkey ``` Avalanche L1s require bootstrap validators during the creation process. Avalanche-CLI supports using your local machine as a bootstrap validator on the blockchain. This means that you don't have to set up a remote server on a cloud service (e.g. AWS / GCP) to be a validator on the blockchain. Select `Yes` to use your local machine as a bootstrap validator. Note that since the node needs to sync with Fuji, this process will take around 3 minutes. ```bash You can use your local machine as a bootstrap validator on the blockchain This means that you don't have to to set up a remote server on a cloud service (e.g. AWS / GCP) to be a validator on the blockchain. Use the arrow keys to navigate: ↓ ↑ → ← ? Do you want to use your local machine as a bootstrap validator?: ▸ Yes No ``` Well done. You have just created your own Avalanche L1 on `Fuji`.
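While your node syncs, you can check whether it has bootstrapped the new chain through AvalancheGo's info API. A sketch of the request body (the blockchain ID below is the sample one from this tutorial; POSTing it to `http://127.0.0.1:9650/ext/info` assumes a local node):

```python
import json

def is_bootstrapped_payload(chain: str) -> str:
    """Request body for AvalancheGo's info.isBootstrapped method."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "info.isBootstrapped",
        "params": {"chain": chain},
    })

# Sample blockchain ID from this tutorial's deploy output.
payload = is_bootstrapped_payload("2U7vNdB78xTiN6QtZ9aetfKoGtQhfeEPQG6QZC8bpq8QMf4cDx")
print(json.loads(payload)["method"])  # info.isBootstrapped
```

A live node answers with `{"result": {"isBootstrapped": true}}` once the chain has finished bootstrapping.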
You will be able to see information on the deployed L1 at the end of the `avalanche blockchain deploy` command: ```bash +--------------------+----------------------------------------------------+ | DEPLOYMENT RESULTS | | +--------------------+----------------------------------------------------+ | Chain Name | testblockchain | +--------------------+----------------------------------------------------+ | Subnet ID | 2cNuyBhvAd4jH5bFSGndezhB66Z4UHYAsLCMGoCpvhXVhrZfgd | +--------------------+----------------------------------------------------+ | VM ID | qcvkEX1zWSz7PtGd7CKvPRBqLVTzA7qyMPvkh5NMDWkuhrcCu | +--------------------+----------------------------------------------------+ | Blockchain ID | 2U7vNdB78xTiN6QtZ9aetfKoGtQhfeEPQG6QZC8bpq8QMf4cDx | +--------------------+ + | P-Chain TXID | | +--------------------+----------------------------------------------------+ ``` To get your new Avalanche L1 information, visit the [Avalanche L1 Explorer](https://subnets-test.avax.network/). The search works best by blockchain ID, so in this example, enter `2U7vNdB78xTiN6QtZ9aetfKoGtQhfeEPQG6QZC8bpq8QMf4cDx` into the search box and you should see your shiny new blockchain information. Add a Validator[​](#add-a-validator "Direct link to heading") ------------------------------------------------------------- Before adding a validator to your Avalanche L1, you need the validator's NodeID, BLS public key, and proof of possession. You can obtain these by SSHing into the node and calling the `getNodeID` API specified [here](/docs/rpcs/other/info-rpc#infogetnodeid). To add a validator to an Avalanche L1, the owner of the key that controls the `ValidatorManager` contract (specified in the `avalanche blockchain create` command above) runs: ```bash avalanche blockchain addValidator testblockchain ``` Choose `Fuji`: ```bash Use the arrow keys to navigate: ↓ ↑ → ← ?
Choose a network to deploy on: ▸ Fuji ``` You will need to specify which private key to use to pay for the transaction fees: ```bash Use the arrow keys to navigate: ↓ ↑ → ← ? Which key should be used to pay for transaction fees on P-Chain?: test ▸ mytestkey ``` Now enter the **NodeID** of the new validator to be added. ```bash What is the NodeID of the validator you'd like to whitelist?: NodeID-BFa1paAAAAAAAAAAAAAAAAAAAAQGjPhUy ``` Next, enter the node's BLS public key and proof of possession. Now, enter the amount of AVAX that you would like to allocate to the new validator. The validator's balance is used to pay the continuous fee to the P-Chain. When this balance reaches 0, the validator will be considered inactive and will no longer participate in validating the L1. 1 AVAX should last the validator about a month. ```bash What balance would you like to assign to the validator (in AVAX)?: 1 ``` Next, select a key that will receive the leftover AVAX if the validator is removed from the L1: ```bash Which stored key should be used be set as a change owner for leftover AVAX?: test ▸ mytestkey ``` Next, select a key that can remove the validator: ```bash ? Which stored key should be used be able to disable the validator using P-Chain transactions?: test ▸ mytestkey ``` By the end of the command, you will have successfully added a new validator to the Avalanche L1 on Fuji Testnet!
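The P-Chain denominates balances in nAVAX (10^9 nAVAX per AVAX), and the tutorial's rule of thumb is that 1 AVAX covers roughly a month of the continuous fee. A rough runway sketch under that assumption; the actual fee rate is set by the network and can change:

```python
NAVAX_PER_AVAX = 10**9  # P-Chain amounts are denominated in nAVAX

# Rule-of-thumb burn rate from this tutorial: ~1 AVAX per month.
# This is an assumption; the real continuous fee is dynamic.
AVAX_BURNED_PER_MONTH = 1.0

def validator_runway_months(balance_avax: float) -> float:
    """Approximate months until the validator balance hits 0."""
    return balance_avax / AVAX_BURNED_PER_MONTH

print(5 * NAVAX_PER_AVAX)            # 5000000000 (5 AVAX in nAVAX)
print(validator_runway_months(5.0))  # 5.0
```

Top up the balance before it runs out, or the validator goes inactive until it is funded again.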
Appendix[​](#appendix "Direct link to heading")
-----------------------------------------------

### Connect with Core[​](#connect-with-core "Direct link to heading")

To connect Core (or MetaMask) with your blockchain on the new Avalanche L1 running on your local computer, you can add a new network on your Core wallet with the following values:

```bash
- Network Name: testblockchain
- RPC URL: http://127.0.0.1:9650/ext/bc/2U7vNdB78xTiN6QtZ9aetfKoGtQhfeEPQG6QZC8bpq8QMf4cDx/rpc
- Chain ID: 3333
- Symbol: TST
```

# On Avalanche Mainnet (/docs/tooling/avalanche-cli/create-deploy-avalanche-l1s/deploy-on-mainnet)

---
title: On Avalanche Mainnet
description: Deploy an Avalanche L1 to Avalanche Mainnet
---

Deploying an Avalanche L1 to Mainnet has many risks. Doing so safely requires a laser focus on security. This tutorial does its best to point out common pitfalls, but there may be other risks not discussed here. This tutorial is an educational resource and provides no guarantees that following it results in a secure deployment. Additionally, this tutorial takes some shortcuts that aid the understanding of the deployment process at the expense of security. The text highlights these shortcuts, and they shouldn't be used for a production deployment.

After managing a successful Avalanche L1 deployment on the `Fuji Testnet`, you're ready to deploy your Avalanche L1 on Mainnet. If you haven't done so, first [Deploy an Avalanche L1 on Testnet](/docs/tooling/avalanche-cli/create-deploy-avalanche-l1s/deploy-on-fuji-testnet).

This tutorial shows how to do the following on `Mainnet`:

- Deploy an Avalanche L1.
- Add a node as a validator to the Avalanche L1.

All IDs in this article are for illustration purposes only. They are guaranteed to be different in your own run-through of this tutorial.
## Prerequisites

- An Avalanche node running and [fully bootstrapped](/docs/nodes) on `Mainnet`
- [Avalanche-CLI is installed](/docs/tooling/avalanche-cli) on each validator node's box
- A [Ledger](https://www.ledger.com/) device
- You've [created an Avalanche L1 configuration](/docs/tooling/avalanche-cli#create-your-avalanche-l1-configuration) and fully tested a [Fuji Testnet Avalanche L1 deployment](/docs/tooling/avalanche-cli/create-deploy-avalanche-l1s/deploy-on-fuji-testnet)

### Setting up Your Ledger[​](#setting-up-your-ledger "Direct link to heading")

In the interest of security, all Avalanche-CLI `Mainnet` operations require the use of a connected Ledger device. You must unlock your Ledger and run the Avalanche app. See [How to Use Ledger](https://support.avax.network/en/articles/6150237-how-to-use-a-ledger-nano-s-or-nano-x-with-avalanche) for help getting set up.

Ledger devices can sign TXs for any address in the derivation sequence the device generates. By default, Avalanche-CLI uses the first address of the derivation, and that address needs funds to issue the TXs that create the Avalanche L1 and add validators.

To get the first `Mainnet` address of your Ledger device, first make sure it is connected, unlocked, and running the Avalanche app.
Then execute the `key list` command:

```bash
avalanche key list --ledger 0 --mainnet
```

```bash
+--------+---------+-------------------------+-----------------------------------------------+---------+---------+
|  KIND  |  NAME   |          CHAIN          |                    ADDRESS                    | BALANCE | NETWORK |
+--------+---------+-------------------------+-----------------------------------------------+---------+---------+
| ledger | index 0 | P-Chain (Bech32 format) | P-avax1ucykh6ls8thqpuwhg3vp8vvu6spg5e8tp8a25j |      11 | Mainnet |
+--------+---------+-------------------------+-----------------------------------------------+---------+---------+
```

The command prints the P-Chain address for `Mainnet`, `P-avax1ucykh6ls8thqpuwhg3vp8vvu6spg5e8tp8a25j`, and its balance.

You can use the `key list` command to get any Ledger address in the derivation sequence by changing the index parameter from `0` to the one desired, or to a list of them (for example: `2`, or `0,4,7`). Also, you can ask for addresses on `Mainnet` with the `--mainnet` parameter, and on local networks with the `--local` parameter.

#### Funding the Ledger[​](#funding-the-ledger "Direct link to heading")

A new Ledger device has no funds on the addresses it controls. You'll need to send funds to it by exporting them from the C-Chain to the P-Chain using [Core web](https://core.app/) connected to the [Core extension](https://core.app/). You can load the Ledger's C-Chain address in the Core extension, or load a different private key into the [Core extension](https://core.app/), and then connect to Core web.

You can move funds from the C-Chain to the P-Chain by clicking Stake on Core web, then Cross-Chain Transfer (find more details in [this tutorial](https://support.avax.network/en/articles/8133713-core-web-how-do-i-make-cross-chain-transfers-in-core-stake)).

Deploy the Avalanche L1[​](#deploy-the-avalanche-l1 "Direct link to heading")
-----------------------------------------------------------------------------

To deploy the Avalanche L1, you will need some AVAX on the P-Chain.
For our Fuji example, we used our local machine as a bootstrap validator. However, since bootstrapping a node on Mainnet takes several hours, for this example we will use an Avalanche node set up on an AWS server that is already bootstrapped to Mainnet.

To check if the Avalanche node is done bootstrapping, ssh into the node and call [`info.isBootstrapped`](/docs/rpcs/other/info-rpc#infoisbootstrapped) by copying and pasting the following command:

```bash
curl -X POST --data '{
    "jsonrpc":"2.0",
    "id"     :1,
    "method" :"info.isBootstrapped",
    "params": {
        "chain":"P"
    }
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info
```

If this returns `true`, the chain is bootstrapped and we can proceed to deploying our L1.

We will also need the Avalanche node's NodeID, BLS public key, and proof of possession. These can be obtained by SSHing into the node and calling the `getNodeID` API specified [here](/docs/rpcs/other/info-rpc#infogetnodeid).

To deploy the new Avalanche L1, with your Ledger unlocked and running the Avalanche app, run:

```bash
avalanche blockchain deploy testblockchain
```

This is going to start a new prompt series.

```bash
Use the arrow keys to navigate: ↓ ↑ → ← ?
Choose a network to deploy on:
    Local Network
    Fuji
  ▸ Mainnet
```

This tutorial is about deploying to `Mainnet`, so navigate with the arrow keys to `Mainnet` and hit enter. You are then asked which private key to use for the deployment. Select a key that has P-Chain AVAX to pay for transaction fees.

```bash
✔ Mainnet
Deploying [testblockchain] to Mainnet
```

After that, the CLI shows the `Mainnet` Ledger address used to fund the deployment:

```bash
Ledger address: P-avax1ucykh6ls8thqpuwhg3vp8vvu6spg5e8tp8a25j
```

Select `No` when asked whether to use your local machine as a bootstrap validator on the blockchain.

```bash
You can use your local machine as a bootstrap validator on the blockchain
This means that you don't have to set up a remote server on a cloud service (e.g.
AWS / GCP) to be a validator on the blockchain.

Use the arrow keys to navigate: ↓ ↑ → ← ?
Do you want to use your local machine as a bootstrap validator?:
    Yes
  ▸ No
```

Enter 1 as the number of bootstrap validators to set up.

```bash
✔ No
✗ How many bootstrap validators do you want to set up?: 1
```

Select `Yes`, since we have already set up our Avalanche node on AWS.

```bash
If you have set up your own Avalanche Nodes, you can provide the Node ID and BLS Key from those nodes in the next step.
Otherwise, we will generate new Node IDs and BLS Key for you.
Use the arrow keys to navigate: ↓ ↑ → ← ?
Have you set up your own Avalanche Nodes?:
  ▸ Yes
    No
```

Next, we will enter the node's NodeID:

```bash
Getting info for bootstrap validator 1
✗ What is the NodeID of the node you want to add as bootstrap validator?: █
```

And its BLS public key and proof of possession:

```bash
Next, we need the public key and proof of possession of the node's BLS
Check https://build.avax.network/docs/rpcs/other/info-rpc#infogetnodeid for instructions on calling info.getNodeID API
✗ What is the node's BLS public key?: █
```

Next, the CLI generates a CreateSubnet TX to create the Subnet and asks you to sign it with the Ledger.

```bash
*** Please sign Avalanche L1 creation hash on the ledger device ***
```

This activates a `Please review` window on the Ledger. Navigate to the Ledger's `APPROVE` window with the right button, then authorize the request by pressing both the left and right buttons.

If the Ledger doesn't have enough funds, you may see an error message:

```bash
*** Please sign Avalanche L1 creation hash on the ledger device ***
Error: insufficient funds: provided UTXOs need 1000000000 more units of asset "U8iRqJoiJm8xZHAacmvYyZVwqQx6uDNtQeP3CQ6fcgQk3JqnK"
```

If successful, the CLI next asks you to sign a CreateChain TX. Once the CreateChain TX is signed, it asks you to sign the ConvertSubnetToL1 TX.

Well done.
You have just created your own Avalanche L1 on `Mainnet`.

You will be able to see information on the deployed L1 at the end of the `avalanche blockchain deploy` command:

```bash
+--------------------+----------------------------------------------------+
| DEPLOYMENT RESULTS |                                                    |
+--------------------+----------------------------------------------------+
| Chain Name         | testblockchain                                     |
+--------------------+----------------------------------------------------+
| Subnet ID          | 2cNuyBhvAd4jH5bFSGndezhB66Z4UHYAsLCMGoCpvhXVhrZfgd |
+--------------------+----------------------------------------------------+
| VM ID              | qcvkEX1zWSz7PtGd7CKvPRBqLVTzA7qyMPvkh5NMDWkuhrcCu  |
+--------------------+----------------------------------------------------+
| Blockchain ID      | 2U7vNdB78xTiN6QtZ9aetfKoGtQhfeEPQG6QZC8bpq8QMf4cDx |
+--------------------+----------------------------------------------------+
| P-Chain TXID       |                                                    |
+--------------------+----------------------------------------------------+
```

To get your new Avalanche L1 information, there are two options:

- Run the `avalanche blockchain describe` command, or
- Visit the [Avalanche L1 Explorer](https://subnets-test.avax.network/). The search works best by blockchain ID, so in this example, enter `2U7vNdB78xTiN6QtZ9aetfKoGtQhfeEPQG6QZC8bpq8QMf4cDx` into the search box and you should see your shiny new blockchain information.

Add a Validator[​](#add-a-validator "Direct link to heading")
-------------------------------------------------------------

Before proceeding to add a validator to our Avalanche L1, we will need the validator's NodeID, BLS public key, and proof of possession.
These can be obtained by SSHing into the node and calling the `getNodeID` API specified [here](/docs/rpcs/other/info-rpc#infogetnodeid).

To add a validator to an Avalanche L1, the owner of the key that acts as the controller of the `ValidatorManager` contract specified in the `avalanche blockchain create` command above runs:

```bash
avalanche blockchain addValidator testblockchain
```

Choose `Mainnet`:

```bash
Use the arrow keys to navigate: ↓ ↑ → ← ?
Choose a network to deploy on:
  ▸ Mainnet
```

The CLI will show the Ledger address that will be used to pay for the add-validator TX:

```bash
Ledger address: P-avax1ucykh6ls8thqpuwhg3vp8vvu6spg5e8tp8a25j
```

Now enter the **NodeID** of the new validator to be added.

```bash
What is the NodeID of the validator you'd like to whitelist?: NodeID-BFa1paAAAAAAAAAAAAAAAAAAAAQGjPhUy
```

Next, enter the node's BLS public key and proof of possession.

Now, enter the amount of AVAX that you would like to allocate to the new validator. The validator's balance is used to pay the continuous fee on the P-Chain. When this balance reaches 0, the validator is considered inactive and no longer participates in validating the L1. 1 AVAX should last the validator about a month.

```bash
What balance would you like to assign to the validator (in AVAX)?: 1
```

Sign the addValidator TX with your Ledger:

```bash
*** Please sign add validator hash on the ledger device ***
```

This activates a `Please review` window on the Ledger. Navigate to the Ledger's `APPROVE` window with the right button, then authorize the request by pressing both the left and right buttons. This might take a couple of seconds. Afterward, it prints:

```bash
Transaction successful, transaction ID: r3tJ4Wr2CWA8AaticmFrKdKgAs5AhW2wwWTaQHRBZKwJhsXzb
```

This means the node is now a validator on the given Avalanche L1 on `Mainnet`!
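The NodeID, BLS public key, and proof of possession used above all come back together in a single `info.getNodeID` reply. As a sketch, here is how the fields could be pulled out with standard shell tools; the JSON below is a made-up sample with placeholder key values, not real node output:

```bash
# On a real node you would fetch the reply with:
#   curl -X POST --data '{"jsonrpc":"2.0","id":1,"method":"info.getNodeID"}' \
#     -H 'content-type:application/json;' 127.0.0.1:9650/ext/info
# Here we parse a hard-coded sample instead (placeholder values).
reply='{"jsonrpc":"2.0","result":{"nodeID":"NodeID-BFa1paAAAAAAAAAAAAAAAAAAAAQGjPhUy","nodePOP":{"publicKey":"0xPLACEHOLDER_PUBKEY","proofOfPossession":"0xPLACEHOLDER_POP"}},"id":1}'
node_id=$(printf '%s' "$reply" | sed -n 's/.*"nodeID":"\([^"]*\)".*/\1/p')
bls_key=$(printf '%s' "$reply" | sed -n 's/.*"publicKey":"\([^"]*\)".*/\1/p')
echo "NodeID:  $node_id"
echo "BLS key: $bls_key"
```

The `nodePOP` object in the reply carries both the BLS public key and the proof of possession that the `addValidator` prompts ask for.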
Going Live[​](#going-live "Direct link to heading")
---------------------------------------------------

For the safety of your validators, you should set up dedicated API nodes to process transactions, but for test purposes, you can issue transactions directly to the RPC interface of one of your validators.

# On Production Infra (/docs/tooling/avalanche-cli/create-deploy-avalanche-l1s/deploy-on-production-infra)

---
title: On Production Infra
description: Learn how to deploy an Avalanche L1 on production infrastructure.
---

After architecting your Avalanche L1 environment on the [local machine](/docs/tooling/avalanche-cli/create-deploy-avalanche-l1s/deploy-locally), proving the design and testing it out on [the Fuji Testnet](/docs/tooling/avalanche-cli/create-deploy-avalanche-l1s/deploy-on-fuji-testnet), eventually you will need to deploy your Avalanche L1 to a production environment.

Running an Avalanche L1 in production is much more involved than local and Testnet deploys: your Avalanche L1 will have to handle real-world usage, maintain uptime, and go through upgrades, all in a potentially adversarial environment. The purpose of this document is to point out a set of general considerations and propose potential solutions to them.

The architecture of the environment your particular Avalanche L1 will use will be greatly influenced by the type of load and activity your Avalanche L1 is designed to support, so your solution will most likely differ from what we propose here. Still, it might be useful to follow along to build up an intuition for the type of questions you will need to consider.

Node Setup[​](#node-setup "Direct link to heading")
---------------------------------------------------

Avalanche nodes are essential elements for running your Avalanche L1 in production. At a minimum, your Avalanche L1 will need validator nodes, and potentially also nodes that act as RPC servers, indexers, or explorers.
Running a node is basically running an instance of [AvalancheGo](/docs/nodes) on a server.

### Server OS[​](#server-os "Direct link to heading")

Although AvalancheGo can run on macOS or Windows, we strongly recommend running nodes on Linux machines, as Linux is designed for server loads and all the tools and utilities needed for administering a server are native to it.

### Hardware Specification[​](#hardware-specification "Direct link to heading")

For running AvalancheGo as a validator on the Primary Network, the recommended configuration is as follows:

- CPU: Equivalent of 8 AWS vCPU
- RAM: 16 GiB
- Storage: 1 TiB with at least 3000 IOPS
- OS: Ubuntu 20.04
- Network: Reliable IPv4 or IPv6 network connection, with an open public port

That configuration is sufficient for running a Primary Network node. Any resource requirements for your Avalanche L1 come on top of this, so you should not go below this configuration, but you may need to step up the specification if you expect your Avalanche L1 to handle a significant amount of transactions.

Be sure to set up monitoring of resource consumption for your nodes, because resource exhaustion may cause your node to slow down or even halt, which may severely impact your Avalanche L1.

### Server Location[​](#server-location "Direct link to heading")

You can run a node on a physical computer that you own and run, or on a cloud instance. Although running on your own hardware may seem like a good idea, unless you have a sizeable 24/7 DevOps staff, we recommend using cloud service providers, as they generally provide reliable computing resources that you can count on to be properly maintained and monitored.

#### Local Servers[​](#local-servers "Direct link to heading")

If you plan on running nodes on your own hardware, make sure they satisfy the minimum hardware specification as outlined earlier.
Pay close attention to proper networking setup: make sure the p2p port (9651) is accessible and the public IP is properly configured on the node. Make sure the node is connected to the network physically (not over Wi-Fi), that the router is powerful enough to handle a couple of thousand persistent TCP connections, and that the network bandwidth can accommodate at least 5 Mbps of steady upstream and downstream traffic.

When installing the AvalancheGo node on the machines, unless you have a dedicated DevOps staff that will take care of node setup and configuration, we recommend using the [installer script](/docs/nodes/run-a-node/using-install-script/installing-avalanche-go) to set up the nodes. It abstracts most of the setup process for you, sets up the node as a system service, and enables easy node upgrades.

#### Cloud Providers[​](#cloud-providers "Direct link to heading")

There are a number of different cloud providers. We have documents that show how to set up a node on the most popular ones:

- [Amazon Web Services](/docs/nodes/run-a-node/on-third-party-services/amazon-web-services)
- [Azure](/docs/nodes/run-a-node/on-third-party-services/microsoft-azure)
- [Google Cloud Platform](/docs/nodes/run-a-node/on-third-party-services/google-cloud)

There is a whole range of other cloud providers that may offer lower prices or better deals for your particular needs, so it makes sense to shop around.

Once you decide on a provider (or providers), if they offer instances in multiple data centers, it makes sense to spread the nodes geographically, since that provides better resilience and stability against outages.

### Number of Validators[​](#number-of-validators "Direct link to heading")

The number of validators on an Avalanche L1 is a crucial decision you need to make. For stability and decentralization, you should strive to have as many validators as possible. For stability reasons, our recommendation is to have **at least** 5 full validators on your Avalanche L1.
If you have fewer than 5 validators, your Avalanche L1's liveness will be at risk whenever a single validator goes offline, and if you have fewer than 4, even one offline node will halt your Avalanche L1.

Be aware that 5 is the minimum we recommend. From a decentralization standpoint, having more validators is always better, as it increases the stability of your Avalanche L1 and makes it more resilient to both technical failures and adversarial action. In a nutshell: run as many Avalanche L1 validators as you can.

Considering that at times you will have to take nodes offline, whether for routine maintenance (at least for node upgrades, which happen with some regularity) or for unscheduled outages and failures, you need to be able to routinely handle at least one node being offline without your Avalanche L1's performance degrading.

### Node Bootstrap[​](#node-bootstrap "Direct link to heading")

Once you set up the servers and install AvalancheGo on them, nodes will need to bootstrap (sync with the network). This is a lengthy process, as the nodes need to catch up and replay all the network activity from the genesis up to the present moment. A full bootstrap on a node can take more than a week, but there are ways to shorten that process, depending on your circumstances.

#### State Sync[​](#state-sync "Direct link to heading")

If the nodes you will be running as validators don't need the full transaction history, you can use [state sync](/docs/nodes/chain-configs/primary-network/c-chain#state-sync-enabled). With this flag enabled, instead of replaying the whole history to get to the current state, nodes simply download the current state from other network peers, shortening the bootstrap process from multiple days to a couple of hours. If the nodes will be used for Avalanche L1 validation exclusively, you can use state sync without any issues.
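For reference, the state sync flag above lives in the C-Chain chain config. A minimal fragment might look like the following, assuming the default AvalancheGo config location (`~/.avalanchego/configs/chains/C/config.json`):

```json
{
  "state-sync-enabled": true
}
```

After editing the chain config, restart the node so the setting takes effect.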
Currently, state sync is only available for the C-Chain, but since the bulk of the transactions on the platform happen there, it still has a significant impact on the speed of bootstrapping.

#### Database Copy[​](#database-copy "Direct link to heading")

A good way to cut down on bootstrap times across multiple nodes is a database copy. The database is identical across nodes, and as such can safely be copied from one node to another. Just make sure that the node is not running during the copy process, as that can result in a corrupted database. The database copy procedure is explained in detail [here](/docs/nodes/maintain/backup-restore#database).

Please make sure you don't reuse any node's NodeID by accident; in particular, don't restore another node's ID (see [here](/docs/nodes/maintain/backup-restore#nodeid) for details). Each node must have its own unique NodeID; otherwise, nodes sharing the same ID will not behave correctly, which will impact your validators' uptime (and thus staking rewards) and the stability of your Avalanche L1.

Avalanche L1 Deploy[​](#avalanche-l1-deploy "Direct link to heading")
---------------------------------------------------------------------

Once you have the nodes set up, you are ready to deploy the actual Avalanche L1. Right now, the recommended tool to do that is [Avalanche-CLI](https://github.com/ava-labs/avalanche-cli). Instructions for deployment with Avalanche-CLI can be found [here](/docs/tooling/avalanche-cli/create-deploy-avalanche-l1s/deploy-on-mainnet).

### Ledger Hardware Wallet[​](#ledger-hw-wallet "Direct link to heading")

When creating the Avalanche L1, you will need a private key that controls the administrative functions of the Avalanche L1 (adding validators, managing the configuration). Needless to say, whoever has this private key has complete control over the Avalanche L1 and the way it runs. Therefore, protecting that key is of the utmost operational importance.
That is why we strongly recommend using a hardware wallet such as a [Ledger HW Wallet](https://www.ledger.com/) to store and access that private key. General instructions on how to use a Ledger device with Avalanche can be found [here](https://support.avax.network/en/articles/6150237-how-to-use-a-ledger-nano-s-or-nano-x-with-avalanche).

### Genesis File[​](#genesis-file "Direct link to heading")

The structure that defines the most important parameters of an Avalanche L1 is found in the genesis file, which is a JSON-formatted, human-readable file. Describing the contents and the options available in the genesis file is beyond the scope of this document, and if you're ready to deploy your Avalanche L1 to production, you probably have it mapped out already. If you want to review, we have a description of the genesis file in our document on [customizing EVM Avalanche L1s](/docs/avalanche-l1s/evm-configuration/customize-avalanche-l1).

Validator Configuration[​](#validator-configuration "Direct link to heading")
-----------------------------------------------------------------------------

Running nodes as Avalanche L1 validators warrants some additional considerations, beyond those for running a regular node or a Primary Network-only validator.

### Joining an Avalanche L1[​](#joining-a-avalanche-l1 "Direct link to heading")

For a node to join an Avalanche L1, there are two prerequisites:

- Primary Network validation
- Avalanche L1 tracking

Primary Network validation means that a node cannot join an Avalanche L1 as a validator before becoming a validator on the Primary Network itself. So, after you add the node to the validator set on the Primary Network, the node can join an Avalanche L1. Of course, this applies only to Avalanche L1 validators; a non-validating Avalanche L1 node doesn't need to be a validator at all.
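A quick way to confirm the first prerequisite is to check whether the node's ID appears in the P-Chain's current validator set via `platform.getCurrentValidators`. The sketch below parses a trimmed, made-up sample reply rather than calling a live node:

```bash
# Sketch: check whether a given NodeID is in the Primary Network validator
# set. The reply below is a trimmed sample; on a live node you would fetch
# the real one with:
#   curl -X POST -H 'content-type:application/json' \
#     --data '{"jsonrpc":"2.0","id":1,"method":"platform.getCurrentValidators","params":{}}' \
#     127.0.0.1:9650/ext/bc/P
target="NodeID-BFa1paAAAAAAAAAAAAAAAAAAAAQGjPhUy"
reply='{"jsonrpc":"2.0","result":{"validators":[{"nodeID":"NodeID-BFa1paAAAAAAAAAAAAAAAAAAAAQGjPhUy"}]},"id":1}'
if printf '%s' "$reply" | grep -q "\"nodeID\":\"$target\""; then
  echo "$target is a Primary Network validator"
else
  echo "$target is NOT a Primary Network validator"
fi
```

In practice you would pipe the real reply through a proper JSON tool rather than `grep`, but the check is the same: the node's ID must be present before it can validate an Avalanche L1.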
To have a node start syncing the Avalanche L1, you need to add the `--track-subnets` command line option, or the `track-subnets` key to the node config file (found at `.avalanchego/configs/node.json` for installer-script created nodes). A single node can sync multiple L1s, so you can pass a comma-separated list of Avalanche L1 IDs (Subnet IDs).

An example of a node config syncing two Avalanche L1s:

```json
{
  "public-ip-resolution-service": "opendns",
  "http-host": "",
  "track-subnets": "28nrH5T2BMvNrWecFcV3mfccjs6axM1TVyqe79MCv2Mhs8kxiY,Ai42MkKqk8yjXFCpoHXw7rdTWSHiKEMqh5h8gbxwjgkCUfkrk"
}
```

But that is not all. Besides tracking the Subnet ID, the node also needs to have the plugin containing the VM instance that the Avalanche L1's blockchain will run. You should have already been through that on Fuji Testnet, but for a refresher, you can refer to [this tutorial](/docs/tooling/avalanche-cli/create-deploy-avalanche-l1s/deploy-on-fuji-testnet). So, name the VM plugin binary as the `VMID` of the Avalanche L1 chain and place it in the `plugins` directory where the node binary is (for installer-script created nodes, that would be `~/avalanche-node/plugins/`).

### Avalanche L1 Bootstrapping[​](#avalanche-l1-bootstrapping "Direct link to heading")

After you have tracked the Avalanche L1 and placed the VM binary in the correct directory, your node is ready to start syncing with the Avalanche L1. Restart the node and monitor the log output. You should notice something similar to:

```bash
Jul 30 18:26:31 node-fuji avalanchego[1728308]: [07-30|18:26:31.422] INFO chains/manager.go:262 creating chain:
Jul 30 18:26:31 node-fuji avalanchego[1728308]:     ID: 2ebCneCbwthjQ1rYT41nhd7M76Hc6YmosMAQrTFhBq8qeqh6tt
Jul 30 18:26:31 node-fuji avalanchego[1728308]:     VMID:srEXiWaHuhNyGwPUi444Tu47ZEDwxTWrbQiuD7FmgSAQ6X7Dy
```

That means the node has detected the Avalanche L1 and is attempting to initialize it and start bootstrapping the Avalanche L1.
It might take some time (if there are already transactions on the Avalanche L1), and eventually it will finish the bootstrap with a message like:

```bash
Jul 30 18:27:21 node-fuji avalanchego[1728308]: [07-30|18:27:21.055] INFO <2ebCneCbwthjQ1rYT41nhd7M76Hc6YmosMAQrTFhBq8qeqh6tt Chain> snowman/transitive.go:333 consensus starting with J5wjmotMCrM2DKxeBTBPfwgCPpvsjtuqWNozLog2TomTjSuGK as the last accepted block
```

That means the node has successfully bootstrapped the Avalanche L1 and is now in sync. If the node is one of the validators, it will start validating any transactions that get posted to the Avalanche L1.

### Monitoring[​](#monitoring "Direct link to heading")

If you want to inspect the process of Avalanche L1 syncing, you can use the RPC call to check the [blockchain status](/docs/rpcs/p-chain#platformgetblockchainstatus).

For a more in-depth look into Avalanche L1 operation, check out the blockchain log. By default, the log can be found in `~/.avalanchego/logs/ChainID.log`, where you replace `ChainID` with the actual ID of the blockchain in your Avalanche L1.

For an even more thorough (and pretty!) insight into how the node and the Avalanche L1 are behaving, you can install the Prometheus+Grafana monitoring system with custom dashboards for regular node operation, as well as a dedicated dashboard for Avalanche L1 data. Check out the [tutorial](/docs/nodes/maintain/monitoring) for information on how to set it up.

### Managing Validation[​](#managing-validation "Direct link to heading")

On Avalanche, all validations are limited in time, ranging from two weeks up to one year. Furthermore, Avalanche L1 validation periods are always a subset of the Primary Network validation period (they must be shorter or the same). That means that periodically your validators will expire and you will need to submit a new validation transaction for both the Primary Network and your Avalanche L1.
Unless managed properly and in a timely manner, that can be disruptive for your Avalanche L1 (if all validators expire at the same time, your Avalanche L1 will halt). To avoid that, keep notes on when each validation is set to expire and be ready to renew it as soon as possible. Also, when initially setting up the nodes, make sure to stagger the validator expiry dates so they don't all fall on the same date. Setting end dates at least a day apart is a good practice, as is setting reminders for each expiry.

Conclusion[​](#conclusion "Direct link to heading")
---------------------------------------------------

Hopefully, by reading this document you have a better picture of the requirements and considerations you need to make when deploying your Avalanche L1 to production, and you are now better prepared to launch your Avalanche L1 successfully.

Keep in mind, running an Avalanche L1 in production is not a one-and-done kind of situation; it is, in fact, running a fleet of servers 24/7. As with any real-time service, you should have robust logging, monitoring, and alerting systems that constantly check node and Avalanche L1 health and alert you if anything out of the ordinary happens.

If you have any questions, doubts, or would like to chat, please check out our [Discord server](https://discord.gg/avax/), where we host a `#subnet-chat` channel dedicated to talking about all things Avalanche L1.

# With Custom VM (/docs/tooling/avalanche-cli/create-deploy-avalanche-l1s/deploy-with-custom-vm)

---
title: With Custom VM
description: Learn how to create an Avalanche L1 with a custom virtual machine and deploy it locally.
---

This tutorial walks through the process of creating an Avalanche L1 with a custom virtual machine and deploying it locally. Although the tutorial uses a fork of Subnet-EVM as an example, you can extend its lessons to support any custom VM binary.
Fork Subnet-EVM[​](#fork-subnet-evm "Direct link to heading")
-------------------------------------------------------------

Instead of building a custom VM from scratch, this tutorial starts by forking Subnet-EVM.

### Clone Subnet-EVM[​](#clone-subnet-evm "Direct link to heading")

First off, clone the Subnet-EVM repository into a directory of your choosing.

```bash
git clone https://github.com/ava-labs/subnet-evm.git
```

The repository cloning method used is HTTPS, but SSH can be used too: `git clone git@github.com:ava-labs/subnet-evm.git`. You can find more about SSH and how to use it [here](https://docs.github.com/en/authentication/connecting-to-github-with-ssh/about-ssh).

### Modify and Build Subnet-EVM[​](#modify-and-build-subnet-evm "Direct link to heading")

To prove you're running your custom binary and not the stock Subnet-EVM included with Avalanche-CLI, you need to modify the Subnet-EVM binary by making a minor change. Navigate to the directory you cloned Subnet-EVM into and generate a new commit:

```bash
git commit -a --allow-empty -m "custom vm commit"
```

Take note of the new commit hash:

```bash
git rev-parse HEAD
c0fe6506a40da466285f37dd0d3c044f494cce32
```

In this case, `c0fe6506a40da466285f37dd0d3c044f494cce32`. Now build your custom binary by running:

```bash
./scripts/build.sh custom_vm.bin
```

This command builds the binary and saves it at `./custom_vm.bin`.

### Create a Custom Genesis[​](#create-a-custom-genesis "Direct link to heading")

To start a VM, you need to provide a genesis file. Here is a basic Subnet-EVM genesis that's compatible with your custom VM. This genesis includes the PoA Validator Manager, a Transparent Proxy, and a Proxy Admin as predeployed contracts.

- `0c0deba5e0000000000000000000000000000000` is the ValidatorManager address.
- `0feedc0de0000000000000000000000000000000` is the Transparent Proxy address.
- `c0ffee1234567890abcdef1234567890abcdef34` is the Proxy Admin contract.
These proxy contracts are from [OpenZeppelin v4.9](https://github.com/OpenZeppelin/openzeppelin-contracts/tree/release-v4.9/contracts/proxy/transparent). You can add your own predeployed contracts by running `forge build` and collecting the `deployedBytecode` output from `out/MyContract.sol`.

```json
{
  "config": {
    "berlinBlock": 0,
    "byzantiumBlock": 0,
    "chainId": 1,
    "constantinopleBlock": 0,
    "eip150Block": 0,
    "eip155Block": 0,
    "eip158Block": 0,
    "feeConfig": {
      "gasLimit": 12000000,
      "targetBlockRate": 2,
      "minBaseFee": 25000000000,
      "targetGas": 60000000,
      "baseFeeChangeDenominator": 36,
      "minBlockGasCost": 0,
      "maxBlockGasCost": 1000000,
      "blockGasCostStep": 200000
    },
    "homesteadBlock": 0,
    "istanbulBlock": 0,
    "londonBlock": 0,
    "muirGlacierBlock": 0,
    "petersburgBlock": 0,
    "warpConfig": {
      "blockTimestamp": 1736356569,
      "quorumNumerator": 67,
      "requirePrimaryNetworkSigners": true
    }
  },
  "nonce": "0x0",
  "timestamp": "0x677eb2d9",
  "extraData": "0x",
  "gasLimit": "0xb71b00",
  "difficulty": "0x0",
  "mixHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
  "coinbase": "0x0000000000000000000000000000000000000000",
  "alloc": {
    "0c0deba5e0000000000000000000000000000000": {
      "code":
"0x608060405234801561000f575f80fd5b5060043610610132575f3560e01c80639ba96b86116100b4578063c974d1b611610079578063c974d1b6146102a7578063d588c18f146102af578063d5f20ff6146102c2578063df93d8de146102e2578063f2fde38b146102ec578063fd7ac5e7146102ff575f80fd5b80639ba96b861461024c578063a3a65e481461025f578063b771b3bc14610272578063bc5fbfec14610280578063bee0a03f14610294575f80fd5b8063715018a6116100fa578063715018a6146101be578063732214f8146101c65780638280a25a146101db5780638da5cb5b146101f557806397fb70d414610239575f80fd5b80630322ed981461013657806320d91b7a1461014b578063467ef06f1461015e57806360305d621461017157806366435abf14610193575b5f80fd5b610149610144366004612b01565b610312565b005b610149610159366004612b30565b610529565b61014961016c366004612b7e565b610a15565b610179601481565b60405163ffffffff90911681526020015b60405180910390f35b6101a66101a1366004612b01565b610a23565b6040516001600160401b03909116815260200161018a565b610149610a37565b6101cd5f81565b60405190815260200161018a565b6101e3603081565b60405160ff909116815260200161018a565b7f9016d09d72d40fdae2fd8ceac6b6234c7706214fd39c1cd1e609a0528c199300546001600160a01b03165b6040516001600160a01b03909116815260200161018a565b610149610247366004612b01565b610a4a565b6101cd61025a366004612bad565b610a5f565b61014961026d366004612b7e565b610a7b565b6102216005600160991b0181565b6101cd5f8051602061370d83398151915281565b6101496102a2366004612b01565b610c04565b6101e3601481565b6101496102bd366004612c06565b610d41565b6102d56102d0366004612b01565b610e4f565b60405161018a9190612cc3565b6101a66202a30081565b6101496102fa366004612d43565b610f9e565b6101cd61030d366004612d65565b610fdb565b5f8181525f8051602061372d8339815191526020526040808220815160e0810190925280545f8051602061370d83398151915293929190829060ff16600581111561035f5761035f612c42565b600581111561037057610370612c42565b815260200160018201805461038490612dd0565b80601f01602080910402602001604051908101604052809291908181526020018280546103b090612dd0565b80156103fb5780601f106103d2576101008083540402835291602001916103fb565b820191905f5260205f20905b815481529060010
1906020018083116103de57829003601f168201915b505050918352505060028201546001600160401b038082166020840152600160401b820481166040840152600160801b820481166060840152600160c01b909104811660808301526003928301541660a0909101529091508151600581111561046657610466612c42565b146104a2575f8381526007830160205260409081902054905163170cc93360e21b81526104999160ff1690600401612e08565b60405180910390fd5b6005600160991b016001600160a01b031663ee5b48eb6104c78584606001515f611036565b6040518263ffffffff1660e01b81526004016104e39190612e16565b6020604051808303815f875af11580156104ff573d5f803e3d5ffd5b505050506040513d601f19601f820116820180604052508101906105239190612e28565b50505050565b7fe92546d698950ddd38910d2e15ed1d923cd0a7b3dde9e2a6a3f380565559cb09545f8051602061370d8339815191529060ff161561057b57604051637fab81e560e01b815260040160405180910390fd5b6005600160991b016001600160a01b0316634213cf786040518163ffffffff1660e01b8152600401602060405180830381865afa1580156105be573d5f803e3d5ffd5b505050506040513d601f19601f820116820180604052508101906105e29190612e28565b83602001351461060b576040516372b0a7e760e11b815260208401356004820152602401610499565b3061061c6060850160408601612d43565b6001600160a01b03161461065f5761063a6060840160408501612d43565b604051632f88120d60e21b81526001600160a01b039091166004820152602401610499565b5f61066d6060850185612e3f565b905090505f805b828163ffffffff161015610955575f6106906060880188612e3f565b8363ffffffff168181106106a6576106a6612e84565b90506020028101906106b89190612e98565b6106c190612fbc565b80516040519192505f9160088801916106d991613035565b9081526020016040518091039020541461070957805160405163a41f772f60e01b81526104999190600401612e16565b5f6002885f01358460405160200161073892919091825260e01b6001600160e01b031916602082015260240190565b60408051601f198184030181529082905261075291613035565b602060405180830381855afa15801561076d573d5f803e3d5ffd5b5050506040513d601f19601f820116820180604052508101906107909190612e28565b90508086600801835f01516040516107a89190613035565b90815260408051602092819003830181209390935560e083018152600283528451828401528
4810180516001600160401b03908116858401525f60608601819052915181166080860152421660a085015260c0840181905284815260078a01909252902081518154829060ff1916600183600581111561082a5761082a612c42565b0217905550602082015160018201906108439082613091565b506040828101516002830180546060860151608087015160a08801516001600160401b039586166001600160801b031990941693909317600160401b92861692909202919091176001600160801b0316600160801b918516919091026001600160c01b031617600160c01b9184169190910217905560c0909301516003909201805467ffffffffffffffff1916928416929092179091558301516108e8911685613164565b82516040519195506108f991613035565b60408051918290038220908401516001600160401b031682529082907f9d47fef9da077661546e646d61830bfcbda90506c2e5eed38195e82c4eb1cbdf9060200160405180910390a350508061094e90613177565b9050610674565b50600483018190555f61097361096a86611085565b6040015161119b565b90505f61097f87611328565b90505f6002826040516109929190613035565b602060405180830381855afa1580156109ad573d5f803e3d5ffd5b5050506040513d601f19601f820116820180604052508101906109d09190612e28565b90508281146109fc57604051631872fc8d60e01b81526004810182905260248101849052604401610499565b5050506009909201805460ff1916600117905550505050565b610a1e81611561565b505050565b5f610a2d82610e4f565b6080015192915050565b610a3f61189f565b610a485f6118fa565b565b610a5261189f565b610a5b8161196a565b5050565b5f610a6861189f565b610a728383611c4e565b90505b92915050565b5f8051602061370d8339815191525f80610aa0610a9785611085565b604001516121a1565b9150915080610ac657604051632d07135360e01b81528115156004820152602401610499565b5f82815260068401602052604090208054610ae090612dd0565b90505f03610b045760405163089938b360e11b815260048101839052602401610499565b60015f83815260078501602052604090205460ff166005811115610b2a57610b2a612c42565b14610b5d575f8281526007840160205260409081902054905163170cc93360e21b81526104999160ff1690600401612e08565b5f8281526006840160205260408120610b7591612a75565b5f828152600784016020908152604091829020805460ff1916600290811782550180546001600160401b0342818116600160c01b026001600160c01b039093169
2909217928390558451600160801b9093041682529181019190915283917ff8fd1c90fb9cfa2ca2358fdf5806b086ad43315d92b221c929efc7f105ce7568910160405180910390a250505050565b5f8181527fe92546d698950ddd38910d2e15ed1d923cd0a7b3dde9e2a6a3f380565559cb066020526040902080545f8051602061370d8339815191529190610c4b90612dd0565b90505f03610c6f5760405163089938b360e11b815260048101839052602401610499565b60015f83815260078301602052604090205460ff166005811115610c9557610c95612c42565b14610cc8575f8281526007820160205260409081902054905163170cc93360e21b81526104999160ff1690600401612e08565b5f82815260068201602052604090819020905163ee5b48eb60e01b81526005600160991b019163ee5b48eb91610d019190600401613199565b6020604051808303815f875af1158015610d1d573d5f803e3d5ffd5b505050506040513d601f19601f82011682018060405250810190610a1e9190612e28565b7ff0c57e16840df040f15088dc2f81fe391c3923bec73e23a9662efc9c229c6a008054600160401b810460ff1615906001600160401b03165f81158015610d855750825b90505f826001600160401b03166001148015610da05750303b155b905081158015610dae575080155b15610dcc5760405163f92ee8a960e01b815260040160405180910390fd5b845467ffffffffffffffff191660011785558315610df657845460ff60401b1916600160401b1785555b610e00878761235d565b8315610e4657845460ff60401b19168555604051600181527fc7f505b2f371ae2175ee4913f4499e1f2633a7b5936321eed1cdaeb6115181d29060200160405180910390a15b50505050505050565b610e57612aac565b5f8281525f8051602061372d833981519152602052604090819020815160e0810190925280545f8051602061370d833981519152929190829060ff166005811115610ea457610ea4612c42565b6005811115610eb557610eb5612c42565b8152602001600182018054610ec990612dd0565b80601f0160208091040260200160405190810160405280929190818152602001828054610ef590612dd0565b8015610f405780601f10610f1757610100808354040283529160200191610f40565b820191905f5260205f20905b815481529060010190602001808311610f2357829003601f168201915b505050918352505060028201546001600160401b038082166020840152600160401b820481166040840152600160801b820481166060840152600160c01b9091048116608083015260039092015490911660a0909101529392505050565
b610fa661189f565b6001600160a01b038116610fcf57604051631e4fbdf760e01b81525f6004820152602401610499565b610fd8816118fa565b50565b6040515f905f8051602061370d833981519152907fe92546d698950ddd38910d2e15ed1d923cd0a7b3dde9e2a6a3f380565559cb089061101e9086908690613223565b90815260200160405180910390205491505092915050565b604080515f6020820152600360e01b602282015260268101949094526001600160c01b031960c093841b811660468601529190921b16604e830152805180830360360181526056909201905290565b60408051606080820183525f8083526020830152918101919091526040516306f8253560e41b815263ffffffff831660048201525f9081906005600160991b0190636f825350906024015f60405180830381865afa1580156110e9573d5f803e3d5ffd5b505050506040513d5f823e601f3d908101601f191682016040526111109190810190613241565b915091508061113257604051636b2f19e960e01b815260040160405180910390fd5b815115611158578151604051636ba589a560e01b81526004810191909152602401610499565b60208201516001600160a01b031615611194576020820151604051624de75d60e31b81526001600160a01b039091166004820152602401610499565b5092915050565b5f81516026146111d057815160405163cc92daa160e01b815263ffffffff909116600482015260266024820152604401610499565b5f805b600281101561121f576111e7816001613313565b6111f2906008613326565b61ffff1684828151811061120857611208612e84565b016020015160f81c901b91909117906001016111d3565b5061ffff8116156112495760405163407b587360e01b815261ffff82166004820152602401610499565b5f805b60048110156112a457611260816003613313565b61126b906008613326565b63ffffffff168561127d836002613164565b8151811061128d5761128d612e84565b016020015160f81c901b919091179060010161124c565b5063ffffffff8116156112ca57604051635b60892f60e01b815260040160405180910390fd5b5f805b602081101561131f576112e181601f613313565b6112ec906008613326565b866112f8836006613164565b8151811061130857611308612e84565b016020015160f81c901b91909117906001016112cd565b50949350505050565b60605f8083356020850135601461134487870160408901612d43565b6113516060890189612e3f565b60405160f09790971b6001600160f01b0319166020880152602287019590955250604285019290925260e090811b6001600160e01
b0319908116606286015260609290921b6bffffffffffffffffffffffff191660668501529190911b16607a820152607e0160405160208183030381529060405290505f5b6113d76060850185612e3f565b9050811015611194576113ed6060850185612e3f565b828181106113fd576113fd612e84565b905060200281019061140f9190612e98565b61141d90602081019061333d565b905060301461143f5760405163180ffa0d60e01b815260040160405180910390fd5b8161144d6060860186612e3f565b8381811061145d5761145d612e84565b905060200281019061146f9190612e98565b611479908061333d565b90506114886060870187612e3f565b8481811061149857611498612e84565b90506020028101906114aa9190612e98565b6114b4908061333d565b6114c16060890189612e3f565b868181106114d1576114d1612e84565b90506020028101906114e39190612e98565b6114f190602081019061333d565b6114fe60608b018b612e3f565b8881811061150e5761150e612e84565b90506020028101906115209190612e98565b61153190606081019060400161337f565b6040516020016115479796959493929190613398565b60408051601f1981840301815291905291506001016113ca565b5f61156a612aac565b5f8051602061370d8339815191525f80611586610a9787611085565b9150915080156115ad57604051632d07135360e01b81528115156004820152602401610499565b5f828152600784016020526040808220815160e081019092528054829060ff1660058111156115de576115de612c42565b60058111156115ef576115ef612c42565b815260200160018201805461160390612dd0565b80601f016020809104026020016040519081016040528092919081815260200182805461162f90612dd0565b801561167a5780601f106116515761010080835404028352916020019161167a565b820191905f5260205f20905b81548152906001019060200180831161165d57829003601f168201915b505050918352505060028201546001600160401b038082166020840152600160401b820481166040840152600160801b820481166060840152600160c01b909104811660808301526003928301541660a090910152909150815160058111156116e5576116e5612c42565b14158015611706575060018151600581111561170357611703612c42565b14155b1561172757805160405163170cc93360e21b81526104999190600401612e08565b60038151600581111561173c5761173c612c42565b0361174a576004815261174f565b600581525b8360080181602001516040516117659190613035565b90815260408051602
092819003830190205f908190558581526007870190925290208151815483929190829060ff191660018360058111156117a9576117a9612c42565b0217905550602082015160018201906117c29082613091565b5060408201516002820180546060850151608086015160a08701516001600160401b039586166001600160801b031990941693909317600160401b92861692909202919091176001600160801b0316600160801b918516919091026001600160c01b031617600160c01b9184169190910217905560c0909201516003909101805467ffffffffffffffff1916919092161790558051600581111561186857611868612c42565b60405184907f1c08e59656f1a18dc2da76826cdc52805c43e897a17c50faefb8ab3c1526cc16905f90a39196919550909350505050565b336118d17f9016d09d72d40fdae2fd8ceac6b6234c7706214fd39c1cd1e609a0528c199300546001600160a01b031690565b6001600160a01b031614610a485760405163118cdaa760e01b8152336004820152602401610499565b7f9016d09d72d40fdae2fd8ceac6b6234c7706214fd39c1cd1e609a0528c19930080546001600160a01b031981166001600160a01b03848116918217845560405192169182907f8be0079c531659141344cd1fd0a4f28419497f9722a3daafe3b4186f6b6457e0905f90a3505050565b611972612aac565b5f8281525f8051602061372d8339815191526020526040808220815160e0810190925280545f8051602061370d83398151915293929190829060ff1660058111156119bf576119bf612c42565b60058111156119d0576119d0612c42565b81526020016001820180546119e490612dd0565b80601f0160208091040260200160405190810160405280929190818152602001828054611a1090612dd0565b8015611a5b5780601f10611a3257610100808354040283529160200191611a5b565b820191905f5260205f20905b815481529060010190602001808311611a3e57829003601f168201915b50505091835250506002828101546001600160401b038082166020850152600160401b820481166040850152600160801b820481166060850152600160c01b9091048116608084015260039093015490921660a09091015290915081516005811115611ac957611ac9612c42565b14611afc575f8481526007830160205260409081902054905163170cc93360e21b81526104999160ff1690600401612e08565b60038152426001600160401b031660c08201525f84815260078301602052604090208151815483929190829060ff19166001836005811115611b4057611b40612c42565b021790555060208201516001820190611b599082613
091565b5060408201516002820180546060850151608086015160a08701516001600160401b039586166001600160801b031990941693909317600160401b92861692909202919091176001600160801b0316600160801b918516919091026001600160c01b031617600160c01b9184169190910217905560c0909201516003909101805467ffffffffffffffff1916919092161790555f611bf78582612377565b6080840151604080516001600160401b03909216825242602083015291935083925087917f13d58394cf269d48bcf927959a29a5ffee7c9924dafff8927ecdf3c48ffa7c67910160405180910390a3509392505050565b7fe92546d698950ddd38910d2e15ed1d923cd0a7b3dde9e2a6a3f380565559cb09545f9060ff16611c9257604051637fab81e560e01b815260040160405180910390fd5b5f8051602061370d83398151915242611cb1606086016040870161337f565b6001600160401b0316111580611ceb5750611ccf6202a30042613164565b611cdf606086016040870161337f565b6001600160401b031610155b15611d2557611d00606085016040860161337f565b604051635879da1360e11b81526001600160401b039091166004820152602401610499565b6030611d34602086018661333d565b905014611d6657611d48602085018561333d565b6040516326475b2f60e11b8152610499925060040190815260200190565b611d70848061333d565b90505f03611d9d57611d82848061333d565b604051633e08a12560e11b8152600401610499929190613401565b5f60088201611dac868061333d565b604051611dba929190613223565b90815260200160405180910390205414611df357611dd8848061333d565b60405163a41f772f60e01b8152600401610499929190613401565b611dfd835f6124ce565b6040805160e08101909152815481525f908190611f099060208101611e22898061333d565b8080601f0160208091040260200160405190810160405280939291908181526020018383808284375f92019190915250505090825250602090810190611e6a908a018a61333d565b8080601f0160208091040260200160405190810160405280939291908181526020018383808284375f92019190915250505090825250602001611eb360608a0160408b0161337f565b6001600160401b03168152602001611ece60608a018a61342f565b611ed790613443565b8152602001611ee960808a018a61342f565b611ef290613443565b8152602001876001600160401b03168152506126a8565b5f82815260068601602052604090209193509150611f278282613091565b508160088401611f37888061333d565b604051611f459
29190613223565b9081526040519081900360200181209190915563ee5b48eb60e01b81525f906005600160991b019063ee5b48eb90611f81908590600401612e16565b6020604051808303815f875af1158015611f9d573d5f803e3d5ffd5b505050506040513d601f19601f82011682018060405250810190611fc19190612e28565b6040805160e081019091529091508060018152602001611fe1898061333d565b8080601f0160208091040260200160405190810160405280939291908181526020018383808284375f9201829052509385525050506001600160401b0389166020808401829052604080850184905260608501929092526080840183905260a0909301829052868252600788019092522081518154829060ff1916600183600581111561207057612070612c42565b0217905550602082015160018201906120899082613091565b5060408201516002820180546060850151608086015160a08701516001600160401b039586166001600160801b031990941693909317600160401b92861692909202919091176001600160801b0316600160801b918516919091026001600160c01b031617600160c01b9184169190910217905560c0909201516003909101805467ffffffffffffffff19169190921617905580612127888061333d565b604051612135929190613223565b6040518091039020847fb77297e3befc691bfc864a81e241f83e2ef722b6e7becaa2ecec250c6d52b430898b6040016020810190612173919061337f565b604080516001600160401b0393841681529290911660208301520160405180910390a4509095945050505050565b5f8082516027146121d757825160405163cc92daa160e01b815263ffffffff909116600482015260276024820152604401610499565b5f805b6002811015612226576121ee816001613313565b6121f9906008613326565b61ffff1685828151811061220f5761220f612e84565b016020015160f81c901b91909117906001016121da565b5061ffff8116156122505760405163407b587360e01b815261ffff82166004820152602401610499565b5f805b60048110156122ab57612267816003613313565b612272906008613326565b63ffffffff1686612284836002613164565b8151811061229457612294612e84565b016020015160f81c901b9190911790600101612253565b5063ffffffff81166002146122d357604051635b60892f60e01b815260040160405180910390fd5b5f805b6020811015612328576122ea81601f613313565b6122f5906008613326565b87612301836006613164565b8151811061231157612311612e84565b016020015160f81c901b91909117906001016122d
6565b505f8660268151811061233d5761233d612e84565b016020015191976001600160f81b03199092161515965090945050505050565b612365612895565b61236e826128de565b610a5b816128f7565b5f8281525f8051602061372d833981519152602052604081206002015481905f8051602061370d83398151915290600160801b90046001600160401b03166123bf85826124ce565b5f6123c987612908565b5f8881526007850160205260408120600201805467ffffffffffffffff60801b1916600160801b6001600160401b038b16021790559091506005600160991b0163ee5b48eb6124198a858b611036565b6040518263ffffffff1660e01b81526004016124359190612e16565b6020604051808303815f875af1158015612451573d5f803e3d5ffd5b505050506040513d601f19601f820116820180604052508101906124759190612e28565b604080516001600160401b038a811682526020820184905282519394508516928b927f07de5ff35a674a8005e661f3333c907ca6333462808762d19dc7b3abb1a8c1df928290030190a3909450925050505b9250929050565b5f8051602061370d8339815191525f6001600160401b038084169085161115612502576124fb838561350a565b905061250f565b61250c848461350a565b90505b6040805160808101825260028401548082526003850154602083015260048501549282019290925260058401546001600160401b031660608201524291158061257157506001840154815161256d916001600160401b031690613164565b8210155b15612597576001600160401b0383166060820152818152604081015160208201526125b6565b82816060018181516125a9919061352a565b6001600160401b03169052505b60608101516125c690606461354a565b602082015160018601546001600160401b0392909216916125f19190600160401b900460ff16613326565b101561262157606081015160405163dfae880160e01b81526001600160401b039091166004820152602401610499565b856001600160401b03168160400181815161263c9190613164565b9052506040810180516001600160401b038716919061265c908390613313565b905250805160028501556020810151600385015560408101516004850155606001516005909301805467ffffffffffffffff19166001600160401b039094169390931790925550505050565b5f60608260400151516030146126d15760405163180ffa0d60e01b815260040160405180910390fd5b82516020808501518051604080880151606089015160808a01518051908701515193515f98612712988a9860019892979296909590949093909291016
13575565b60405160208183030381529060405290505f5b846080015160200151518110156127845781856080015160200151828151811061275157612751612e84565b602002602001015160405160200161276a92919061362f565b60408051601f198184030181529190529150600101612725565b5060a08401518051602091820151516040516127a4938593929101613665565b60405160208183030381529060405290505f5b8460a00151602001515181101561281657818560a001516020015182815181106127e3576127e3612e84565b60200260200101516040516020016127fc92919061362f565b60408051601f1981840301815291905291506001016127b7565b5060c084015160405161282d9183916020016136a0565b604051602081830303815290604052905060028160405161284e9190613035565b602060405180830381855afa158015612869573d5f803e3d5ffd5b5050506040513d601f19601f8201168201806040525081019061288c9190612e28565b94909350915050565b7ff0c57e16840df040f15088dc2f81fe391c3923bec73e23a9662efc9c229c6a0054600160401b900460ff16610a4857604051631afcd79f60e31b815260040160405180910390fd5b6128e6612895565b6128ee61297d565b610fd881612985565b6128ff612895565b610fd881612a6d565b5f8181525f8051602061372d8339815191526020526040812060020180545f8051602061370d833981519152919060089061295290600160401b90046001600160401b03166136d1565b91906101000a8154816001600160401b0302191690836001600160401b031602179055915050919050565b610a48612895565b61298d612895565b80355f8051602061370d83398151915290815560146129b260608401604085016136ec565b60ff1611806129d157506129cc60608301604084016136ec565b60ff16155b15612a05576129e660608301604084016136ec565b604051634a59bbff60e11b815260ff9091166004820152602401610499565b612a1560608301604084016136ec565b60018201805460ff92909216600160401b0260ff60401b19909216919091179055612a46604083016020840161337f565b600191909101805467ffffffffffffffff19166001600160401b0390921691909117905550565b610fa6612895565b508054612a8190612dd0565b5f825580601f10612a90575050565b601f0160209004905f5260205f2090810190610fd89190612ae9565b6040805160e08101909152805f81526060602082018190525f604083018190529082018190526080820181905260a0820181905260c09091015290565b5b80821115612afd575f81556
00101612aea565b5090565b5f60208284031215612b11575f80fd5b5035919050565b803563ffffffff81168114612b2b575f80fd5b919050565b5f8060408385031215612b41575f80fd5b82356001600160401b03811115612b56575f80fd5b830160808186031215612b67575f80fd5b9150612b7560208401612b18565b90509250929050565b5f60208284031215612b8e575f80fd5b610a7282612b18565b80356001600160401b0381168114612b2b575f80fd5b5f8060408385031215612bbe575f80fd5b82356001600160401b03811115612bd3575f80fd5b830160a08186031215612be4575f80fd5b9150612b7560208401612b97565b6001600160a01b0381168114610fd8575f80fd5b5f808284036080811215612c18575f80fd5b6060811215612c25575f80fd5b508291506060830135612c3781612bf2565b809150509250929050565b634e487b7160e01b5f52602160045260245ffd5b60068110612c7257634e487b7160e01b5f52602160045260245ffd5b9052565b5f5b83811015612c90578181015183820152602001612c78565b50505f910152565b5f8151808452612caf816020860160208601612c76565b601f01601f19169290920160200192915050565b60208152612cd5602082018351612c56565b5f602083015160e06040840152612cf0610100840182612c98565b905060408401516001600160401b0380821660608601528060608701511660808601528060808701511660a08601528060a08701511660c08601528060c08701511660e086015250508091505092915050565b5f60208284031215612d53575f80fd5b8135612d5e81612bf2565b9392505050565b5f8060208385031215612d76575f80fd5b82356001600160401b0380821115612d8c575f80fd5b818501915085601f830112612d9f575f80fd5b813581811115612dad575f80fd5b866020828501011115612dbe575f80fd5b60209290920196919550909350505050565b600181811c90821680612de457607f821691505b602082108103612e0257634e487b7160e01b5f52602260045260245ffd5b50919050565b60208101610a758284612c56565b602081525f610a726020830184612c98565b5f60208284031215612e38575f80fd5b5051919050565b5f808335601e19843603018112612e54575f80fd5b8301803591506001600160401b03821115612e6d575f80fd5b6020019150600581901b36038213156124c7575f80fd5b634e487b7160e01b5f52603260045260245ffd5b5f8235605e19833603018112612eac575f80fd5b9190910192915050565b634e487b7160e01b5f52604160045260245ffd5b604051606081016001600160401b03811182821
01715612eec57612eec612eb6565b60405290565b604080519081016001600160401b0381118282101715612eec57612eec612eb6565b604051601f8201601f191681016001600160401b0381118282101715612f3c57612f3c612eb6565b604052919050565b5f6001600160401b03821115612f5c57612f5c612eb6565b50601f01601f191660200190565b5f82601f830112612f79575f80fd5b8135612f8c612f8782612f44565b612f14565b818152846020838601011115612fa0575f80fd5b816020850160208301375f918101602001919091529392505050565b5f60608236031215612fcc575f80fd5b612fd4612eca565b82356001600160401b0380821115612fea575f80fd5b612ff636838701612f6a565b8352602085013591508082111561300b575f80fd5b5061301836828601612f6a565b60208301525061302a60408401612b97565b604082015292915050565b5f8251612eac818460208701612c76565b601f821115610a1e57805f5260205f20601f840160051c8101602085101561306b5750805b601f840160051c820191505b8181101561308a575f8155600101613077565b5050505050565b81516001600160401b038111156130aa576130aa612eb6565b6130be816130b88454612dd0565b84613046565b602080601f8311600181146130f1575f84156130da5750858301515b5f19600386901b1c1916600185901b178555613148565b5f85815260208120601f198616915b8281101561311f57888601518255948401946001909101908401613100565b508582101561313c57878501515f19600388901b60f8161c191681555b505060018460011b0185555b505050505050565b634e487b7160e01b5f52601160045260245ffd5b80820180821115610a7557610a75613150565b5f63ffffffff80831681810361318f5761318f613150565b6001019392505050565b5f60208083525f84546131ab81612dd0565b806020870152604060018084165f81146131cc57600181146131e857613215565b60ff19851660408a0152604084151560051b8a01019550613215565b895f5260205f205f5b8581101561320c5781548b82018601529083019088016131f1565b8a016040019650505b509398975050505050505050565b818382375f9101908152919050565b80518015158114612b2b575f80fd5b5f8060408385031215613252575f80fd5b82516001600160401b0380821115613268575f80fd5b908401906060828703121561327b575f80fd5b613283612eca565b8251815260208084015161329681612bf2565b828201526040840151838111156132ab575f80fd5b80850194505087601f8501126132bf575f80fd5b835192506132c
f612f8784612f44565b83815288828587010111156132e2575f80fd5b6132f184838301848801612c76565b80604084015250819550613306818801613232565b9450505050509250929050565b81810381811115610a7557610a75613150565b8082028115828204841417610a7557610a75613150565b5f808335601e19843603018112613352575f80fd5b8301803591506001600160401b0382111561336b575f80fd5b6020019150368190038213156124c7575f80fd5b5f6020828403121561338f575f80fd5b610a7282612b97565b5f88516133a9818460208d01612c76565b60e089901b6001600160e01b031916908301908152868860048301378681019050600481015f8152858782375060c09390931b6001600160c01b0319166004939094019283019390935250600c019695505050505050565b60208152816020820152818360408301375f818301604090810191909152601f909201601f19160101919050565b5f8235603e19833603018112612eac575f80fd5b5f60408236031215613453575f80fd5b61345b612ef2565b61346483612b18565b81526020808401356001600160401b0380821115613480575f80fd5b9085019036601f830112613492575f80fd5b8135818111156134a4576134a4612eb6565b8060051b91506134b5848301612f14565b81815291830184019184810190368411156134ce575f80fd5b938501935b838510156134f857843592506134e883612bf2565b82825293850193908501906134d3565b94860194909452509295945050505050565b6001600160401b0382811682821603908082111561119457611194613150565b6001600160401b0381811683821601908082111561119457611194613150565b6001600160401b0381811683821602808216919082811461356d5761356d613150565b505092915050565b61ffff60f01b8a60f01b1681525f63ffffffff60e01b808b60e01b166002840152896006840152808960e01b1660268401525086516135bb81602a850160208b01612c76565b8651908301906135d281602a840160208b01612c76565b60c087901b6001600160c01b031916602a9290910191820152613604603282018660e01b6001600160e01b0319169052565b61361d603682018560e01b6001600160e01b0319169052565b603a019b9a5050505050505050505050565b5f8351613640818460208801612c76565b60609390931b6bffffffffffffffffffffffff19169190920190815260140192915050565b5f8451613676818460208901612c76565b6001600160e01b031960e095861b8116919093019081529290931b16600482015260080192915050565b5f83516136b1818460208801612
c76565b60c09390931b6001600160c01b0319169190920190815260080192915050565b5f6001600160401b0380831681810361318f5761318f613150565b5f602082840312156136fc575f80fd5b813560ff81168114612d5e575f80fdfee92546d698950ddd38910d2e15ed1d923cd0a7b3dde9e2a6a3f380565559cb00e92546d698950ddd38910d2e15ed1d923cd0a7b3dde9e2a6a3f380565559cb07a164736f6c6343000819000a", "balance": "0x0", "nonce": "0x1" }, "0feedc0de0000000000000000000000000000000": { "code": "0x60806040523661001357610011610017565b005b6100115b61001f610169565b6001600160a01b0316330361015f5760606001600160e01b0319600035166364d3180d60e11b810161005a5761005361019c565b9150610157565b63587086bd60e11b6001600160e01b031982160161007a576100536101f3565b63070d7c6960e41b6001600160e01b031982160161009a57610053610239565b621eb96f60e61b6001600160e01b03198216016100b95761005361026a565b63a39f25e560e01b6001600160e01b03198216016100d9576100536102aa565b60405162461bcd60e51b815260206004820152604260248201527f5472616e73706172656e745570677261646561626c6550726f78793a2061646d60448201527f696e2063616e6e6f742066616c6c6261636b20746f2070726f78792074617267606482015261195d60f21b608482015260a4015b60405180910390fd5b815160208301f35b6101676102be565b565b60007fb53127684a568b3173ae13b9f8a6016e243e63b6e8ee1178d6a717850b5d61035b546001600160a01b0316919050565b60606101a66102ce565b60006101b53660048184610683565b8101906101c291906106c9565b90506101df816040518060200160405280600081525060006102d9565b505060408051602081019091526000815290565b60606000806102053660048184610683565b81019061021291906106fa565b91509150610222828260016102d9565b604051806020016040528060008152509250505090565b60606102436102ce565b60006102523660048184610683565b81019061025f91906106c9565b90506101df81610305565b60606102746102ce565b600061027e610169565b604080516001600160a01b03831660208201529192500160405160208183030381529060405291505090565b60606102b46102ce565b600061027e61035c565b6101676102c961035c565b61036b565b341561016757600080fd5b6102e28361038f565b6000825111806102ef5750805b15610300576102fe83836103cf565b505b505050565b7f7e644d79422f1
7c01e4894b5f4f588d331ebfa28653d42ae832dc59e38c9798f61032e610169565b604080516001600160a01b03928316815291841660208301520160405180910390a1610359816103fb565b50565b60006103666104a4565b905090565b3660008037600080366000845af43d6000803e80801561038a573d6000f35b3d6000fd5b610398816104cc565b6040516001600160a01b038216907fbc7cd75a20ee27fd9adebab32041f755214dbc6bffa90cc0225b39da2e5c2d3b90600090a250565b60606103f4838360405180606001604052806027815260200161083060279139610560565b9392505050565b6001600160a01b0381166104605760405162461bcd60e51b815260206004820152602660248201527f455243313936373a206e65772061646d696e20697320746865207a65726f206160448201526564647265737360d01b606482015260840161014e565b807fb53127684a568b3173ae13b9f8a6016e243e63b6e8ee1178d6a717850b5d61035b80546001600160a01b0319166001600160a01b039290921691909117905550565b60007f360894a13ba1a3210667c828492db98dca3e2076cc3735a920a3ca505d382bbc61018d565b6001600160a01b0381163b6105395760405162461bcd60e51b815260206004820152602d60248201527f455243313936373a206e657720696d706c656d656e746174696f6e206973206e60448201526c1bdd08184818dbdb9d1c9858dd609a1b606482015260840161014e565b807f360894a13ba1a3210667c828492db98dca3e2076cc3735a920a3ca505d382bbc610483565b6060600080856001600160a01b03168560405161057d91906107e0565b600060405180830381855af49150503d80600081146105b8576040519150601f19603f3d011682016040523d82523d6000602084013e6105bd565b606091505b50915091506105ce868383876105d8565b9695505050505050565b60608315610647578251600003610640576001600160a01b0385163b6106405760405162461bcd60e51b815260206004820152601d60248201527f416464726573733a2063616c6c20746f206e6f6e2d636f6e7472616374000000604482015260640161014e565b5081610651565b6106518383610659565b949350505050565b8151156106695781518083602001fd5b8060405162461bcd60e51b815260040161014e91906107fc565b6000808585111561069357600080fd5b838611156106a057600080fd5b5050820193919092039150565b80356001600160a01b03811681146106c457600080fd5b919050565b6000602082840312156106db57600080fd5b6103f4826106ad565b634e487b7160e01b60005260416004526
0246000fd5b6000806040838503121561070d57600080fd5b610716836106ad565b9150602083013567ffffffffffffffff8082111561073357600080fd5b818501915085601f83011261074757600080fd5b813581811115610759576107596106e4565b604051601f8201601f19908116603f01168101908382118183101715610781576107816106e4565b8160405282815288602084870101111561079a57600080fd5b8260208601602083013760006020848301015280955050505050509250929050565b60005b838110156107d75781810151838201526020016107bf565b50506000910152565b600082516107f28184602087016107bc565b9190910192915050565b602081526000825180602084015261081b8160408501602087016107bc565b601f01601f1916919091016040019291505056fe416464726573733a206c6f772d6c6576656c2064656c65676174652063616c6c206661696c6564a2646970667358221220b22984eb1f3348f5b2148862b6f80392e497e3c65d0d2cfbb5e53d737e5a6c6a64736f6c63430008190033", "storage": { "0x360894a13ba1a3210667c828492db98dca3e2076cc3735a920a3ca505d382bbc": "0x0000000000000000000000000c0deba5e0000000000000000000000000000000", "0xb53127684a568b3173ae13b9f8a6016e243e63b6e8ee1178d6a717850b5d6103": "0x000000000000000000000000c0ffee1234567890abcdef1234567890abcdef34" }, "balance": "0x0", "nonce": "0x1" }, "32aaa04b1c166d02b0ee152dd221367687f72108": { "balance": "0x2086ac351052600000" }, "48a90c916ad48a72f49fa72a9f889c1ba9cc9b4b": { "balance": "0x8ac7230489e80000" }, "8db97c7cece249c2b98bdc0226cc4c2a57bf52fc": { "balance": "0xd3c21bcecceda1000000" }, "c0ffee1234567890abcdef1234567890abcdef34": { "code": 
"0x60806040526004361061007b5760003560e01c80639623609d1161004e5780639623609d1461011157806399a88ec414610124578063f2fde38b14610144578063f3b7dead1461016457600080fd5b8063204e1c7a14610080578063715018a6146100bc5780637eff275e146100d35780638da5cb5b146100f3575b600080fd5b34801561008c57600080fd5b506100a061009b366004610499565b610184565b6040516001600160a01b03909116815260200160405180910390f35b3480156100c857600080fd5b506100d1610215565b005b3480156100df57600080fd5b506100d16100ee3660046104bd565b610229565b3480156100ff57600080fd5b506000546001600160a01b03166100a0565b6100d161011f36600461050c565b610291565b34801561013057600080fd5b506100d161013f3660046104bd565b610300565b34801561015057600080fd5b506100d161015f366004610499565b610336565b34801561017057600080fd5b506100a061017f366004610499565b6103b4565b6000806000836001600160a01b03166040516101aa90635c60da1b60e01b815260040190565b600060405180830381855afa9150503d80600081146101e5576040519150601f19603f3d011682016040523d82523d6000602084013e6101ea565b606091505b5091509150816101f957600080fd5b8080602001905181019061020d91906105e2565b949350505050565b61021d6103da565b6102276000610434565b565b6102316103da565b6040516308f2839760e41b81526001600160a01b038281166004830152831690638f283970906024015b600060405180830381600087803b15801561027557600080fd5b505af1158015610289573d6000803e3d6000fd5b505050505050565b6102996103da565b60405163278f794360e11b81526001600160a01b03841690634f1ef2869034906102c990869086906004016105ff565b6000604051808303818588803b1580156102e257600080fd5b505af11580156102f6573d6000803e3d6000fd5b5050505050505050565b6103086103da565b604051631b2ce7f360e11b81526001600160a01b038281166004830152831690633659cfe69060240161025b565b61033e6103da565b6001600160a01b0381166103a85760405162461bcd60e51b815260206004820152602660248201527f4f776e61626c653a206e6577206f776e657220697320746865207a65726f206160448201526564647265737360d01b60648201526084015b60405180910390fd5b6103b181610434565b50565b6000806000836001600160a01b03166040516101aa906303e1469160e61b815260040190565b6000546001600160a01b031
633146102275760405162461bcd60e51b815260206004820181905260248201527f4f776e61626c653a2063616c6c6572206973206e6f7420746865206f776e6572604482015260640161039f565b600080546001600160a01b038381166001600160a01b0319831681178455604051919092169283917f8be0079c531659141344cd1fd0a4f28419497f9722a3daafe3b4186f6b6457e09190a35050565b6001600160a01b03811681146103b157600080fd5b6000602082840312156104ab57600080fd5b81356104b681610484565b9392505050565b600080604083850312156104d057600080fd5b82356104db81610484565b915060208301356104eb81610484565b809150509250929050565b634e487b7160e01b600052604160045260246000fd5b60008060006060848603121561052157600080fd5b833561052c81610484565b9250602084013561053c81610484565b9150604084013567ffffffffffffffff8082111561055957600080fd5b818601915086601f83011261056d57600080fd5b81358181111561057f5761057f6104f6565b604051601f8201601f19908116603f011681019083821181831017156105a7576105a76104f6565b816040528281528960208487010111156105c057600080fd5b8260208601602083013760006020848301015280955050505050509250925092565b6000602082840312156105f457600080fd5b81516104b681610484565b60018060a01b03831681526000602060406020840152835180604085015260005b8181101561063c57858101830151858201606001528201610620565b506000606082860101526060601f19601f83011685010192505050939250505056fea264697066735822122019f39983a6fd15f3cffa764efd6fb0234ffe8d71051b3ebddc0b6bd99f87fa9764736f6c63430008190033", "storage": { "0x0000000000000000000000000000000000000000000000000000000000000000": "0x00000000000000000000000048a90c916ad48a72f49fa72a9f889c1ba9cc9b4b" }, "balance": "0x0", "nonce": "0x1" } }, "airdropHash": "0x0000000000000000000000000000000000000000000000000000000000000000", "airdropAmount": null, "number": "0x0", "gasUsed": "0x0", "parentHash": "0x0000000000000000000000000000000000000000000000000000000000000000", "baseFeePerGas": null, "excessBlobGas": null, "blobGasUsed": null } ``` Open a text editor and copy the preceding text into a file called `custom_genesis.json`. 
For a full breakdown of the genesis file, see the [Genesis File](/docs/avalanche-l1s/evm-configuration/customize-avalanche-l1#genesis). The `timestamp` field is the Unix timestamp of the genesis block. `0x677eb2d9` represents the timestamp 1736356569, the time at which this tutorial was written. Use the current timestamp when you create your own genesis file. Create the Avalanche L1 Configuration[​](#create-the-avalanche-l1-configuration "Direct link to heading") --------------------------------------------------------------------------------------------- Now that you have your binary, it's time to create the Avalanche L1 configuration. This tutorial uses `myblockchain` as its Avalanche L1 name. Invoke the Avalanche L1 Creation Wizard with this command: ```bash avalanche blockchain create myblockchain ``` ### Choose Your VM[​](#choose-your-vm "Direct link to heading") Select `Custom` for your VM. ```bash Use the arrow keys to navigate: ↓ ↑ → ← ? Choose your VM: Subnet-EVM ▸ Custom ``` ### Select Validator Manager type ```bash Which validator management type would you like to use in your blockchain?: ▸ Proof Of Authority Proof Of Stake Explain the difference ``` Select the validator manager type that matches the `deployedBytecode` in your genesis. This tutorial uses Proof Of Authority. Next, select the key that will be used to control the PoA ValidatorManager contract: ```bash Which address do you want to enable as controller of ValidatorManager contract?: ▸ Get address from an existing stored key (created from avalanche key create or avalanche key import) Custom ``` This key can add validators, remove validators, and change the weight of validators in the set. ### Enter the Path to Your Genesis[​](#enter-the-path-to-your-genesis "Direct link to heading") Enter the path to the genesis file you created in this [step](#create-a-custom-genesis). ```bash ✔ Enter path to custom genesis: ./custom_genesis.json ``` ### ICM Setup ```bash ?
Do you want to connect your blockchain with other blockchains or the C-Chain?: Yes, I want to enable my blockchain to interoperate with other blockchains and the C-Chain ▸ No, I want to run my blockchain isolated Explain the difference ``` Select no for now, as you can set up ICM after deployment. ### Enter the Path to Your VM Binary[​](#enter-the-path-to-your-vm-binary "Direct link to heading") ```bash ? How do you want to set up the VM binary?: Download and build from a git repository (recommended for cloud deployments) ▸ I already have a VM binary (local network deployments only) ``` Select `I already have a VM binary`. Next, enter the path to your VM binary. This should be the path to the `custom_vm.bin` you created [previously](#modify-and-build-subnet-evm). ```bash ✔ Enter path to vm binary: ./custom_vm.bin ``` ### Wrapping Up[​](#wrapping-up "Direct link to heading") If all worked successfully, the command prints: ```bash ✓ Successfully created blockchain configuration Run 'avalanche blockchain describe' to view all created addresses and what their roles are ``` Now it's time to deploy it. Deploy the Avalanche L1 Locally[​](#deploy-the-avalanche-l1-locally "Direct link to heading") --------------------------------------------------------------------------------- To deploy your Avalanche L1, run: ```bash avalanche blockchain deploy myblockchain ``` Make sure to substitute the name of your Avalanche L1 if you used a different one than `myblockchain`. Next, select `Local Network`: ```bash Use the arrow keys to navigate: ↓ ↑ → ← ? Choose a network to deploy on: ▸ Local Network Fuji Mainnet ``` This command boots a five-node Avalanche network on your machine. It needs to download the latest versions of AvalancheGo and Subnet-EVM. The command may take a couple of minutes to run.
If all works as expected, the command output should look something like this: ```bash Deploying [myblockchain] to Local Network Backend controller started, pid: 49158, output at: /Users/l1-dev/.avalanche-cli/runs/server_20250108_140532/avalanche-cli-backend.log Installing avalanchego-v1.12.1... avalanchego-v1.12.1 installation successful AvalancheGo path: /Users/l1-dev/.avalanche-cli/bin/avalanchego/avalanchego-v1.12.1/avalanchego Booting Network. Wait until healthy... Node logs directory: /Users/l1-dev/.avalanche-cli/runs/network_20250108_140538/node/logs Network ready to use. Your blockchain control keys: [P-custom18jma8ppw3nhx5r4ap8clazz0dps7rv5u9xde7p] Your subnet auth keys for chain creation: [P-custom18jma8ppw3nhx5r4ap8clazz0dps7rv5u9xde7p] CreateSubnetTx fee: 0.000010278 AVAX Subnet has been created with ID: GEieSy2doZ96bpMo5CuHPaX1LvaxpKZ9C72L22j94t6YyUb6X Now creating blockchain... CreateChainTx fee: 0.000095896 AVAX +-------------------------------------------------------------------+ | DEPLOYMENT RESULTS | +---------------+---------------------------------------------------+ | Chain Name | myblockchain | +---------------+---------------------------------------------------+ | Subnet ID | GEieSy2doZ96bpMo5CuHPaX1LvaxpKZ9C72L22j94t6YyUb6X | +---------------+---------------------------------------------------+ | VM ID | qDNV9vtxZYYNqm7TN1mYBuaaknLdefDbFK8bFmMLTJQJKaWjV | +---------------+---------------------------------------------------+ | Blockchain ID | 9FrNVEPkVpQyWDECQhEPDuT9oK98EhWQdg7anypKujVt9uSVT | +---------------+ | | P-Chain TXID | | +---------------+---------------------------------------------------+ Restarting node node2 to track subnet Restarting node node1 to track subnet Waiting for http://127.0.0.1:9652/ext/bc/9FrNVEPkVpQyWDECQhEPDuT9oK98EhWQdg7anypKujVt9uSVT/rpc to be available Waiting for http://127.0.0.1:9650/ext/bc/9FrNVEPkVpQyWDECQhEPDuT9oK98EhWQdg7anypKujVt9uSVT/rpc to be available ✓ Local Network successfully tracking myblockchain 
✓ Subnet is successfully deployed Local Network ``` Use the describe command to find your L1's RPC: ```bash avalanche blockchain describe myblockchain ``` You can use the `RPC URL` to connect to and interact with your Avalanche L1. Interact with Your Avalanche L1[​](#interact-with-your-avalanche-l1 "Direct link to heading") --------------------------------------------------------------------------------- ### Check the Version[​](#check-the-version "Direct link to heading") You can verify that your Avalanche L1 has deployed correctly by querying the local node to see what Avalanche L1s it's running. You need to use the [`getNodeVersion`](/docs/rpcs/other/info-rpc#infogetnodeversion) endpoint. Try running this curl command: ```bash curl --location --request POST 'http://127.0.0.1:9650/ext/info' \ --header 'Content-Type: application/json' \ --data-raw '{ "jsonrpc":"2.0", "id" :1, "method" :"info.getNodeVersion", "params" :{ } }' ``` The command returns a list of all the VMs your local node is currently running along with their versions. ```json { "jsonrpc": "2.0", "result": { "version": "avalanche/1.10.8", "databaseVersion": "v1.4.5", "rpcProtocolVersion": "27", "gitCommit": "e70a17d9d988b5067f3ef5c4a057f15ae1271ac4", "vmVersions": { "avm": "v1.10.8", "evm": "v0.12.5", "platform": "v1.10.8", "qDMnZ895HKpRXA2wEvujJew8nNFEkvcrH5frCR9T1Suk1sREe": "v0.5.4@c0fe6506a40da466285f37dd0d3c044f494cce32" } }, "id": 1 } ``` Your results may be slightly different, but you can see that in addition to the X-Chain's `avm`, the C-Chain's `evm`, and the P-Chain's `platform` VM, the node is running the custom VM with commit `c0fe6506a40da466285f37dd0d3c044f494cce32`. ### Check a Balance[​](#check-a-balance "Direct link to heading") If you used the default genesis, your custom VM has a prefunded address. You can verify its balance with a curl command. Make sure to substitute the command's URL with the `RPC URL` from your deployment output.
```bash curl --location --request POST 'http://127.0.0.1:9650/ext/bc/myblockchain/rpc' \ --header 'Content-Type: application/json' \ --data-raw '{ "jsonrpc": "2.0", "method": "eth_getBalance", "params": [ "0x8db97c7cece249c2b98bdc0226cc4c2a57bf52fc", "latest" ], "id": 1 }' ``` The command should return: ```json { "jsonrpc": "2.0", "id": 1, "result": "0xd3c21bcecceda1000000" } ``` The balance is hex encoded: `0xd3c21bcecceda1000000` is 10^24 base units, which is 1 million tokens with 18 decimals. Note that this command doesn't work on all custom VMs, only on VMs that implement the EVM's `eth_getBalance` interface. Next Steps[​](#next-steps "Direct link to heading") --------------------------------------------------- You've now unlocked the ability to deploy custom VMs. Go build something cool! # With Multisig Auth (/docs/tooling/avalanche-cli/create-deploy-avalanche-l1s/deploy-with-multisig-auth) --- title: With Multisig Auth description: Learn how to create an Avalanche L1 with a multisig authorization. --- Avalanche L1 creators can control critical Avalanche L1 operations with an N of M multisig. This multisig must be set up at deployment time and can't be edited afterward. Multisigs are available on both Fuji Testnet and Mainnet. To set up your multisig, you need to know the P-Chain address of each key holder and what you want your signing threshold to be. Avalanche-CLI requires Ledgers for Mainnet deployments. This how-to guide assumes the use of Ledgers for setting up your multisig.
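Before gathering key holders, it can help to sanity-check the P-Chain addresses they give you. The sketch below uses standard shell tools and checks only the `P-avax1` prefix, Bech32 character set, and length, not the full Bech32 checksum, so a typo can still slip through; the `is_pchain_addr` helper name is illustrative, not part of Avalanche-CLI:

```shell
# Rough format check for P-Chain mainnet addresses before adding them as
# control keys. Bech32 data characters exclude 1, b, i, and o.
is_pchain_addr() {
  echo "$1" | grep -Eq '^P-avax1[02-9ac-hj-np-z]{38}$'
}

for addr in \
  "P-avax1kdzq569g2c9urm9887cmldlsa3w3jhxe0knfy5" \
  "P-avax1typo"; do
  if is_pchain_addr "$addr"; then
    echo "ok:  $addr"
  else
    echo "bad: $addr"
  fi
done
```

A checksum-aware tool is still the safer final check; this only catches gross formatting mistakes early.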
## Prerequisites - [`Avalanche-CLI`](https://github.com/ava-labs/avalanche-cli) installed - Familiarity with the process of [Deploying an Avalanche L1 on Testnet](/docs/tooling/avalanche-cli/create-deploy-avalanche-l1s/deploy-on-fuji-testnet) and [Deploying a Permissioned Avalanche L1 on Mainnet](/docs/tooling/avalanche-cli/create-deploy-avalanche-l1s/deploy-on-mainnet) - Multiple Ledger devices [configured for Avalanche](/docs/tooling/avalanche-cli/create-deploy-avalanche-l1s/deploy-on-mainnet#setting-up-your-ledger) - An Avalanche L1 configuration ready to deploy to either Fuji Testnet or Mainnet Getting Started[​](#getting-started "Direct link to heading") ------------------------------------------------------------- When issuing the transactions to create the Avalanche L1, you need to sign the TXs with multiple keys from the multisig. ### Specify Network[​](#specify-network "Direct link to heading") Start the Avalanche L1 deployment with: ```bash avalanche blockchain deploy testblockchain ``` The first step is to specify `Fuji` or `Mainnet` as the network: ```bash Use the arrow keys to navigate: ↓ ↑ → ← ? Choose a network to deploy on: Local Network Fuji ▸ Mainnet ``` ```bash Deploying [testblockchain] to Mainnet ``` Ledger is automatically recognized as the signature mechanism on `Mainnet`. After that, the CLI shows the first `Mainnet` Ledger address. ```bash Ledger address: P-avax1kdzq569g2c9urm9887cmldlsa3w3jhxe0knfy5 ``` ### Set Control Keys[​](#set-control-keys "Direct link to heading") Next, the CLI asks you to specify the control keys. This is where you set up your multisig. ```bash Configure which addresses may make changes to the Avalanche L1. These addresses are known as your control keys. You are going to also set how many control keys are required to make an Avalanche L1 change (the threshold). Use the arrow keys to navigate: ↓ ↑ → ← ?
How would you like to set your control keys?: ▸ Use ledger address Custom list ``` Select `Custom list` and add every address that you'd like to be a key holder on the multisig. ```bash ✔ Custom list ? Enter control keys: ▸ Add Delete Preview More Info ↓ Done ``` Use the given menu to add each key, and select `Done` when finished. The output at this point should look something like: ```bash ✔ Custom list ✔ Add Enter P-Chain address (Example: P-...): P-avax1wryu62weky9qjlp40cpmnqf6ml2hytnagj5q28 ✔ Add Enter P-Chain address (Example: P-...): P-avax1kdzq569g2c9urm9887cmldlsa3w3jhxe0knfy5 ✔ Add Enter P-Chain address (Example: P-...): P-avax12gcy0xl0al6gcjrt0395xqlcuq078ml93wl5h8 ✔ Add Enter P-Chain address (Example: P-...): P-avax1g7nkguzg8yju8cq3ndzc9lql2yg69s9ejqa2af ✔ Add Enter P-Chain address (Example: P-...): P-avax1g4eryh40dtcsltmxn9zk925ny07gdq2xyjtf4g ✔ Done Your Avalanche L1's control keys: [P-avax1wryu62weky9qjlp40cpmnqf6ml2hytnagj5q28 P-avax1kdzq569g2c9urm9887cmldlsa3w3jhxe0knfy5 P-avax12gcy0xl0al6gcjrt0395xqlcuq078ml93wl5h8 P-avax1g7nkguzg8yju8cq3ndzc9lql2yg69s9ejqa2af P-avax1g4eryh40dtcsltmxn9zk925ny07gdq2xyjtf4g] ``` When deploying an Avalanche L1 with a Ledger, you must include the Ledger's default address determined in [Specify Network](#specify-network) for the deployment to succeed. Otherwise, you may see an error like: ``` Error: wallet does not contain Avalanche L1 auth keys exit status 1 ``` ### Set Threshold[​](#set-threshold "Direct link to heading") Next, specify the threshold. In your N of M multisig, your threshold is N, and M is the number of control keys you added in the previous step. ```bash Use the arrow keys to navigate: ↓ ↑ → ← ? Select required number of control key signatures to make an Avalanche L1 change: ▸ 1 2 3 4 5 ``` ### Specify Control Keys to Sign the Chain Creation TX[​](#specify-control-keys-to-sign-the-chain-creation-tx "Direct link to heading") You now need N of your key holders to sign the Avalanche L1 deployment transaction.
You must now select which addresses will sign the TX. ```bash ? Choose an Avalanche L1 auth key: ▸ P-avax1wryu62weky9qjlp40cpmnqf6ml2hytnagj5q28 P-avax1kdzq569g2c9urm9887cmldlsa3w3jhxe0knfy5 P-avax12gcy0xl0al6gcjrt0395xqlcuq078ml93wl5h8 P-avax1g7nkguzg8yju8cq3ndzc9lql2yg69s9ejqa2af P-avax1g4eryh40dtcsltmxn9zk925ny07gdq2xyjtf4g ``` A successful control key selection looks like: ```bash ✔ 2 ✔ P-avax1kdzq569g2c9urm9887cmldlsa3w3jhxe0knfy5 ✔ P-avax1g7nkguzg8yju8cq3ndzc9lql2yg69s9ejqa2af Your subnet auth keys for chain creation: [P-avax1kdzq569g2c9urm9887cmldlsa3w3jhxe0knfy5 P-avax1g7nkguzg8yju8cq3ndzc9lql2yg69s9ejqa2af] *** Please sign Avalanche L1 creation hash on the ledger device *** ``` #### Potential Errors[​](#potential-errors "Direct link to heading") If the currently connected Ledger address isn't included in your TX signing group, the operation fails with: ```bash ✔ 2 ✔ P-avax1g7nkguzg8yju8cq3ndzc9lql2yg69s9ejqa2af ✔ P-avax1g4eryh40dtcsltmxn9zk925ny07gdq2xyjtf4g Your Avalanche L1 auth keys for chain creation: [P-avax1g7nkguzg8yju8cq3ndzc9lql2yg69s9ejqa2af P-avax1g4eryh40dtcsltmxn9zk925ny07gdq2xyjtf4g] Error: wallet does not contain Avalanche L1 auth keys exit status 1 ``` This can happen either because the control keys specified in the previous step don't include the Ledger address, or because the Ledger's address wasn't selected as a signing key in the current step.
If the user has the correct address but doesn't have sufficient balance to pay for the TX, the operation fails with: ```bash ✔ 2 ✔ P-avax1g7nkguzg8yju8cq3ndzc9lql2yg69s9ejqa2af ✔ P-avax1g4eryh40dtcsltmxn9zk925ny07gdq2xyjtf4g Your Avalanche L1 auth keys for chain creation: [P-avax1g7nkguzg8yju8cq3ndzc9lql2yg69s9ejqa2af P-avax1g4eryh40dtcsltmxn9zk925ny07gdq2xyjtf4g] *** Please sign Avalanche L1 creation hash on the ledger device *** Error: insufficient funds: provided UTXOs need 1000000000 more units of asset "rgNLkDPpANwqg3pHC4o9aGJmf2YU4GgTVUMRKAdnKodihkqgr" exit status 1 ``` ### Sign Avalanche L1 Deployment TX with the First Address[​](#sign-avalanche-l1-deployment-tx-with-the-first-address "Direct link to heading") The Avalanche L1 Deployment TX is ready for signing. ```bash *** Please sign Avalanche L1 creation hash on the ledger device *** ``` This activates a `Please review` window on the Ledger. Navigate to the Ledger's `APPROVE` window by using the Ledger's right button, and then authorize the request by pressing both left and right buttons. ```bash Avalanche L1 has been created with ID: 2qUKjvPx68Fgc1NMi8w4mtaBt5hStgBzPhsQrS1m7vSub2q9ew. Now creating blockchain... *** Please sign blockchain creation hash on the ledger device *** ``` After successful Avalanche L1 creation, the CLI asks the user to sign the blockchain creation TX. This activates a `Please review` window on the Ledger. Navigate to the Ledger's `APPROVE` window by using the Ledger's right button, and then authorize the request by pressing both left and right buttons. On success, the CLI provides Avalanche L1 deploy details. Because only one address has signed the chain creation TX so far, the CLI saves the TX to disk so you can continue the signing process with another command.
```bash +--------------------+----------------------------------------------------+ | DEPLOYMENT RESULTS | | +--------------------+----------------------------------------------------+ | Chain Name | testblockchain | +--------------------+----------------------------------------------------+ | Subnet ID | 2qUKjvPx68Fgc1NMi8w4mtaBt5hStgBzPhsQrS1m7vSub2q9ew | +--------------------+----------------------------------------------------+ | VM ID | rW1esjm6gy4BtGvxKMpHB2M28MJGFNsqHRY9AmnchdcgeB3ii | +--------------------+----------------------------------------------------+ 1 of 2 required Blockchain Creation signatures have been signed. Saving TX to disk to enable remaining signing. Path to export partially signed TX to: ``` Enter the name of a file to write to disk, such as `partiallySigned.txt`. This file shouldn't already exist. ```bash Path to export partially signed TX to: partiallySigned.txt Addresses remaining to sign the tx: P-avax1g7nkguzg8yju8cq3ndzc9lql2yg69s9ejqa2af Connect a ledger with one of the remaining addresses or choose a stored key and run the signing command, or send "partiallySigned.txt" to another user for signing. Signing command: avalanche transaction sign testblockchain --input-tx-filepath partiallySigned.txt ``` Gather Remaining Signatures and Issue the Avalanche L1 Deployment TX[​](#gather-remaining-signatures-and-issue-the-avalanche-l1-deployment-tx "Direct link to heading") ----------------------------------------------------------------------------------------------------------------------------------------------------------- So far, one address has signed the Avalanche L1 deployment TX, but you need N signatures. Your Avalanche L1 has not been fully deployed yet. To get the remaining signatures, you may connect a different Ledger to the same computer you've been working on. Alternatively, you may send the `partiallySigned.txt` file to other users to sign it themselves.
The remainder of this section assumes that you are working on a machine with access to both the remaining keys and the `partiallySigned.txt` file. ### Issue the Command to Sign the Chain Creation TX[​](#issue-the-command-to-sign-the-chain-creation-tx "Direct link to heading") Avalanche-CLI can detect the deployment network automatically. For `Mainnet` TXs, it uses your Ledger automatically. For `Fuji Testnet`, the CLI prompts the user to choose the signing mechanism. You can start the signing process with the `transaction sign` command: ```bash avalanche transaction sign testblockchain --input-tx-filepath partiallySigned.txt ``` ```bash Ledger address: P-avax1g7nkguzg8yju8cq3ndzc9lql2yg69s9ejqa2af *** Please sign TX hash on the ledger device *** ``` Next, the CLI starts a new signing process for the Avalanche L1 deployment TX. If the Ledger isn't the correct one, the following error should appear instead: ```bash Ledger address: P-avax1kdzq569g2c9urm9887cmldlsa3w3jhxe0knfy5 Error: wallet does not contain Avalanche L1 auth keys exit status 1 ``` This activates a `Please review` window on the Ledger. Navigate to the Ledger's `APPROVE` window by using the Ledger's right button, and then authorize the request by pressing both left and right buttons. Repeat this process until all required parties have signed the TX. You should see a message like this: ```bash All 2 required Tx signatures have been signed. Saving TX to disk to enable commit. Overwriting partiallySigned.txt Tx is fully signed, and ready to be committed Commit command: avalanche transaction commit testblockchain --input-tx-filepath partiallySigned.txt ``` Now, `partiallySigned.txt` contains a fully signed TX.
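The pass-the-file workflow above can be pictured with a toy model. This is not the real TX format or cryptography (actual signatures come from Ledger devices via `avalanche transaction sign`); it only illustrates the idea that each signer appends to a shared file until the threshold N is met:

```shell
# Toy model of N-of-M pass-the-file signing; a "signature" here is just
# the signer's address appended to a temp file.
THRESHOLD=2                 # N in the N-of-M multisig
TX_FILE=$(mktemp)

sign() { echo "signed-by:$1" >> "$TX_FILE"; }

sign "P-avax1kdzq569g2c9urm9887cmldlsa3w3jhxe0knfy5"   # first machine
sign "P-avax1g7nkguzg8yju8cq3ndzc9lql2yg69s9ejqa2af"   # second machine

if [ "$(wc -l < "$TX_FILE")" -ge "$THRESHOLD" ]; then
  echo "fully signed, ready to commit"
else
  echo "still waiting for signatures"
fi
```

The order of signers doesn't matter; what matters is that the same file accumulates signatures until the threshold is reached, at which point it can be committed.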
### Commit the Avalanche L1 Deployment TX[​](#commit-the-avalanche-l1-deployment-tx "Direct link to heading") To submit the fully signed TX, run: ```bash avalanche transaction commit testblockchain --input-tx-filepath partiallySigned.txt ``` The CLI recognizes the deployment network automatically and submits the TX appropriately. ```bash +--------------------+-------------------------------------------------------------------------------------+ | DEPLOYMENT RESULTS | | +--------------------+-------------------------------------------------------------------------------------+ | Chain Name | testblockchain | +--------------------+-------------------------------------------------------------------------------------+ | Subnet ID | 2qUKjvPx68Fgc1NMi8w4mtaBt5hStgBzPhsQrS1m7vSub2q9ew | +--------------------+-------------------------------------------------------------------------------------+ | VM ID | rW1esjm6gy4BtGvxKMpHB2M28MJGFNsqHRY9AmnchdcgeB3ii | +--------------------+-------------------------------------------------------------------------------------+ | Blockchain ID | 2fx9EF61C964cWBu55vcz9b7gH9LFBkPwoj49JTSHA6Soqqzoj | +--------------------+-------------------------------------------------------------------------------------+ | RPC URL | http://127.0.0.1:9650/ext/bc/2fx9EF61C964cWBu55vcz9b7gH9LFBkPwoj49JTSHA6Soqqzoj/rpc | +--------------------+-------------------------------------------------------------------------------------+ | P-Chain TXID | 2fx9EF61C964cWBu55vcz9b7gH9LFBkPwoj49JTSHA6Soqqzoj | +--------------------+-------------------------------------------------------------------------------------+ ``` Your Avalanche L1 successfully deployed with a multisig. Add Validators Using the Multisig[​](#add-validators-using-the-multisig "Direct link to heading") ------------------------------------------------------------------------------------------------- The `addValidator` command also requires use of the multisig.
Before starting, make sure your Ledger is connected and unlocked, with the Avalanche app running. ```bash avalanche blockchain addValidator testblockchain ``` ### Select Network[​](#select-network "Direct link to heading") First, specify the network. Select either `Fuji` or `Mainnet`: ```bash Use the arrow keys to navigate: ↓ ↑ → ← ? Choose a network to add validator to.: ▸ Fuji Mainnet ``` ### Choose Signing Keys[​](#choose-signing-keys "Direct link to heading") Then, similar to the `deploy` command, the command asks the user to select the N control keys needed to sign the TX. ```bash ✔ Mainnet Use the arrow keys to navigate: ↓ ↑ → ← ? Choose an Avalanche L1 auth key: ▸ P-avax1wryu62weky9qjlp40cpmnqf6ml2hytnagj5q28 P-avax1kdzq569g2c9urm9887cmldlsa3w3jhxe0knfy5 P-avax12gcy0xl0al6gcjrt0395xqlcuq078ml93wl5h8 P-avax1g7nkguzg8yju8cq3ndzc9lql2yg69s9ejqa2af P-avax1g4eryh40dtcsltmxn9zk925ny07gdq2xyjtf4g ``` ```bash ✔ Mainnet ✔ P-avax1kdzq569g2c9urm9887cmldlsa3w3jhxe0knfy5 ✔ P-avax1g7nkguzg8yju8cq3ndzc9lql2yg69s9ejqa2af Your subnet auth keys for add validator TX creation: [P-avax1kdzq569g2c9urm9887cmldlsa3w3jhxe0knfy5 P-avax1g7nkguzg8yju8cq3ndzc9lql2yg69s9ejqa2af]. ``` ### Finish Assembling the TX[​](#finish-assembling-the-tx "Direct link to heading") Take a look at [Add a Validator](/docs/tooling/avalanche-cli/create-deploy-avalanche-l1s/deploy-on-mainnet#add-a-validator) for additional help issuing this transaction. If setting up a multisig, don't set your validator start time to one minute from now. Finishing the signing process takes significantly longer when using a multisig. ```bash Next, we need the NodeID of the validator you want to whitelist. Check https://build.avax.network/docs/rpcs/other/info-rpc#info-getnodeid for instructions about how to query the NodeID from your node (Edit host IP address and port to match your deployment, if needed).
What is the NodeID of the validator you'd like to whitelist?: NodeID-7Xhw2mDxuDS44j42TCB6U5579esbSt3Lg ✔ Default (20) When should your validator start validating? If your validator is not ready by this time, Avalanche L1 downtime can occur. ✔ Custom When should the validator start validating? Enter a UTC datetime in 'YYYY-MM-DD HH:MM:SS' format: 2022-11-22 23:00:00 ✔ Until primary network validator expires NodeID: NodeID-7Xhw2mDxuDS44j42TCB6U5579esbSt3Lg Network: Local Network Start time: 2022-11-22 23:00:00 End time: 2023-11-22 15:57:27 Weight: 20 Inputs complete, issuing transaction to add the provided validator information... ``` ```bash Ledger address: P-avax1kdzq569g2c9urm9887cmldlsa3w3jhxe0knfy5 *** Please sign add validator hash on the ledger device *** ``` After that, the command shows the connected Ledger's address, and asks the user to sign the TX with the Ledger. ```bash Partial TX created 1 of 2 required Add Validator signatures have been signed. Saving TX to disk to enable remaining signing. Path to export partially signed TX to: ``` Because you've set up a multisig, the TX isn't fully signed, and the command asks for a file to write to. Use something like `partialAddValidatorTx.txt`. ```bash Path to export partially signed TX to: partialAddValidatorTx.txt Addresses remaining to sign the tx: P-avax1g7nkguzg8yju8cq3ndzc9lql2yg69s9ejqa2af Connect a Ledger with one of the remaining addresses or choose a stored key and run the signing command, or send "partialAddValidatorTx.txt" to another user for signing. Signing command: avalanche transaction sign testblockchain --input-tx-filepath partialAddValidatorTx.txt ``` Sign and Commit the Add Validator TX[​](#sign-and-commit-the-add-validator-tx "Direct link to heading") ------------------------------------------------------------------------------------------------------- The process is very similar to signing the Avalanche L1 deployment TX. So far, one address has signed the TX, but you need N signatures.
To get the remaining signatures, you may connect a different Ledger to the same computer you've been working on. Alternatively, you may send the `partialAddValidatorTx.txt` file to other users to sign it themselves. The remainder of this section assumes that you are working on a machine with access to both the remaining keys and the `partialAddValidatorTx.txt` file. ### Issue the Command to Sign the Add Validator TX[​](#issue-the-command-to-sign-the-add-validator-tx "Direct link to heading") Avalanche-CLI can detect the deployment network automatically. For `Mainnet` TXs, it uses your Ledger automatically. For `Fuji Testnet`, the CLI prompts the user to choose the signing mechanism. ```bash avalanche transaction sign testblockchain --input-tx-filepath partialAddValidatorTx.txt ``` ```bash Ledger address: P-avax1g7nkguzg8yju8cq3ndzc9lql2yg69s9ejqa2af *** Please sign TX hash on the ledger device *** ``` Next, the command starts a new signing process for the Add Validator TX. This activates a `Please review` window on the Ledger. Navigate to the Ledger's `APPROVE` window by using the Ledger's right button, and then authorize the request by pressing both left and right buttons. Repeat this process until all required parties have signed the TX. You should see a message like this: ```bash All 2 required Tx signatures have been signed. Saving TX to disk to enable commit. Overwriting partialAddValidatorTx.txt Tx is fully signed, and ready to be committed Commit command: avalanche transaction commit testblockchain --input-tx-filepath partialAddValidatorTx.txt ``` Now, `partialAddValidatorTx.txt` contains a fully signed TX.
### Issue the Command to Commit the Add Validator TX[​](#issue-the-command-to-commit-the-add-validator-tx "Direct link to heading") To submit the fully signed TX, run: ```bash avalanche transaction commit testblockchain --input-tx-filepath partialAddValidatorTx.txt ``` The CLI recognizes the deployment network automatically and submits the TX appropriately. ```bash Transaction successful, transaction ID: K7XNSwcmgjYX7BEdtFB3hEwQc6YFKRq9g7hAUPhW4J5bjhEJG ``` You've successfully added the validator to the Avalanche L1. # Teleporter on Devnet (/docs/tooling/avalanche-cli/cross-chain/teleporter-devnet) --- title: Teleporter on Devnet description: This how-to guide focuses on deploying Teleporter-enabled Avalanche L1s to a Devnet. --- By the end of this tutorial, you will have created a Devnet, deployed two Avalanche L1s in it, and enabled them to cross-communicate with each other and with the C-Chain through Teleporter and the underlying Warp technology. For more information on cross-chain messaging through Teleporter and Warp, check: - [Cross Chain References](/docs/cross-chain) Note that currently only [Subnet-EVM](https://github.com/ava-labs/subnet-evm) and [Subnet-EVM-Based](/docs/avalanche-l1s/evm-configuration/customize-avalanche-l1) virtual machines support Teleporter. ## Prerequisites Before we begin, you will need to have: - Created an AWS account and have an updated AWS `credentials` file in your home directory with a \[default\] profile Note: the tutorial uses AWS hosts, but Devnets can also be created and operated in other supported cloud providers, such as GCP. Create Avalanche L1s Configurations[​](#create-avalanche-l1s-configurations "Direct link to heading") ----------------------------------------------------------------------------------------- For this section, we will follow these [steps](/docs/tooling/avalanche-cli/cross-chain/teleporter-local-network#create-avalanche-l1s-configurations) to create two Teleporter-enabled Avalanche L1s, `` and ``.
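The prerequisites above mention an AWS `credentials` file with a `[default]` profile. As a minimal sketch (the key values below are placeholders, not real IAM credentials), such a file can be created like this:

```shell
# Create ~/.aws/credentials with a [default] profile.
# Substitute your own IAM access key pair for the placeholder values.
mkdir -p ~/.aws
cat > ~/.aws/credentials <<'EOF'
[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
EOF
```

With this profile in place, the CLI should be able to provision the EC2 hosts used in the rest of this guide.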
Create a Devnet and Deploy an Avalanche L1 in It[​](#create-a-devnet-and-deploy-a-avalanche-l1-in-it "Direct link to heading") ----------------------------------------------------------------------------------------------------------------- Let's use the `devnet wiz` command to create a devnet `` and deploy `` in it. The devnet will be created in the `us-east-1` region of AWS, and will consist of 5 validators only. ```bash avalanche node devnet wiz --aws --node-type default --region us-east-1 --num-validators 5 --num-apis 0 --enable-monitoring=false --default-validator-params Creating the devnet... Creating new EC2 instance(s) on AWS... ... Deploying [chain1] to Cluster ... configuring AWM RElayer on host i-0f1815c016b555fcc Setting the nodes as subnet trackers ... Setting up teleporter on subnet Teleporter Messenger successfully deployed to chain1 (0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf) Teleporter Registry successfully deployed to chain1 (0xb623C4495220C603D0A939D32478F55891a61750) Teleporter Messenger successfully deployed to c-chain (0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf) Teleporter Registry successfully deployed to c-chain (0x5DB9A7629912EBF95876228C24A848de0bfB43A9) Starting AWM Relayer Service setting AWM Relayer on host i-0f1815c016b555fcc to relay subnet chain1 updating configuration file ~/.avalanche-cli/nodes/i-0f1815c016b555fcc/services/awm-relayer/awm-relayer-config.json Devnet is successfully created and is now validating subnet chain1! 
Subnet RPC URL: http://67.202.23.231:9650/ext/bc/fqcM24LNb3kTV7KD1mAvUJXYy5XunwP8mrE44YuNwPjgZHY6p/rpc ✓ Cluster information YAML file can be found at ~/.avalanche-cli/nodes/inventories//clusterInfo.yaml at local host ``` Notice some details here: - Two smart contracts are deployed to the Avalanche L1: Teleporter Messenger and Teleporter Registry - Both Teleporter smart contracts are also deployed to `C-Chain` - [AWM Teleporter Relayer](https://github.com/ava-labs/awm-relayer) is installed and configured as a service on one of the nodes (a Relayer [listens](/docs/cross-chain/teleporter/overview#data-flow) for new messages being generated on a source Avalanche L1 and sends them to the destination Avalanche L1) The CLI configures the Relayer to enable every Avalanche L1 to send messages to all other Avalanche L1s. If you add more Avalanche L1s to the Devnet, the Relayer will be automatically reconfigured. Checking Devnet Configuration and Relayer Logs[​](#checking-devnet-configuration-and-relayer-logs "Direct link to heading") --------------------------------------------------------------------------------------------------------------------------- Execute the `node list` command to get a list of the Devnet nodes: ```bash avalanche node list Cluster "" (Devnet) Node i-0f1815c016b555fcc (NodeID-91PGQ7keavfSV1XVFva2WsQXWLWZqqqKe) 67.202.23.231 [Validator,Relayer] Node i-026392a651571232c (NodeID-AkPyyTs9e9nPGShdSoxdvWYZ6X2zYoyrK) 52.203.183.68 [Validator] Node i-0d1b98d5d941d6002 (NodeID-ByEe7kuwtrPStmdMgY1JiD39pBAuFY2mS) 50.16.235.194 [Validator] Node i-0c291f54bb38c2984 (NodeID-8SE2CdZJExwcS14PYEqr3VkxFyfDHKxKq) 52.45.0.56 [Validator] Node i-049916e2f35231c29 (NodeID-PjQY7xhCGaB8rYbkXYddrr1mesYi29oFo) 3.214.163.110 [Validator] ``` Notice that, in this case, `i-0f1815c016b555fcc` was set as the Relayer. This host contains a `systemd` service called `awm-relayer` that can be used to check the Relayer logs and to control its execution status.
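Because `awm-relayer` is a regular `systemd` service, you can also query its current status through the same `avalanche node ssh` mechanism used for the logs. This is a sketch; it assumes the host ID from the `node list` output above and `sudo` access on that host:

```bash
# Check whether the awm-relayer service is active on the Relayer host
avalanche node ssh i-0f1815c016b555fcc "sudo systemctl status awm-relayer --no-pager"
```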
To view the Relayer logs, the following command can be used: ```bash avalanche node ssh i-0f1815c016b555fcc "journalctl -u awm-relayer --no-pager" [Node i-0f1815c016b555fcc (NodeID-91PGQ7keavfSV1XVFva2WsQXWLWZqqqKe) 67.202.23.231 [Validator,Relayer]] Warning: Permanently added '67.202.23.231' (ED25519) to the list of known hosts. -- Logs begin at Fri 2024-04-05 14:11:43 UTC, end at Fri 2024-04-05 14:30:24 UTC. -- Apr 05 14:15:06 ip-172-31-47-187 systemd[1]: Started AWM Relayer systemd service. Apr 05 14:15:07 ip-172-31-47-187 awm-relayer[6886]: {"level":"info","timestamp":"2024-04-05T14:15:07.018Z","logger":"awm-relayer","caller":"main/main.go:66","msg":"Initializing awm-relayer"} Apr 05 14:15:07 ip-172-31-47-187 awm-relayer[6886]: {"level":"info","timestamp":"2024-04-05T14:15:07.018Z","logger":"awm-relayer","caller":"main/main.go:71","msg":"Set config options."} Apr 05 14:15:07 ip-172-31-47-187 awm-relayer[6886]: {"level":"info","timestamp":"2024-04-05T14:15:07.018Z","logger":"awm-relayer","caller":"main/main.go:78","msg":"Initializing destination clients"} Apr 05 14:15:07 ip-172-31-47-187 awm-relayer[6886]: {"level":"info","timestamp":"2024-04-05T14:15:07.021Z","logger":"awm-relayer","caller":"main/main.go:97","msg":"Initializing app request network"} Apr 05 14:15:08 ip-172-31-47-187 awm-relayer[6886]: {"level":"info","timestamp":"2024-04-05T14:15:08.159Z","logger":"awm-relayer","caller":"main/main.go:309","msg":"starting metrics server...","port":9090} Apr 05 14:15:08 ip-172-31-47-187 awm-relayer[6886]: {"level":"info","timestamp":"2024-04-05T14:15:08.160Z","logger":"awm-relayer","caller":"main/main.go:251","msg":"Creating relayer","originBlockchainID":"fqcM24LNb3kTV7KD1mAvUJXYy5XunwP8mrE44YuNwPjgZHY6p"} Apr 05 14:15:08 ip-172-31-47-187 awm-relayer[6886]: {"level":"info","timestamp":"2024-04-05T14:15:08.160Z","logger":"awm-relayer","caller":"main/main.go:251","msg":"Creating relayer","originBlockchainID":"2EfJg86if9Ka5Ag73JRfoqWz4EGuFwtemaNf4XiBBpUW4YngS6"} Apr 
05 14:15:08 ip-172-31-47-187 awm-relayer[6886]: {"level":"info","timestamp":"2024-04-05T14:15:08.160Z","logger":"awm-relayer","caller":"relayer/relayer.go:114","msg":"Creating relayer","subnetID":"11111111111111111111111111111111LpoYY","subnetIDHex":"0000000000000000000000000000000000000000000000000000000000000000","blockchainID":"2EfJg86if9Ka5Ag73JRfoqWz4EGuFwtemaNf4XiBBpUW4YngS6","blockchainIDHex":"a2b6b947cf2b9bf6df03c8caab08e38ab951d8b120b9c37265d9be01d86bb170"} Apr 05 14:15:08 ip-172-31-47-187 awm-relayer[6886]: {"level":"info","timestamp":"2024-04-05T14:15:08.160Z","logger":"awm-relayer","caller":"relayer/relayer.go:114","msg":"Creating relayer","subnetID":"giY8tswWgZmcAWzPkoNrmjjrykited7GJ9799SsFzTiq5a1ML","subnetIDHex":"5a2e2d87d74b4ec62fdd6626e7d36a44716484dfcc721aa4f2168e8a61af63af","blockchainID":"fqcM24LNb3kTV7KD1mAvUJXYy5XunwP8mrE44YuNwPjgZHY6p","blockchainIDHex":"582fc7bd55472606c260668213bf1b6d291df776c9edf7e042980a84cce7418a"} Apr 05 14:15:08 ip-172-31-47-187 awm-relayer[6886]: {"level":"info","timestamp":"2024-04-05T14:15:08.171Z","logger":"awm-relayer","caller":"evm/subscriber.go:247","msg":"Successfully subscribed","blockchainID":"2EfJg86if9Ka5Ag73JRfoqWz4EGuFwtemaNf4XiBBpUW4YngS6"} Apr 05 14:15:08 ip-172-31-47-187 awm-relayer[6886]: {"level":"info","timestamp":"2024-04-05T14:15:08.171Z","logger":"awm-relayer","caller":"relayer/relayer.go:161","msg":"processed-missed-blocks set to false, starting processing from chain head","blockchainID":"2EfJg86if9Ka5Ag73JRfoqWz4EGuFwtemaNf4XiBBpUW4YngS6"} Apr 05 14:15:08 ip-172-31-47-187 awm-relayer[6886]: {"level":"info","timestamp":"2024-04-05T14:15:08.172Z","logger":"awm-relayer","caller":"relayer/message_relayer.go:662","msg":"Updating latest processed block in database","relayerID":"0xea06381426934ec1800992f41615b9d362c727ad542f6351dbfa7ad2849a35bf","latestBlock":6} Apr 05 14:15:08 ip-172-31-47-187 awm-relayer[6886]: 
{"level":"info","timestamp":"2024-04-05T14:15:08.173Z","logger":"awm-relayer","caller":"relayer/message_relayer.go:662","msg":"Updating latest processed block in database","relayerID":"0x175e14327136d57fe22d4bdd295ff14bea8a7d7ab1884c06a4d9119b9574b9b3","latestBlock":6} Apr 05 14:15:08 ip-172-31-47-187 awm-relayer[6886]: {"level":"info","timestamp":"2024-04-05T14:15:08.173Z","logger":"awm-relayer","caller":"main/main.go:272","msg":"Created relayer","blockchainID":"2EfJg86if9Ka5Ag73JRfoqWz4EGuFwtemaNf4XiBBpUW4YngS6"} Apr 05 14:15:08 ip-172-31-47-187 awm-relayer[6886]: {"level":"info","timestamp":"2024-04-05T14:15:08.173Z","logger":"awm-relayer","caller":"main/main.go:295","msg":"Relayer initialized. Listening for messages to relay.","originBlockchainID":"2EfJg86if9Ka5Ag73JRfoqWz4EGuFwtemaNf4XiBBpUW4YngS6"} Apr 05 14:15:08 ip-172-31-47-187 awm-relayer[6886]: {"level":"info","timestamp":"2024-04-05T14:15:08.178Z","logger":"awm-relayer","caller":"evm/subscriber.go:247","msg":"Successfully subscribed","blockchainID":"fqcM24LNb3kTV7KD1mAvUJXYy5XunwP8mrE44YuNwPjgZHY6p"} Apr 05 14:15:08 ip-172-31-47-187 awm-relayer[6886]: {"level":"info","timestamp":"2024-04-05T14:15:08.178Z","logger":"awm-relayer","caller":"relayer/relayer.go:161","msg":"processed-missed-blocks set to false, starting processing from chain head","blockchainID":"fqcM24LNb3kTV7KD1mAvUJXYy5XunwP8mrE44YuNwPjgZHY6p"} Apr 05 14:15:08 ip-172-31-47-187 awm-relayer[6886]: {"level":"info","timestamp":"2024-04-05T14:15:08.179Z","logger":"awm-relayer","caller":"relayer/message_relayer.go:662","msg":"Updating latest processed block in database","relayerID":"0xe584ccc0df44506255811f6b54375e46abd5db40a4c84fd9235a68f7b69c6f06","latestBlock":6} Apr 05 14:15:08 ip-172-31-47-187 awm-relayer[6886]: {"level":"info","timestamp":"2024-04-05T14:15:08.179Z","logger":"awm-relayer","caller":"relayer/message_relayer.go:662","msg":"Updating latest processed block in 
database","relayerID":"0x70f14d33bde4716928c5c4723d3969942f9dfd1f282b64ffdf96f5ac65403814","latestBlock":6} Apr 05 14:15:08 ip-172-31-47-187 awm-relayer[6886]: {"level":"info","timestamp":"2024-04-05T14:15:08.180Z","logger":"awm-relayer","caller":"main/main.go:272","msg":"Created relayer","blockchainID":"fqcM24LNb3kTV7KD1mAvUJXYy5XunwP8mrE44YuNwPjgZHY6p"} Apr 05 14:15:08 ip-172-31-47-187 awm-relayer[6886]: {"level":"info","timestamp":"2024-04-05T14:15:08.180Z","logger":"awm-relayer","caller":"main/main.go:295","msg":"Relayer initialized. Listening for messages to relay.","originBlockchainID":"fqcM24LNb3kTV7KD1mAvUJXYy5XunwP8mrE44YuNwPjgZHY6p"} ``` Deploying the Second Avalanche L1[​](#deploying-the-second-avalanche-l1 "Direct link to heading") ------------------------------------------------------------------------------------- Let's use the `devnet wiz` command again to deploy ``. This time, the two Teleporter contracts will not be deployed to the C-Chain, as they were already deployed alongside the first Avalanche L1. ```bash avalanche node devnet wiz --default-validator-params Adding subnet into existing devnet ... ... Deploying [chain2] to Cluster ... Stopping AWM Relayer Service Setting the nodes as subnet trackers ...
Setting up teleporter on subnet Teleporter Messenger successfully deployed to chain2 (0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf) Teleporter Registry successfully deployed to chain2 (0xb623C4495220C603D0A939D32478F55891a61750) Teleporter Messenger has already been deployed to c-chain Starting AWM Relayer Service setting AWM Relayer on host i-0f1815c016b555fcc to relay subnet chain2 updating configuration file ~/.avalanche-cli/nodes/i-0f1815c016b555fcc/services/awm-relayer/awm-relayer-config.json Devnet is now validating subnet chain2 Subnet RPC URL: http://67.202.23.231:9650/ext/bc/7gKt6evRnkA2uVHRfmk9WrH3dYZH9gEVVxDAknwtjvtaV3XuQ/rpc ✓ Cluster information YAML file can be found at ~/.avalanche-cli/nodes/inventories//clusterInfo.yaml at local host ``` Verify Teleporter Is Successfully Set Up[​](#verify-teleporter-is-successfully-set-up "Direct link to heading") --------------------------------------------------------------------------------------------------------------- To verify that Teleporter is successfully set up, let's send a couple of cross-chain messages: ```bash avalanche teleporter msg C-Chain chain1 "Hello World" --cluster Delivering message "this is a message" to source subnet "C-Chain" (2EfJg86if9Ka5Ag73JRfoqWz4EGuFwtemaNf4XiBBpUW4YngS6) Waiting for message to be received at destination subnet subnet "chain1" (fqcM24LNb3kTV7KD1mAvUJXYy5XunwP8mrE44YuNwPjgZHY6p) Message successfully Teleported! ``` ```bash avalanche teleporter msg chain2 chain1 "Hello World" --cluster Delivering message "this is a message" to source subnet "chain2" (29WP91AG7MqPUFEW2YwtKnsnzVrRsqcWUpoaoSV1Q9DboXGf4q) Waiting for message to be received at destination subnet subnet "chain1" (fqcM24LNb3kTV7KD1mAvUJXYy5XunwP8mrE44YuNwPjgZHY6p) Message successfully Teleported! ``` You have Teleport-ed your first message in the Devnet!
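Messages can flow in either direction between any of the deployed chains. For instance, following the same command pattern as above (the cluster name is elided here, as in the rest of this guide), a message can be sent back from `chain1` to `chain2`:

```bash
avalanche teleporter msg chain1 chain2 "Hello back" --cluster
```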
Obtaining Information on Teleporter Deploys[​](#obtaining-information-on-teleporter-deploys "Direct link to heading") --------------------------------------------------------------------------------------------------------------------- ### Obtaining Avalanche L1 Information[​](#obtaining-avalanche-l1-information "Direct link to heading") By executing `blockchain describe` on a Teleporter enabled Avalanche L1, the following relevant information can be found: - Blockchain RPC URL - Blockchain ID in cb58 format - Blockchain ID in plain hex format - Teleporter Messenger address - Teleporter Registry address Let's get the information for ``: ```bash avalanche blockchain describe _____ _ _ _ | __ \ | | (_) | | | | | ___| |_ __ _ _| |___ | | | |/ _ \ __/ _ | | / __| | |__| | __/ || (_| | | \__ \ |_____/ \___|\__\__,_|_|_|___/ +--------------------------------+----------------------------------------------------------------------------------------+ | PARAMETER | VALUE | +--------------------------------+----------------------------------------------------------------------------------------+ | Blockchain Name | chain1 | +--------------------------------+----------------------------------------------------------------------------------------+ | ChainID | 1 | +--------------------------------+----------------------------------------------------------------------------------------+ | Token Name | TOKEN1 Token | +--------------------------------+----------------------------------------------------------------------------------------+ | Token Symbol | TOKEN1 | +--------------------------------+----------------------------------------------------------------------------------------+ | VM Version | v0.6.3 | +--------------------------------+----------------------------------------------------------------------------------------+ | VM ID | srEXiWaHjFEgKSgK2zBgnWQUVEy2MZA7UUqjqmBSS7MZYSCQ5 | 
+--------------------------------+----------------------------------------------------------------------------------------+ | Cluster SubnetID | giY8tswWgZmcAWzPkoNrmjjrykited7GJ9799SsFzTiq5a1ML | +--------------------------------+----------------------------------------------------------------------------------------+ | Cluster RPC URL | http://67.202.23.231:9650/ext/bc/fqcM24LNb3kTV7KD1mAvUJXYy5XunwP8mrE44YuNwPjgZHY6p/rpc | +--------------------------------+----------------------------------------------------------------------------------------+ | Cluster | fqcM24LNb3kTV7KD1mAvUJXYy5XunwP8mrE44YuNwPjgZHY6p | | BlockchainID | | + +----------------------------------------------------------------------------------------+ | | 0x582fc7bd55472606c260668213bf1b6d291df776c9edf7e042980a84cce7418a | | | | +--------------------------------+----------------------------------------------------------------------------------------+ | Cluster Teleporter| 0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf | | Messenger Address | | +--------------------------------+----------------------------------------------------------------------------------------+ | Cluster Teleporter| 0xb623C4495220C603D0A939D32478F55891a61750 | | Registry Address | | +--------------------------------+----------------------------------------------------------------------------------------+ ... 
``` ### Obtaining C-Chain Information[​](#obtaining-c-chain-information "Direct link to heading") Similar information can be found for C-Chain by using `primary describe`: ```bash avalanche primary describe --cluster _____ _____ _ _ _____ / ____| / ____| | (_) | __ \ | | ______| | | |__ __ _ _ _ __ | |__) |_ _ _ __ __ _ _ __ ___ ___ | | |______| | | '_ \ / _ | | '_ \ | ___/ _ | '__/ _ | '_ _ \/ __| | |____ | |____| | | | (_| | | | | | | | | (_| | | | (_| | | | | | \__ \ \_____| \_____|_| |_|\__,_|_|_| |_| |_| \__,_|_| \__,_|_| |_| |_|___/ +------------------------------+--------------------------------------------------------------------+ | PARAMETER | VALUE | +------------------------------+--------------------------------------------------------------------+ | RPC URL | http://67.202.23.231:9650/ext/bc/C/rpc | +------------------------------+--------------------------------------------------------------------+ | EVM Chain ID | 43112 | +------------------------------+--------------------------------------------------------------------+ | TOKEN SYMBOL | AVAX | +------------------------------+--------------------------------------------------------------------+ | Address | 0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC | +------------------------------+--------------------------------------------------------------------+ | Balance | 49999489.815751426 | +------------------------------+--------------------------------------------------------------------+ | Private Key | 56289e99c94b6912bfc12adc093c9b51124f0dc54ac7a766b2bc5ccf558d8027 | +------------------------------+--------------------------------------------------------------------+ | BlockchainID | 2EfJg86if9Ka5Ag73JRfoqWz4EGuFwtemaNf4XiBBpUW4YngS6 | + +--------------------------------------------------------------------+ | | 0xa2b6b947cf2b9bf6df03c8caab08e38ab951d8b120b9c37265d9be01d86bb170 | +------------------------------+--------------------------------------------------------------------+ | ICM Messenger Address 
| 0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf | +------------------------------+--------------------------------------------------------------------+ | ICM Registry Address | 0x5DB9A7629912EBF95876228C24A848de0bfB43A9 | +------------------------------+--------------------------------------------------------------------+ ``` Controlling Relayer Execution[​](#controlling-relayer-execution "Direct link to heading") ----------------------------------------------------------------------------------------- The CLI provides two commands to remotely control the Relayer's execution: ```bash avalanche interchain relayer stop --cluster ✓ Remote AWM Relayer on i-0f1815c016b555fcc successfully stopped ``` ```bash avalanche interchain relayer start --cluster ✓ Remote AWM Relayer on i-0f1815c016b555fcc successfully started ``` # Teleporter on Local Network (/docs/tooling/avalanche-cli/cross-chain/teleporter-local-network) --- title: Teleporter on Local Network description: This how-to guide focuses on deploying Teleporter-enabled Avalanche L1s to a local Avalanche network. --- By the end of this tutorial, you will have created and deployed two Avalanche L1s to the local network and enabled them to cross-communicate with each other and with the local C-Chain (through Teleporter and the underlying Warp technology). For more information on cross-chain messaging through Teleporter and Warp, check: - [Cross Chain References](/docs/cross-chain) Note that currently only [Subnet-EVM](https://github.com/ava-labs/subnet-evm) and [Subnet-EVM-Based](/docs/avalanche-l1s/evm-configuration/customize-avalanche-l1) virtual machines support Teleporter.
## Prerequisites - [Avalanche-CLI installed](/docs/tooling/avalanche-cli) ## Create Avalanche L1 Configurations Let's create an Avalanche L1 called `` with the latest Subnet-EVM version, a chain ID of 1, TOKEN1 as the token name, and with default Subnet-EVM parameters (more information regarding Avalanche L1 creation can be found [here](/docs/tooling/avalanche-cli#create-your-avalanche-l1-configuration)): ```bash avalanche blockchain create --evm --latest\ --evm-chain-id 1 --evm-token TOKEN1 --evm-defaults creating genesis for configuring airdrop to stored key "subnet__airdrop" with address 0x0EF8151A3e6ad1d4e17C8ED4128b20EB5edc58B1 loading stored key "cli-teleporter-deployer" for teleporter deploys (evm address, genesis balance) = (0xE932784f56774879e03F3624fbeC6261154ec711, 600000000000000000000) using latest teleporter version (v1.0.0) ✓ Successfully created subnet configuration ``` Notice that, by default, Teleporter is enabled and a stored key is created to fund Teleporter-related operations (that is, deploying the Teleporter smart contracts and funding the Teleporter Relayer). To disable Teleporter in your Avalanche L1, use the flag `--teleporter=false` when creating the Avalanche L1. To disable the Relayer in your Avalanche L1, use the flag `--relayer=false` when creating the Avalanche L1.
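For example, the two flags above can be combined in a single create command to produce an Avalanche L1 with both Teleporter and the Relayer disabled. This is a sketch; the chain name `mychain` and token `TOKEN3` are illustrative, not values from this guide:

```bash
# Hypothetical example: create an L1 without Teleporter or the Relayer
avalanche blockchain create mychain --evm --latest \
  --evm-chain-id 3 --evm-token TOKEN3 --evm-defaults \
  --teleporter=false --relayer=false
```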
Now let's create a second Avalanche L1 called ``, with similar settings: ```bash avalanche blockchain create --evm --latest\ --evm-chain-id 2 --evm-token TOKEN2 --evm-defaults creating genesis for configuring airdrop to stored key "subnet__airdrop" with address 0x0EF815FFFF6ad1d4e17C8ED4128b20EB5edAABBB loading stored key "cli-teleporter-deployer" for teleporter deploys (evm address, genesis balance) = (0xE932784f56774879e03F3624fbeC6261154ec711, 600000000000000000000) using latest teleporter version (v1.0.0) ✓ Successfully created subnet configuration ``` Deploy the Avalanche L1s to Local Network[​](#deploy-the-avalanche-l1s-to-local-network "Direct link to heading") ----------------------------------------------------------------------------------------------------- Let's deploy ``: ```bash avalanche blockchain deploy --local Deploying [] to Local Network Backend controller started, pid: 149427, output at: ~/.avalanche-cli/runs/server_20240229_165923/avalanche-cli-backend.log Booting Network. Wait until healthy... Node logs directory: ~/.avalanche-cli/runs/network_20240229_165923/node/logs Network ready to use. Deploying Blockchain. Wait until network acknowledges... Teleporter Messenger successfully deployed to c-chain (0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf) Teleporter Registry successfully deployed to c-chain (0x17aB05351fC94a1a67Bf3f56DdbB941aE6c63E25) Teleporter Messenger successfully deployed to (0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf) Teleporter Registry successfully deployed to (0x9EDc4cB4E781413b1b82CC3A92a60131FC111F58) Using latest awm-relayer version (v1.1.0) Executing AWM-Relayer... Blockchain ready to use. 
Local network node endpoints: +-------+-----------+------------------------------------------------------------------------------------+--------------------------------------------+ | NODE | VM | URL | ALIAS URL | +-------+-----------+------------------------------------------------------------------------------------+--------------------------------------------+ | node1 | | http://127.0.0.1:9650/ext/bc/MzN4AbtFzQ3eKqPhFaDpwCMJmagciWSCgghkZx6YeC6jRdvb6/rpc | http://127.0.0.1:9650/ext/bc//rpc | +-------+-----------+------------------------------------------------------------------------------------+--------------------------------------------+ | node2 | | http://127.0.0.1:9652/ext/bc/MzN4AbtFzQ3eKqPhFaDpwCMJmagciWSCgghkZx6YeC6jRdvb6/rpc | http://127.0.0.1:9652/ext/bc//rpc | +-------+-----------+------------------------------------------------------------------------------------+--------------------------------------------+ | node3 | | http://127.0.0.1:9654/ext/bc/MzN4AbtFzQ3eKqPhFaDpwCMJmagciWSCgghkZx6YeC6jRdvb6/rpc | http://127.0.0.1:9654/ext/bc//rpc | +-------+-----------+------------------------------------------------------------------------------------+--------------------------------------------+ | node4 | | http://127.0.0.1:9656/ext/bc/MzN4AbtFzQ3eKqPhFaDpwCMJmagciWSCgghkZx6YeC6jRdvb6/rpc | http://127.0.0.1:9656/ext/bc//rpc | +-------+-----------+------------------------------------------------------------------------------------+--------------------------------------------+ | node5 | | http://127.0.0.1:9658/ext/bc/MzN4AbtFzQ3eKqPhFaDpwCMJmagciWSCgghkZx6YeC6jRdvb6/rpc | http://127.0.0.1:9658/ext/bc//rpc | +-------+-----------+------------------------------------------------------------------------------------+--------------------------------------------+ Browser Extension connection details (any node URL from above works): RPC URL: http://127.0.0.1:9650/ext/bc/MzN4AbtFzQ3eKqPhFaDpwCMJmagciWSCgghkZx6YeC6jRdvb6/rpc Funded address: 
0x0EF8151A3e6ad1d4e17C8ED4128b20EB5edc58B1 with 1000000 (10^18) - private key: 16289399c9466912ffffffdc093c9b51124f0dc54ac7a766b2bc5ccf558d8eee Network name: Chain ID: 1 Currency Symbol: TOKEN1 ``` Notice some details here: - Two smart contracts are deployed to each Avalanche L1: Teleporter Messenger and Teleporter Registry - Both Teleporter smart contracts are also deployed to `C-Chain` in the Local Network - [AWM Teleporter Relayer](https://github.com/ava-labs/awm-relayer) is installed, configured, and executed in the background (a Relayer [listens](/docs/cross-chain/teleporter/overview#data-flow) for new messages being generated on a source Avalanche L1 and sends them to the destination Avalanche L1) The CLI configures the Relayer to enable every Avalanche L1 to send messages to all other Avalanche L1s. If you add more Avalanche L1s, the Relayer will be automatically reconfigured. When deploying Avalanche L1 ``, the two Teleporter contracts will not be deployed to the C-Chain in the Local Network, as they were already deployed with the first Avalanche L1. ```bash avalanche blockchain deploy --local Deploying [] to Local Network Deploying Blockchain. Wait until network acknowledges... Teleporter Messenger has already been deployed to c-chain Teleporter Messenger successfully deployed to (0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf) Teleporter Registry successfully deployed to (0x9EDc4cB4E781413b1b82CC3A92a60131FC111F58) Using latest awm-relayer version (v1.1.0) Executing AWM-Relayer... Blockchain ready to use.
Local network node endpoints: +-------+-----------+-------------------------------------------------------------------------------------+--------------------------------------------+ | NODE | VM | URL | ALIAS URL | +-------+-----------+-------------------------------------------------------------------------------------+--------------------------------------------+ | node1 | | http://127.0.0.1:9650/ext/bc/2tVGwEQmeXtdnFURW1YSq5Yf4jbJPfTBfVcu68KWHdHe5e5gX5/rpc | http://127.0.0.1:9650/ext/bc//rpc | +-------+-----------+-------------------------------------------------------------------------------------+--------------------------------------------+ | node1 | | http://127.0.0.1:9650/ext/bc/MzN4AbtFzQ3eKqPhFaDpwCMJmagciWSCgghkZx6YeC6jRdvb6/rpc | http://127.0.0.1:9650/ext/bc//rpc | +-------+-----------+-------------------------------------------------------------------------------------+--------------------------------------------+ | node2 | | http://127.0.0.1:9652/ext/bc/2tVGwEQmeXtdnFURW1YSq5Yf4jbJPfTBfVcu68KWHdHe5e5gX5/rpc | http://127.0.0.1:9652/ext/bc//rpc | +-------+-----------+-------------------------------------------------------------------------------------+--------------------------------------------+ | node2 | | http://127.0.0.1:9652/ext/bc/MzN4AbtFzQ3eKqPhFaDpwCMJmagciWSCgghkZx6YeC6jRdvb6/rpc | http://127.0.0.1:9652/ext/bc//rpc | +-------+-----------+-------------------------------------------------------------------------------------+--------------------------------------------+ | node3 | | http://127.0.0.1:9654/ext/bc/2tVGwEQmeXtdnFURW1YSq5Yf4jbJPfTBfVcu68KWHdHe5e5gX5/rpc | http://127.0.0.1:9654/ext/bc//rpc | +-------+-----------+-------------------------------------------------------------------------------------+--------------------------------------------+ | node3 | | http://127.0.0.1:9654/ext/bc/MzN4AbtFzQ3eKqPhFaDpwCMJmagciWSCgghkZx6YeC6jRdvb6/rpc | http://127.0.0.1:9654/ext/bc//rpc | 
+-------+-----------+-------------------------------------------------------------------------------------+--------------------------------------------+ | node4 | | http://127.0.0.1:9656/ext/bc/2tVGwEQmeXtdnFURW1YSq5Yf4jbJPfTBfVcu68KWHdHe5e5gX5/rpc | http://127.0.0.1:9656/ext/bc//rpc | +-------+-----------+-------------------------------------------------------------------------------------+--------------------------------------------+ | node4 | | http://127.0.0.1:9656/ext/bc/MzN4AbtFzQ3eKqPhFaDpwCMJmagciWSCgghkZx6YeC6jRdvb6/rpc | http://127.0.0.1:9656/ext/bc//rpc | +-------+-----------+-------------------------------------------------------------------------------------+--------------------------------------------+ | node5 | | http://127.0.0.1:9658/ext/bc/MzN4AbtFzQ3eKqPhFaDpwCMJmagciWSCgghkZx6YeC6jRdvb6/rpc | http://127.0.0.1:9658/ext/bc//rpc | +-------+-----------+-------------------------------------------------------------------------------------+--------------------------------------------+ | node5 | | http://127.0.0.1:9658/ext/bc/2tVGwEQmeXtdnFURW1YSq5Yf4jbJPfTBfVcu68KWHdHe5e5gX5/rpc | http://127.0.0.1:9658/ext/bc//rpc | +-------+-----------+-------------------------------------------------------------------------------------+--------------------------------------------+ Browser Extension connection details (any node URL from above works): RPC URL: http://127.0.0.1:9650/ext/bc/2tVGwEQmeXtdnFURW1YSq5Yf4jbJPfTBfVcu68KWHdHe5e5gX5/rpc Funded address: 0x0EF815FFFF6ad1d4e17C8ED4128b20EB5edAABBB with 1000000 (10^18) - private key: 56289e99c94b6912bfc12adc093c9b51124f0dc54ac7a766b2bc5ccf558d8027 Network name: Chain ID: 2 Currency Symbol: TOKEN2 ``` Verify Teleporter Is Successfully Set Up[​](#verify-teleporter-is-successfully-set-up "Direct link to heading") --------------------------------------------------------------------------------------------------------------- To verify that Teleporter is successfully set up, let's send a couple of cross-chain messages (from C-Chain to
chain1): ```bash avalanche teleporter sendMsg C-Chain chain1 "Hello World" --local ``` Results: ```bash Delivering message "this is a message" to source subnet "C-Chain" Waiting for message to be received at destination subnet subnet "chain1" Message successfully Teleported! ``` To verify that Teleporter is successfully deployed, let's send a couple of cross-chain messages (from chain2 to chain1): ```bash avalanche teleporter sendMsg chain2 chain1 "Hello World" --local ``` Results: ```bash Delivering message "this is a message" to source subnet "chain2" Waiting for message to be received at destination subnet subnet "chain1" Message successfully Teleported! ``` You have Teleport-ed your first message in the Local Network! Relayer related logs can be found at `~/.avalanche-cli/runs/awm-relayer.log`, and Relayer configuration can be found at `~/.avalanche-cli/runs/awm-relayer-config.json` Obtaining Information on Teleporter Deploys[​](#obtaining-information-on-teleporter-deploys "Direct link to heading") --------------------------------------------------------------------------------------------------------------------- ### Obtaining Avalanche L1 Information[​](#obtaining-avalanche-l1-information "Direct link to heading") By executing `blockchain describe` on a Teleporter enabled Avalanche L1, the following relevant information can be found: - Blockchain RPC URL - Blockchain ID in cb58 format - Blockchain ID in plain hex format - Teleporter Messenger address - Teleporter Registry address Let's get the information for ``: ```bash avalanche blockchain describe _____ _ _ _ | __ \ | | (_) | | | | | ___| |_ __ _ _| |___ | | | |/ _ \ __/ _ | | / __| | |__| | __/ || (_| | | \__ \ |_____/ \___|\__\__,_|_|_|___/ +--------------------------------+-------------------------------------------------------------------------------------+ | PARAMETER | VALUE | +--------------------------------+-------------------------------------------------------------------------------------+ | 
Blockchain Name                | chain1                                                                              |
+--------------------------------+-------------------------------------------------------------------------------------+
| ChainID                        | 1                                                                                   |
+--------------------------------+-------------------------------------------------------------------------------------+
| Token Name                     | TOKEN1 Token                                                                        |
+--------------------------------+-------------------------------------------------------------------------------------+
| Token Symbol                   | TOKEN1                                                                              |
+--------------------------------+-------------------------------------------------------------------------------------+
| VM Version                     | v0.6.3                                                                              |
+--------------------------------+-------------------------------------------------------------------------------------+
| VM ID                          | srEXiWaHjFEgKSgK2zBgnWQUVEy2MZA7UUqjqmBSS7MZYSCQ5                                   |
+--------------------------------+-------------------------------------------------------------------------------------+
| Local Network SubnetID         | 2CZP2ndbQnZxTzGuZjPrJAm5b4s2K2Bcjh8NqWoymi8NZMLYQk                                  |
+--------------------------------+-------------------------------------------------------------------------------------+
| Local Network RPC URL          | http://127.0.0.1:9650/ext/bc/2cFWSgGkmRrmKtbPkB8yTpnq9ykK3Dc2qmxphwYtiGXCvnSwg8/rpc |
+--------------------------------+-------------------------------------------------------------------------------------+
| Local Network BlockchainID     | 2cFWSgGkmRrmKtbPkB8yTpnq9ykK3Dc2qmxphwYtiGXCvnSwg8                                  |
+                                +-------------------------------------------------------------------------------------+
|                                | 0xd3bc5f71e6946d17c488d320cd1f6f5337d9dce75b3fac5023433c4634b6e91e                  |
+--------------------------------+-------------------------------------------------------------------------------------+
| Local Network ICM              | 0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf                                          |
| Messenger Address              |                                                                                     |
+--------------------------------+-------------------------------------------------------------------------------------+
| Local Network ICM              | 0xbD9e8eC38E43d34CAB4194881B9BF39d639D7Bd3                                          |
|
Registry Address               |                                                                                     |
+--------------------------------+-------------------------------------------------------------------------------------+
...
```

### Obtaining C-Chain Information[​](#obtaining-c-chain-information "Direct link to heading")

Similar information can be found for the C-Chain by using `primary describe`:

```bash
avalanche primary describe --local
+------------------------------+--------------------------------------------------------------------+
|          PARAMETER           |                               VALUE                                |
+------------------------------+--------------------------------------------------------------------+
| RPC URL                      | http://127.0.0.1:9650/ext/bc/C/rpc                                 |
+------------------------------+--------------------------------------------------------------------+
| EVM Chain ID                 | 43112                                                              |
+------------------------------+--------------------------------------------------------------------+
| TOKEN SYMBOL                 | AVAX                                                               |
+------------------------------+--------------------------------------------------------------------+
| Address                      | 0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC                         |
+------------------------------+--------------------------------------------------------------------+
| Balance                      | 49999489.829989485                                                 |
+------------------------------+--------------------------------------------------------------------+
| Private Key                  | 56289e99c94b6912bfc12adc093c9b51124f0dc54ac7a766b2bc5ccf558d8027   |
+------------------------------+--------------------------------------------------------------------+
| BlockchainID                 | 2JeJDKL9Bvn1vLuuPL1DpUccBCVUh7iRnkv3a5pV9kJW5HbuQz                 |
+                              +--------------------------------------------------------------------+
|                              |
0xabc1bd35cb7313c8a2b62980172e6d7ef42aaa532c870499a148858b0b6a34fd |
+------------------------------+--------------------------------------------------------------------+
| ICM Messenger Address        | 0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf                         |
+------------------------------+--------------------------------------------------------------------+
| ICM Registry Address         | 0x17aB05351fC94a1a67Bf3f56DdbB941aE6c63E25                         |
+------------------------------+--------------------------------------------------------------------+
```

Controlling Relayer Execution[​](#controlling-relayer-execution "Direct link to heading")
-----------------------------------------------------------------------------------------

Besides having the option to not use a Relayer at Avalanche L1 creation time, the Relayer can be stopped and restarted on user request.

To stop the Relayer:

```bash
avalanche interchain relayer stop --local
✓ Local AWM Relayer successfully stopped
```

To start it again:

```bash
avalanche interchain relayer start --local
using latest awm-relayer version (v1.1.0)
Executing AWM-Relayer...
✓ Local AWM Relayer successfully started
Logs can be found at ~/.avalanche-cli/runs/awm-relayer.log
```

# Teleporter Token Bridge (/docs/tooling/avalanche-cli/cross-chain/teleporter-token-bridge)

---
title: Teleporter Token Bridge
description: Deploy an example Teleporter Token Bridge on the local network.
---

Teleporter Token Bridge enables users to transfer tokens between Avalanche L1s. The bridge is a set of smart contracts deployed across multiple Avalanche L1s, and it leverages Teleporter for cross-chain communication.

For more information on Teleporter Token Bridge, see:

- [Teleporter Token Bridge README](https://github.com/ava-labs/teleporter-token-bridge)

## How to Deploy Teleporter Token Bridge on a Local Network

This how-to guide focuses on deploying Teleporter Token Bridge on a local Avalanche network.
By the end of this tutorial, you will have learned how to transfer an ERC-20 token between two Teleporter-enabled Avalanche L1s, and between the C-Chain and a Teleporter-enabled Avalanche L1.

## Prerequisites

For our example, you will first need to create and deploy a Teleporter-enabled Avalanche L1 on a Local Network. We will name our Avalanche L1 `testblockchain`.

- To create a Teleporter-enabled Avalanche L1 configuration, [visit here](/docs/tooling/avalanche-cli/cross-chain/teleporter-local-network#create-subnet-configurations)
- To deploy a Teleporter-enabled Avalanche L1, [visit here](/docs/tooling/avalanche-cli/cross-chain/teleporter-local-network#deploy-the-subnets-to-local-network)

## Deploy ERC-20 Token in C-Chain

First, let's create an ERC-20 token and deploy it to the C-Chain. For our example, it will be called TOK. A sample script to deploy the ERC-20 token can be found [here](https://github.com/ava-labs/avalanche-cli/blob/main/cmd/contractcmd/deploy_erc20.go).

To deploy the ERC-20 token to the C-Chain, we will call:

```bash
avalanche contract deploy erc20
```

When the command completes, our EWOQ address `0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC` will have received 100000 TOK tokens on the C-Chain. Note that `0x5DB9A7629912EBF95876228C24A848de0bfB43A9` is our ERC-20 token address, which we will use in our next command.

## Deploy Teleporter Token Bridge

Next, we will deploy Teleporter Token Bridge to our Local Network, deploying the Home contract to the C-Chain and the Remote contract to our Avalanche L1.
```bash
avalanche teleporter bridge deploy
✔ Local Network
✔ C-Chain
✔ Deploy a new Home for the token
✔ An ERC-20 token
Enter the address of the ERC-20 Token: 0x5DB9A7629912EBF95876228C24A848de0bfB43A9
✔ Subnet testblockchain
Downloading Bridge Contracts
Compiling Bridge
Home Deployed to http://127.0.0.1:9650/ext/bc/C/rpc
Home Address: 0x4Ac1d98D9cEF99EC6546dEd4Bd550b0b287aaD6D
Remote Deployed to http://127.0.0.1:9650/ext/bc/2TnSWd7odhkDWKYFDZHqU7CvtY8G6m46gWxUnhJRNYu4bznrrc/rpc
Remote Address: 0x7DD1190e6F6CE8eE13C08F007FdAEE2f881B45D0
```

Before we transfer our ERC-20 token from the C-Chain to our Avalanche L1, we will first call the `avalanche key list` command to check our initial balances on the C-Chain and the Avalanche L1. We will check the balances of our ERC-20 token TOK on both chains; it has the address `0x5DB9A7629912EBF95876228C24A848de0bfB43A9` on the C-Chain and the address `0x7DD1190e6F6CE8eE13C08F007FdAEE2f881B45D0` on our Avalanche L1 `testblockchain`.

```bash
avalanche key list --local --keys ewoq,blockchain_airdrop --subnets c,testblockchain --tokens 0x5DB9A7629912EBF95876228C24A848de0bfB43A9,0x7DD1190e6F6CE8eE13C08F007FdAEE2f881B45D0
+--------+--------------------+----------------+--------------------------------------------+---------------+-----------------+---------------+
|  KIND  |        NAME        |     SUBNET     |                  ADDRESS                   |     TOKEN     |     BALANCE     |    NETWORK    |
+--------+--------------------+----------------+--------------------------------------------+---------------+-----------------+---------------+
| stored | ewoq               | testblockchain | 0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC | TOK (0x7DD1.) | 0               | Local Network |
+        +                    +----------------+--------------------------------------------+---------------+-----------------+---------------+
|        |                    | C-Chain        | 0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC | TOK (0x5DB9.) | 100000.000000000| Local Network |
+        +--------------------+----------------+--------------------------------------------+---------------+-----------------+---------------+
|        | blockchain         | testblockchain | 0x5a4601D594Aa3848cA5EE0770b7883d3DBC666f6 | TOK (0x7DD1.) | 0               | Local Network |
+        + _airdrop           +----------------+--------------------------------------------+---------------+-----------------+---------------+
|        |                    | C-Chain        | 0x5a4601D594Aa3848cA5EE0770b7883d3DBC666f6 | TOK (0x5DB9.) | 0               | Local Network |
+--------+--------------------+----------------+--------------------------------------------+---------------+-----------------+---------------+
```

## Transfer the Token from C-Chain to Our Avalanche L1

Now we will transfer 100 TOK tokens from our `ewoq` address on the C-Chain to the `blockchain_airdrop` address on our Avalanche L1 `testblockchain`. Note that we will be using the Home contract address `0x4Ac1d98D9cEF99EC6546dEd4Bd550b0b287aaD6D` and the Remote contract address `0x7DD1190e6F6CE8eE13C08F007FdAEE2f881B45D0`.

```bash
avalanche key transfer
✔ Local Network
✔ C-Chain
✔ Subnet testblockchain
Enter the address of the Bridge on c-chain: 0x4Ac1d98D9cEF99EC6546dEd4Bd550b0b287aaD6D
Enter the address of the Bridge on testblockchain: 0x7DD1190e6F6CE8eE13C08F007FdAEE2f881B45D0
✔ ewoq
✔ Key
✔ blockchain_airdrop
Amount to send (TOKEN units): 100
```

## Verify That Transfer Is Successful

We will call the `avalanche key list` command again to verify that the transfer succeeded. `blockchain_airdrop` should now have 100 TOK tokens on our Avalanche L1 `testblockchain`, and our EWOQ account should now have 99900 TOK tokens on the C-Chain.
```bash
avalanche key list --local --keys ewoq,blockchain_airdrop --subnets c,testblockchain --tokens 0x5DB9A7629912EBF95876228C24A848de0bfB43A9,0x7DD1190e6F6CE8eE13C08F007FdAEE2f881B45D0
+--------+--------------------+----------------+--------------------------------------------+---------------+-----------------+---------------+
|  KIND  |        NAME        |     SUBNET     |                  ADDRESS                   |     TOKEN     |     BALANCE     |    NETWORK    |
+--------+--------------------+----------------+--------------------------------------------+---------------+-----------------+---------------+
| stored | ewoq               | testblockchain | 0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC | TOK (0x7DD1.) | 0               | Local Network |
+        +                    +----------------+--------------------------------------------+---------------+-----------------+---------------+
|        |                    | C-Chain        | 0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC | TOK (0x5DB9.) | 99900.000000000 | Local Network |
+        +--------------------+----------------+--------------------------------------------+---------------+-----------------+---------------+
|        | blockchain         | testblockchain | 0x5a4601D594Aa3848cA5EE0770b7883d3DBC666f6 | TOK (0x7DD1.) | 100.000000000   | Local Network |
+        + _airdrop           +----------------+--------------------------------------------+---------------+-----------------+---------------+
|        |                    | C-Chain        | 0x5a4601D594Aa3848cA5EE0770b7883d3DBC666f6 | TOK (0x5DB9.) | 0               | Local Network |
+--------+--------------------+----------------+--------------------------------------------+---------------+-----------------+---------------+
```

And that's it! You have now successfully completed your first transfer from the C-Chain to an Avalanche L1 using Teleporter Token Bridge!

# Import an Avalanche L1 (/docs/tooling/avalanche-cli/guides/import-avalanche-l1)

---
title: Import an Avalanche L1
description: Learn how to import an Avalanche L1 into Avalanche-CLI.
---

Context[​](#context "Direct link to heading")
---------------------------------------------

Historically, Avalanche L1s were often created manually by issuing transactions to node APIs, using either a local node or public API nodes. To bring such an Avalanche L1 under Avalanche-CLI, this guide demonstrates how to import it into Avalanche-CLI so that its configuration can be managed there. This how-to uses the BEAM Avalanche L1 deployed on Fuji Testnet as the example Avalanche L1.

Requirements[​](#requirements "Direct link to heading")
-------------------------------------------------------

For the import to work properly, you need:

- The Avalanche L1's genesis file, stored on disk
- The Avalanche L1's SubnetID

Import the Avalanche L1[​](#import-the-avalanche-l1 "Direct link to heading")
-----------------------------------------------------------------

For these use cases, Avalanche-CLI supports the `import public` command. Start the import by issuing:

```
avalanche blockchain import public
```

The tool prompts for the network from which to import. The assumption here is that the network is a public one, either the Fuji Testnet or Mainnet; importing from a local network isn't supported.

```
Use the arrow keys to navigate: ↓ ↑ → ←
? Choose a network to import from:
  ▸ Fuji
    Mainnet
```

As stated earlier, this example imports from Fuji, so select it. Next, Avalanche-CLI asks for the path of the genesis file on disk:

```
✗ Provide the path to the genesis file: /tmp/subnet_evm.genesis.json
```

The wizard checks that the file at the provided path exists; note the checkmark at the beginning of the line:

```
✔ Provide the path to the genesis file: /tmp/subnetevm_genesis.json
```

Subsequently, the wizard asks if nodes have already been deployed for this Avalanche L1.

```
Use the arrow keys to navigate: ↓ ↑ → ←
?
Have nodes already been deployed to this subnet?:
    Yes
  ▸ No
```

### Nodes are Already Validating This Avalanche L1[​](#nodes-are-already-validating-this-avalanche-l1 "Direct link to heading")

If nodes have already been deployed, the wizard attempts to query such a node for detailed data like the VM version. This allows the tool to skip querying GitHub (or wherever the VM's repository is hosted) for the VM's version; instead, it gets the exact version that is actually running on the node. For this to work, the user is asked for a node API URL, which is used for the query. The node's API IP and port must be accessible from the machine running Avalanche-CLI; otherwise the query times out, fails, and the tool exits. The node should also be validating the given Avalanche L1 for the import to be meaningful; otherwise, the import fails with missing information.

If the query succeeds, the wizard jumps to the prompt for the Avalanche L1 ID (SubnetID).

```
Please provide an API URL of such a node so we can query its VM version (e.g. http://111.22.33.44:5555): http://154.42.240.119:9650
What is the ID of the subnet?: ie1wUBR2bQDPkGCRf2CBVzmP55eSiyJsFYqeGXnTYt2r33aKW
```

The rest of the wizard is identical to the next section, except that there is no longer a prompt for the VM version.

### Nodes Aren't Yet Validating This Avalanche L1, the Node's API URL Is Unknown, or Inaccessible (Firewalls)[​](#nodes-arent-yet-validating-this-avalanche-l1-the-nodes-api-url-are-unknown-or-inaccessible-firewalls "Direct link to heading")

If you don't have a node's API URL at hand, or it isn't reachable from the machine running Avalanche-CLI, or no nodes have been deployed yet (because, for example, only the `CreateSubnet` transaction has been issued), you can query the public APIs. You can't know for sure what Avalanche L1 VM versions the validators are running, though, so the tool has to prompt for the version later.
So, select `No` when the tool asks for deployed nodes.

At this point the wizard requests the Avalanche L1's ID, without which it can't know what to import. Remember that the ID is different on different networks. From the [Testnet Avalanche L1 Explorer](https://subnets-test.avax.network/beam) you can see that BEAM's Avalanche L1 ID (SubnetID) is `ie1wUBR2bQDPkGCRf2CBVzmP55eSiyJsFYqeGXnTYt2r33aKW`:

```
✔ What is the ID of the subnet?: ie1wUBR2bQDPkGCRf2CBVzmP55eSiyJsFYqeGXnTYt2r33aKW
```

Notice the checkmark at the start of the line; it signals that the ID's format has been validated. If you hit `enter` now, the tool queries the public APIs for the given network and, if successful, prints some information about the Avalanche L1, then proceeds to ask about the Avalanche L1's type:

```
Getting information from the Fuji network...
Retrieved information. BlockchainID: y97omoP2cSyEVfdSztQHXD9EnfnVP9YKjZwAxhUfGbLAPYT9t, Name: BEAM, VMID: kLPs8zGsTVZ28DhP1VefPCFbCgS7o5bDNez8JUxPVw9E6Ubbz
Use the arrow keys to navigate: ↓ ↑ → ←
? What's this VM's type?:
  ▸ Subnet-EVM
    Custom
```

Avalanche-CLI needs to know the VM type so it can query the VM's repository and determine which versions are available. This works automatically for Ava Labs VMs (like Subnet-EVM). Custom VMs aren't supported yet at this point, but are next on the agenda. As the import is for BEAM, and you know that it's a Subnet-EVM type, select that. The tool then queries the (GitHub) repository for available releases, and prompts the user to pick the version they want to use:

```
✔ Subnet-EVM
Use the arrow keys to navigate: ↓ ↑ → ←
? Pick the version for this VM:
  ▸ v0.4.5
    v0.4.5-rc.1
    v0.4.4
    v0.4.4-rc.0
↓   v0.4.3
```

There is only so much the tool can help with here; the Avalanche L1 manager/administrator should know what they want to use Avalanche-CLI for, how, and why they're importing the Avalanche L1. It's crucial to understand that the correct versions are only known to the user.
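As an aside, the ordering shown in that prompt (newest first, with `-rc` pre-releases interleaved) can be reproduced with standard shell tools. The sketch below is purely illustrative, not what Avalanche-CLI runs: it shows one way to derive the newest stable tag from such a release list, assuming the `-rc` naming convention for pre-releases and GNU `sort -V` for version ordering. The `versions` variable simply transcribes the list from the prompt above.

```shell
# Illustrative only: pick the newest stable tag from a release list,
# skipping "-rc" pre-releases (assumed naming convention).
versions="v0.4.5
v0.4.5-rc.1
v0.4.4
v0.4.4-rc.0
v0.4.3"

# Drop pre-releases, version-sort ascending, take the last entry.
latest="$(printf '%s\n' "$versions" | grep -v -- '-rc' | sort -V | tail -n 1)"
echo "$latest"   # prints v0.4.5
```

Even with a rule like this, "latest" isn't always the right choice for a given deployment, which is why the wizard leaves the decision to the user.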
The latest version is usually fine, but the tool can't safely assume that, which is why the wizard prompts the user and requires them to choose. If you chose to query an actual Avalanche L1 validator rather than the public APIs in the preceding step, the tool skips this version selection.

```
✔ v0.4.5
Subnet BEAM imported successfully
```

The choice finalizes the wizard, which signals that the import succeeded. If something went wrong, the error messages provide information about the cause.

This means you can now use Avalanche-CLI to handle the imported Avalanche L1 in the accustomed way. For example, you could deploy the BEAM Avalanche L1 locally. For a complete description of options, flags, and the command, visit the [command reference](/docs/tooling/cli-commands#avalanche-l1-import).

# Run with Docker (/docs/tooling/avalanche-cli/guides/run-with-docker)

---
title: Run with Docker
description: Instructions for running Avalanche-CLI in a Docker container.
---

To run Avalanche-CLI in a Docker container, you need to enable IPv6. Edit `/etc/docker/daemon.json`, add this snippet, then restart the Docker service.

```json
{
  "ipv6": true,
  "fixed-cidr-v6": "fd00::/80"
}
```

# Add Validator (/docs/tooling/avalanche-cli/maintain/add-validator-l1)

---
title: Add Validator
description: Learn how to add a validator to an Avalanche L1.
---

### Add a Validator to an Avalanche L1

```bash
avalanche blockchain addValidator
```

#### Choose Network

Choose the network where the operation will be performed.

```bash
? Choose a network for the operation:
  ▸ Local Network
    Devnet
    Fuji Testnet
    Mainnet
```

#### Choose P-Chain Fee Payer

Choose the key that will be used to pay for the transaction fees on the P-Chain.

```bash
? Which key should be used to pay for transaction fees on P-Chain?:
  ▸ Use stored key
    Use ledger
```

#### Enter Node ID

Enter the NodeID of the node you want to add as a blockchain validator.
```bash
✗ What is the NodeID of the node you want to add as a blockchain validator?:
```

You can find the NodeID in the node's configuration file or in the console when the node is first started with avalanchego. An example of a NodeID is `NodeID-7Xhw2mDxuDS44j42TCB6U5579esbSt3Lg`.

#### Enter the BLS Public Key

Enter the node's BLS public key.

```bash
Next, we need the public key and proof of possession of the node's BLS
Check https://build.avax.network/docs/rpcs/other/info-rpc#infogetnodeid for instructions on calling info.getNodeID API
✗ What is the node's BLS public key?:
```

You can find the BLS public key in the node's configuration file or in the console when the node is first started with avalanchego.

#### Enter BLS Proof of Possession

Enter the node's BLS proof of possession.

```bash
✗ What is the node's BLS proof of possession?:
```

You can find the BLS proof of possession in the node's configuration file or in the console when the node is first started with avalanchego.

#### Enter AVAX Balance

This balance will be used to pay the P-Chain continuous staking fee.

```bash
✗ What balance would you like to assign to the validator (in AVAX)?:
```

#### Enter Leftover AVAX Address

This address will receive any leftover AVAX when the node is removed from the validator set.

```bash
? Which key would you like to set as change owner for leftover AVAX if the node is removed from validator set?:
  ▸ Get address from an existing stored key (created from avalanche key create or avalanche key import)
    Custom
```

#### Enter Disable Validator Address

This address will be able to disable the validator using P-Chain transactions.

```bash
Which address do you want to be able to disable the validator using P-Chain transactions?:
  ▸ Get address from an existing stored key (created from avalanche key create or avalanche key import)
    Custom
```

### Proof of Stake Specific Parameters

If your network was created with a Proof of Stake validator manager, you will be asked for the following additional parameters.
You can also pass these parameters as flags to the command.

```bash
--delegation-fee uint16    delegation fee (in bips)
--stake-amount uint        amount of native tokens to stake
--staking-period duration  how long this validator will be staking
```

# Delete an Avalanche L1 (/docs/tooling/avalanche-cli/maintain/delete-avalanche-l1)

---
title: Delete an Avalanche L1
description: Learn how to delete an Avalanche L1.
---

Deleting an Avalanche L1 Configuration[​](#deleting-a-avalanche-l1-configuration "Direct link to heading")
---------------------------------------------------------------------------------------------

To delete a created Avalanche L1 configuration, run:

```bash
avalanche blockchain delete
```

Deleting a Deployed Avalanche L1[​](#deleting-a-deployed-avalanche-l1 "Direct link to heading")
-----------------------------------------------------------------------------------

You can't delete Avalanche L1s deployed to Mainnet or the Fuji Testnet. However, you may delete Avalanche L1s deployed to a local network by cleaning the network state with the command below:

```bash
avalanche network clean
```

# Remove Validator (/docs/tooling/avalanche-cli/maintain/remove-validator-l1)

---
title: Remove Validator
description: Learn how to remove a validator from an Avalanche L1.
---

### Remove a Validator from an Avalanche L1

```bash
avalanche blockchain removeValidator
```

#### Choose the Network

Choose the network where the validator is registered.

```bash
? Choose a network for the operation:
  ▸ Local Network
    Devnet
    Fuji Testnet
    Mainnet
```

#### Choose P-Chain Fee Payer

Choose the key to pay for the transaction fees on the P-Chain.

```bash
? Which key should be used to pay for transaction fees on P-Chain?:
  ▸ Use stored key
    Use ledger
```

#### Enter NodeID of the Validator to Remove

Enter the NodeID of the validator you want to remove.
```bash
✗ What is the NodeID of the node you want to remove as a blockchain validator?:
```

You can find the NodeID in the node's configuration file or in the console when the node is first started with avalanchego. An example of a NodeID is `NodeID-7Xhw2mDxuDS44j42TCB6U5579esbSt3Lg`.

#### Confirm the Removal

```bash
Validator manager owner 0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC pays for the initialization of the validator's removal (Blockchain gas token)
RPC Endpoint: http://127.0.0.1:9652/ext/bc/2qmU6w47Mp7D7fGhbRuZm6Z1Nn6FZXZAxKpaeTMFiRQW9CBErh/rpc
Forcing removal of NodeID-7cQrriPWGXa5yuJGZUgsxxwH9j4T8pPkY as it is a PoS bootstrap validator
Using validationID: 228zzCgDmAmuaJDGnFkFgnVqbbJPRF7qF1Xpd3dhtqGDhMjJK2 for nodeID: NodeID-7cQrriPWGXa5yuJGZUgsxxwH9j4T8pPkY
ValidationID: 228zzCgDmAmuaJDGnFkFgnVqbbJPRF7qF1Xpd3dhtqGDhMjJK2
SetSubnetValidatorWeightTX fee: 0.000078836 AVAX
SetSubnetValidatorWeightTx ID: 2FUimPZ37DscPJiQDLrtKtumER3LNr48MJi6VR2jGXkYEKpCaq
✓ Validator successfully removed from the Subnet
```

### Proof of Authority Networks

If the network is a Proof of Authority network, the validator must be removed from the network by the Validator Manager owner.

### Proof of Stake Networks

If the network is a Proof of Stake network, the validator must be removed by the initial staker. Rewards are distributed to the validator after the removal. It is important to note that the initial PoS validator set is treated as bootstrap validators, which are not eligible for rewards.

# Troubleshooting (/docs/tooling/avalanche-cli/maintain/troubleshooting)

---
title: Troubleshooting
description: If you run into trouble deploying your Avalanche L1, use this document for tips to resolve common issues.
---

Deployment Times Out[​](#deployment-times-out "Direct link to heading")
-----------------------------------------------------------------------

During a local deployment, your network may fail to start.
Your error may look something like this:

```bash
[~]$ avalanche blockchain deploy myblockchain
✔ Local Network
Deploying [myblockchain] to Local Network
Backend controller started, pid: 26388, output at: /Users/user/.avalanche-cli/runs/server_20221231_111605/avalanche-cli-backend
VMs ready.
Starting network...
..................................................................................
..................................................................................
......Error: failed to query network health: rpc error: code = DeadlineExceeded desc = context deadline exceeded
```

Avalanche-CLI only supports running one local Avalanche network at a time. If other instances of AvalancheGo are running concurrently, your Avalanche-CLI network fails to start.

To test for this error, start by shutting down any Avalanche nodes started by Avalanche-CLI.

```bash
avalanche network clean --hard
```

Next, look for any lingering AvalancheGo processes with:

```bash
ps aux | grep avalanchego
```

If any such processes are running, you need to stop them before you can launch your VM with Avalanche-CLI.

If you're running a validator node on the same box where you're using Avalanche-CLI, **don't** end any of these lingering AvalancheGo processes. This may shut down your validator and could affect your validation uptime.

Incompatible RPC Version for Custom VM[​](#incompatible-rpc-version-for-custom-vm "Direct link to heading")
-----------------------------------------------------------------------------------------------------------

If you're locally deploying a custom VM, you may run into this error message.

```bash
[~]$ avalanche blockchain deploy myblockchain
✔ Local Network
Deploying [myblockchain] to Local Network
Backend controller started, pid: 26388, output at: /Users/user/.avalanche-cli/runs/server_20221231_111605/avalanche-cli-backend
VMs ready.
Starting network...
.........
Blockchain has been deployed. Wait until network acknowledges...
..................................................................................
..................................................................................
......Error: failed to query network health: rpc error: code = DeadlineExceeded desc = context deadline exceeded
```

This error has many possible causes, but a common one is **an RPC protocol version mismatch.**

AvalancheGo communicates with custom VMs over RPC using [gRPC](https://grpc.io/). gRPC defines a protocol specification shared by both AvalancheGo and the VM. **Both components must be running the same RPC version for VM deployment to work.**

Your custom VM's RPC version is set by the version of AvalancheGo that you import. By default, Avalanche-CLI creates local Avalanche networks that run the latest AvalancheGo release.

### Example[​](#example "Direct link to heading")

Here's an example with real numbers from the AvalancheGo compatibility page:

- If the latest AvalancheGo release is version v1.10.11, then Avalanche-CLI deploys a network with RPC version 28.
- For your deploy to be successful, your VM must also have RPC version 28. Because only AvalancheGo versions v1.10.9, v1.10.10, and v1.10.11 support RPC version 28, your VM **must** import one of those versions.

### Solution[​](#solution "Direct link to heading")

Error: `RPCChainVM protocol version mismatch between AvalancheGo and Virtual Machine plugin`

This error occurs when the RPCChainVM protocol version used by a VM like Subnet-EVM is incompatible with the protocol version of AvalancheGo.

If your VM has an RPC version mismatch, you have two options:

1. Update the version of AvalancheGo you use in your VM. This is the correct long-term approach.
2. Use Avalanche-CLI to deploy an older version of AvalancheGo by using the `--avalanchego-version` flag.
Both the [`blockchain deploy`](/docs/tooling/cli-commands#deploy) and [`network start`](/docs/tooling/cli-commands#start) commands support setting the AvalancheGo version explicitly. Although it's very important to keep your version of AvalancheGo up-to-date, this workaround helps you avoid broken builds in the short term. You must upgrade to the latest AvalancheGo version when deploying publicly to the Fuji Testnet or Avalanche Mainnet.

### More Information[​](#more-information "Direct link to heading")

Similar version matching is required across different tools in the ecosystem. Here is a compatibility table showing which RPCChainVM version is implemented by the more recent releases of AvalancheGo, Subnet-EVM, Precompile-EVM, and HyperSDK.

|RPCChainVM|AvalancheGo       |Subnet-EVM     |Precompile-EVM |HyperSDK        |
|----------|------------------|---------------|---------------|----------------|
|26        |v1.10.1-v1.10.4   |v0.5.1-v0.5.2  |v0.1.0-v0.1.1  |v0.0.6-v0.0.9   |
|27        |v1.10.5-v1.10.8   |v0.5.3         |v0.1.2         |v0.0.10-v0.0.12 |
|28        |v1.10.9-v1.10.12  |v0.5.4-v0.5.6  |v0.1.3-v0.1.4  |v0.0.13-v0.0.15 |
|29        |v1.10.13-v1.10.14 |v0.5.7-v0.5.8  |v0.1.5         |-               |
|30        |v1.10.15-v1.10.17 |v0.5.9-v0.5.10 |v0.1.6-v0.1.7  |-               |
|31        |v1.10.18-v1.10.19 |v0.5.11        |v0.1.8         |v0.0.16 (latest)|
|33        |v1.11.0           |v0.6.0-v0.6.1  |v0.2.0         |-               |
|34        |v1.11.1-v1.11.2   |v0.6.2         |-              |-               |
|35        |v1.11.3 (latest)  |v0.6.3 (latest)|v0.2.1 (latest)|-               |

You can view the full RPC compatibility, broken down by release version for each tool, here:

- [AvalancheGo](https://github.com/ava-labs/avalanchego/blob/master/version/compatibility.json)
- [Subnet-EVM](https://github.com/ava-labs/subnet-evm/blob/master/compatibility.json)
- [Precompile-EVM](https://github.com/ava-labs/precompile-evm/blob/main/compatibility.json)

Updates to AvalancheGo's RPC version are **not** tied to its semantic version scheme. Minor AvalancheGo version bumps may include a breaking RPC version bump.
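To make the matching rule concrete, here is a small sketch (a hypothetical helper, not part of any Avalanche tooling) that maps an AvalancheGo release to its RPCChainVM protocol version, transcribed from the table above; a VM plugin can only be deployed against a node whose protocol number matches its own.

```shell
# Sketch only: RPCChainVM protocol version for a given AvalancheGo
# release, transcribed from the compatibility table above.
rpc_version_for() {
  case "$1" in
    v1.10.[1-4])          echo 26 ;;
    v1.10.[5-8])          echo 27 ;;
    v1.10.9|v1.10.1[0-2]) echo 28 ;;
    v1.10.1[3-4])         echo 29 ;;
    v1.10.1[5-7])         echo 30 ;;
    v1.10.1[8-9])         echo 31 ;;
    v1.11.0)              echo 33 ;;
    v1.11.[1-2])          echo 34 ;;
    v1.11.3)              echo 35 ;;
    *)                    echo unknown ;;
  esac
}

rpc_version_for v1.10.11   # prints 28, as in the worked example above
```

In practice you would consult the `compatibility.json` files linked above rather than hard-coding the table, since it changes with every release.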
Fix for MacBook Air M1/M2: ‘Bad CPU type in executable’ Error[​](#fix-for-macbook-air-m1m2-bad-cpu-type-in-executable-error "Direct link to heading")
-----------------------------------------------------------------------------------------------------------------------------------------------------

When running `avalanche blockchain deploy` via the Avalanche-CLI, the terminal may throw an error that contains the following:

```bash
zsh: bad CPU type in executable: /Users/user.name/Downloads/build/avalanchego
```

This is because some Macs lack support for x86 binaries. Running the following command should fix this issue:

```bash
/usr/sbin/softwareupdate --install-rosetta
```

# View Avalanche L1s (/docs/tooling/avalanche-cli/maintain/view-avalanche-l1s)

---
title: View Avalanche L1s
description: CLI commands for viewing Avalanche L1s.
---

## List Avalanche L1 Configurations

You can list the Avalanche L1s you've created with:

`avalanche blockchain list`

Example:

```bash
> avalanche blockchain list
+--------------+--------------+----------+---------------------------------------------------+------------+------------+-----------+
|    SUBNET    |    CHAIN     | CHAINID  |                       VMID                        |    TYPE    | VM VERSION | FROM REPO |
+--------------+--------------+----------+---------------------------------------------------+------------+------------+-----------+
| myblockchain | myblockchain | 111      | qDNV9vtxZYYNqm7TN1mYBuaaknLdefDbFK8bFmMLTJQJKaWjV | Subnet-EVM | v0.7.0     | false     |
+--------------+--------------+----------+---------------------------------------------------+------------+------------+-----------+
```

To see detailed information about your deployed Avalanche L1s, add the `--deployed` flag:

```bash
> avalanche blockchain list --deployed
+--------------+--------------+---------------------------------------------------+---------------+----------------+---------+
|    SUBNET    |    CHAIN     |                       VM ID                       | LOCAL NETWORK | FUJI (TESTNET) | MAINNET |
+--------------+--------------+---------------------------------------------------+---------------+----------------+---------+
| myblockchain | myblockchain | qDNV9vtxZYYNqm7TN1mYBuaaknLdefDbFK8bFmMLTJQJKaWjV | Yes           | No             | No      |
+--------------+--------------+---------------------------------------------------+---------------+----------------+---------+
```

## Describe Avalanche L1 Configurations

To see the details of a specific configuration, run:

`avalanche blockchain describe [blockchainName]`

Example:

```bash
> avalanche blockchain describe myblockchain
+---------------------------------------------------------------------------------------------------------------------------------+
| MYBLOCKCHAIN |
+---------------+-----------------------------------------------------------------------------------------------------------------+
| Name | myblockchain |
+---------------+-----------------------------------------------------------------------------------------------------------------+
| VM ID | qDNV9vtxZYYNqm7TN1mYBuaaknLdefDbFK8bFmMLTJQJKaWjV |
+---------------+-----------------------------------------------------------------------------------------------------------------+
| VM Version | v0.7.0 |
+---------------+-----------------------------------------------------------------------------------------------------------------+
| Validation | Proof Of Authority |
+---------------+--------------------------+--------------------------------------------------------------------------------------+
| Local Network | ChainID | 12345 |
| +--------------------------+--------------------------------------------------------------------------------------+
| | SubnetID | fvx83jt2BWyibBRL4SRMa6WzjWp7GSFUeUUeoeBe1AqJ5Ey5w |
| +--------------------------+--------------------------------------------------------------------------------------+
| | BlockchainID (CB58) | 2QGB9GbEhsFJLSRVii2mKs8dxugHzmK98G5391P2bvXSCb4sED |
| +--------------------------+--------------------------------------------------------------------------------------+
| | BlockchainID (HEX) | 0xb883b54815c84a3f0903dbccd289ed5563395dd61c189db626e2d2680546b990 |
| +--------------------------+--------------------------------------------------------------------------------------+
| | RPC Endpoint | http://127.0.0.1:60538/ext/bc/2QGB9GbEhsFJLSRVii2mKs8dxugHzmK98G5391P2bvXSCb4sED/rpc |
+---------------+--------------------------+--------------------------------------------------------------------------------------+
+------------------------------------------------------------------------------------+
| ICM |
+---------------+-----------------------+--------------------------------------------+
| Local Network | ICM Messenger Address | 0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf |
| +-----------------------+--------------------------------------------+
| | ICM Registry Address | 0x695Ea5FbeBBdc99cA679F5fD7768f179d2281d74 |
+---------------+-----------------------+--------------------------------------------+
+-------------------------------+
| TOKEN |
+--------------+----------------+
| Token Name | TUTORIAL Token |
+--------------+----------------+
| Token Symbol | TUTORIAL |
+--------------+----------------+
+----------------------------------------------------------------------------------------------------------------------------------------+
| INITIAL TOKEN ALLOCATION |
+-------------------------+------------------------------------------------------------------+---------------+---------------------------+
| DESCRIPTION | ADDRESS AND PRIVATE KEY | AMOUNT (TUTORIAL) | AMOUNT (WEI) |
+-------------------------+------------------------------------------------------------------+---------------+---------------------------+
| Main funded account | 0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC | 1000000 | 1000000000000000000000000 |
| ewoq | 56289e99c94b6912bfc12adc093c9b51124f0dc54ac7a766b2bc5ccf558d8027 | | |
+-------------------------+------------------------------------------------------------------+---------------+---------------------------+
| Used by ICM | 0x001CBe3650FAD190d9ccBd57b289124F5131AA57 | 600 | 600000000000000000000 |
| cli-teleporter-deployer | d00b93e1526d05a30b681911a3e0f5e5528add205880c1cafa4f84cdb2746b00 | | |
+-------------------------+------------------------------------------------------------------+---------------+---------------------------+
+-----------------------------------------------------------------------------------------------------------------+
| SMART CONTRACTS |
+-----------------------+--------------------------------------------+--------------------------------------------+
| DESCRIPTION | ADDRESS | DEPLOYER |
+-----------------------+--------------------------------------------+--------------------------------------------+
| Proxy Admin | 0xC0fFEE1234567890aBCdeF1234567890abcDef34 | 0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC |
+-----------------------+--------------------------------------------+--------------------------------------------+
| PoA Validator Manager | 0x0C0DEbA5E0000000000000000000000000000000 | |
+-----------------------+--------------------------------------------+--------------------------------------------+
| Transparent Proxy | 0x0Feedc0de0000000000000000000000000000000 | |
+-----------------------+--------------------------------------------+--------------------------------------------+
+----------------------------------------------------------------------+
| INITIAL PRECOMPILE CONFIGS |
+------------+-----------------+-------------------+-------------------+
| PRECOMPILE | ADMIN ADDRESSES | MANAGER ADDRESSES | ENABLED ADDRESSES |
+------------+-----------------+-------------------+-------------------+
| Warp | n/a | n/a | n/a |
+------------+-----------------+-------------------+-------------------+
+--------------------------------------------------------------------------+
| NODES |
+-------+------------------------------------------+-----------------------+
| NAME | NODE ID | LOCALHOST ENDPOINT |
+-------+------------------------------------------+-----------------------+
| node1 | NodeID-7Xhw2mDxuDS44j42TCB6U5579esbSt3Lg | http://127.0.0.1:9650 |
+-------+------------------------------------------+-----------------------+
| node2 | NodeID-MFrZFVCXPv5iCn6M9K6XduxGTYp891xXZ | http://127.0.0.1:9652 |
+-------+------------------------------------------+-----------------------+
+--------------------------------------------------------------------------------------------------------+
| WALLET CONNECTION |
+-----------------+--------------------------------------------------------------------------------------+
| Network RPC URL | http://127.0.0.1:60538/ext/bc/2QGB9GbEhsFJLSRVii2mKs8dxugHzmK98G5391P2bvXSCb4sED/rpc |
+-----------------+--------------------------------------------------------------------------------------+
| Network Name | myblockchain |
+-----------------+--------------------------------------------------------------------------------------+
| Chain ID | 12345 |
+-----------------+--------------------------------------------------------------------------------------+
| Token Symbol | TUTORIAL |
+-----------------+--------------------------------------------------------------------------------------+
| Token Name | TUTORIAL Token |
+-----------------+--------------------------------------------------------------------------------------+
```

## Viewing a Genesis File

If you'd like to see the raw genesis file, supply the `--genesis` flag to the describe command:

`avalanche blockchain describe [blockchainName] --genesis`

Example:

```bash
> avalanche blockchain describe myblockchain --genesis
{
  "config": {
    "berlinBlock": 0,
    "byzantiumBlock": 0,
    "chainId": 111,
    "constantinopleBlock": 0,
    "eip150Block": 0,
    "eip155Block": 0,
    "eip158Block": 0,
    "feeConfig": {
      "gasLimit": 12000000,
      "targetBlockRate": 2,
      "minBaseFee": 25000000000,
      "targetGas": 60000000,
      "baseFeeChangeDenominator": 36,
      "minBlockGasCost": 0,
      "maxBlockGasCost": 1000000,
      "blockGasCostStep": 200000
    },
    "homesteadBlock": 0,
    "istanbulBlock": 0,
    "londonBlock": 0,
    "muirGlacierBlock": 0,
    "petersburgBlock": 0,
    "warpConfig": {
      "blockTimestamp": 1734549536,
      "quorumNumerator": 67,
      "requirePrimaryNetworkSigners": true
    }
  },
  "nonce": "0x0",
  "timestamp": "0x67632020",
  "extraData": "0x",
  "gasLimit": "0xb71b00",
  "difficulty": "0x0",
  "mixHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
  "coinbase": "0x0000000000000000000000000000000000000000",
  "alloc": {
    "001cbe3650fad190d9ccbd57b289124f5131aa57": {
      "balance": "0x2086ac351052600000"
    },
    "0c0deba5e0000000000000000000000000000000": {
      "code": "",
      "balance": "0x0",
      "nonce": "0x1"
    },
    "0feedc0de0000000000000000000000000000000": {
      "code": "",
      "storage": {
        "0x360894a13ba1a3210667c828492db98dca3e2076cc3735a920a3ca505d382bbc": "0x0000000000000000000000000c0deba5e0000000000000000000000000000000", //sslot for proxy implementation
        "0xb53127684a568b3173ae13b9f8a6016e243e63b6e8ee1178d6a717850b5d6103": "0x000000000000000000000000c0ffee1234567890abcdef1234567890abcdef34" //sslot for proxy admin
      },
      "balance": "0x0",
      "nonce": "0x1"
    },
    "8db97c7cece249c2b98bdc0226cc4c2a57bf52fc": {
      "balance": "0xd3c21bcecceda1000000"
    },
    "c0ffee1234567890abcdef1234567890abcdef34": {
      "code": "",
      "storage": {
        "0x0000000000000000000000000000000000000000000000000000000000000000": "0x0000000000000000000000008db97c7cece249c2b98bdc0226cc4c2a57bf52fc" //sslot for owner
      },
      "balance": "0x0",
      "nonce": "0x1"
    }
  },
  "airdropHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
  "airdropAmount": null,
  "number": "0x0",
  "gasUsed": "0x0",
  "parentHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
  "baseFeePerGas": null,
  "excessBlobGas": null,
  "blobGasUsed": null
}
```

# Ledger P-Chain Transfer (/docs/tooling/avalanche-cli/transactions/ledger-p-chain-transfer)

---
title: Ledger P-Chain Transfer
description: Transferring funds between P-Chain wallets using the Avalanche CLI.
---

Transferring funds between P-Chain wallets becomes necessary in certain situations:

1. Funds need to be sent to the Avalanche L1 control key, which might have a zero balance due to fee payments. The Avalanche L1 control key requires funding to ensure proper support for Avalanche L1 operations.
2. Funds need to be moved from one Ledger address index to another. A Ledger manages an effectively unlimited sequence of addresses, all derived from a master private key, and can sign for any of those addresses. Each address is referred to by its index (or by the address itself). Avalanche-CLI usually expects to use index 0, but sometimes the funds sit at a different index. Occasionally, a transfer to a Ledger arrives at an address different from the default one used by the CLI.

To enable direct transfers between P-Chain addresses, use the command `avalanche key transfer`. This operation involves a series of import/export actions with the P-Chain and X-Chain. The fee for this operation is four times the typical import operation fee, which comes out to 0.004 AVAX. You can find more information about fees [here](/docs/rpcs/other/guides/txn-fees).

The `key transfer` command can also be applied to the stored keys managed by the CLI. It enables moving funds from one stored key to another, and from a Ledger to a stored key or vice versa. This how-to guide focuses on transferring funds between ledger accounts.
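The fee arithmetic described above can be sketched with plain shell tools. The 0.001 AVAX per-import figure is illustrative, derived from the "four times the typical import fee = 0.004 AVAX" statement; consult the linked fee documentation for current values:

```bash
# Four import/export operations at the typical 0.001 AVAX import fee each:
awk 'BEGIN { printf "%.3f AVAX\n", 4 * 0.001 }'

# Sweeping an address holding 4.5 AVAX therefore leaves this much to send:
awk 'BEGIN { printf "%.3f AVAX\n", 4.5 - 4 * 0.001 }'
```

This is why the walkthrough below sends exactly 4.496 AVAX out of a 4.5 AVAX balance.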
## Prerequisites

- [`Avalanche-CLI`](/docs/tooling/avalanche-cli) installed
- Multiple Ledger devices [configured for Avalanche](/docs/tooling/avalanche-cli/create-deploy-avalanche-l1s/deploy-on-mainnet#setting-up-your-ledger)

Example: Sending All Funds From One Ledger to Another[​](#example-sending-all-funds-from-one-ledger-to-another "Direct link to heading")
----------------------------------------------------------------------------------------------------------------------------------------

- Source address: ledger A, index 2 (the web wallet shows 4.5 AVAX for this ledger)
- Target address: ledger B, index 0 (the web wallet shows 0 AVAX for this ledger)

### Determine Sender Address Index[​](#determine-sender-address-index "Direct link to heading")

A Ledger can manage a practically unlimited number of addresses derived from a main private key. Because of this, many operations require the user to specify an address index.

After confirming with a web wallet that 4.5 AVAX is available on the P-Chain address `P-avax10an3cucdfqru984pnvv6y0rspvvclz63e523m0`, connect ledger A. With the Avalanche app running, execute:

```bash
avalanche key list --mainnet --ledger 0,1,2,3,4,5
```

This lists the P-Chain addresses and balances for the first six address indices derived by the Ledger.
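Each index corresponds to a position in the Ledger's hierarchical-deterministic key derivation. As an illustrative sketch only: assuming the conventional BIP44 layout with SLIP-44 coin type 9000 (registered for AVAX) — the exact path a particular wallet app uses may differ — the indices map to derivation paths like so:

```bash
# Print the derivation paths conventionally associated with indices 0-5
# (assumed BIP44 layout; verify against your wallet's documentation).
for i in 0 1 2 3 4 5; do
  printf "index %d -> m/44'/9000'/0'/0/%d\n" "$i" "$i"
done
```

The CLI only ever asks for the index; the path bookkeeping happens inside the Ledger app.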
```bash
+--------+---------+-------------------------+-----------------------------------------------+---------+---------+
|  KIND  |  NAME   |          CHAIN          |                    ADDRESS                    | BALANCE | NETWORK |
+--------+---------+-------------------------+-----------------------------------------------+---------+---------+
| ledger | index 0 | P-Chain (Bech32 format) | P-avax1g8yucm7j0cnwwru4rp5lkzw6dpdxjmc2rfkqs9 | 0       | Mainnet |
+        +---------+                         +-----------------------------------------------+---------+---------+
|        | index 1 |                         | P-avax1drppshkst2ccygyq37m2z9e3ex2jhkd2txcm5r | 0       | Mainnet |
+        +---------+                         +-----------------------------------------------+---------+---------+
|        | index 2 |                         | P-avax10an3cucdfqru984pnvv6y0rspvvclz63e523m0 | 4.5     | Mainnet |
+        +---------+                         +-----------------------------------------------+---------+---------+
|        | index 3 |                         | P-avax1yfpm7v5y5rej2nu7t2r0ffgrlpfq36je0rc5k6 | 0       | Mainnet |
+        +---------+                         +-----------------------------------------------+---------+---------+
|        | index 4 |                         | P-avax17nqvwcqsa8ddgeww8gzmfe932pz2syaj2vyd89 | 0       | Mainnet |
+        +---------+                         +-----------------------------------------------+---------+---------+
|        | index 5 |                         | P-avax1jzvnd05vsfksrtatm2e3rzu6eux9a287493yf8 | 0       | Mainnet |
+--------+---------+-------------------------+-----------------------------------------------+---------+---------+
```

The address `P-avax10an3cucdfqru984pnvv6y0rspvvclz63e523m0` has 4.5 AVAX and is associated with index 2 of ledger A.

### Determine Receiver Address Index[​](#determine-receiver-address-index "Direct link to heading")

In this case the user wants to use index 0, the one the CLI expects to contain funds by default. The transfer command also needs the target P-Chain address.
Do the following to obtain it: with ledger B connected and the Avalanche app running, execute:

```bash
avalanche key list --mainnet --ledger 0
```

```bash
+--------+---------+-------------------------+-----------------------------------------------+---------+---------+
|  KIND  |  NAME   |          CHAIN          |                    ADDRESS                    | BALANCE | NETWORK |
+--------+---------+-------------------------+-----------------------------------------------+---------+---------+
| ledger | index 0 | P-Chain (Bech32 format) | P-avax1r4aceznjkz8ch4pmpqrmkq4f3sl952mdrdt6xm | 0       | Mainnet |
+--------+---------+-------------------------+-----------------------------------------------+---------+---------+
```

The target address to be used is `P-avax1r4aceznjkz8ch4pmpqrmkq4f3sl952mdrdt6xm`, which currently holds no funds.

### Send the Transfer[​](#send-the-transfer "Direct link to heading")

A P-Chain to P-Chain transfer is a two-part operation. The two parts don't need to be executed on the same machine; they only need to share some common parameters. For each part, the appropriate ledger (either source or target) must be connected to the machine executing it.

The first step moves the funds out of the source account into an X-Chain account owned by the receiver. It needs to be signed by the sending ledger. Enter the amount of AVAX to send to the recipient. This amount does not include fees; note that the sending ledger pays all the fees.

Then start the command:

```bash
avalanche key transfer
```

The first step is to specify the network, `Mainnet` in this case:

```bash
Use the arrow keys to navigate: ↓ ↑ → ←
? Network to use:
  ▸ Mainnet
    Fuji
    Local Network
```

Next, the step of the transfer must be specified. Send in this case:

```bash
? Step of the transfer:
  ▸ Send
    Receive
```

Next, the key source for the sender address, that is, the key that is going to sign the sending transactions. Select `Use ledger`:

```bash
?
Which key source should be used to for the sender address?:
    Use stored key
  ▸ Use ledger
```

Next, the ledger index is asked for. Input `2`:

```bash
✗ Ledger index to use: 2
```

Next, the amount to be sent is asked for:

```bash
✗ Amount to send (AVAX units): 4.496
```

Then, the target address is required:

```bash
✗ Receiver address: P-avax1r4aceznjkz8ch4pmpqrmkq4f3sl952mdrdt6xm
```

After that, a confirmation message is printed. Read it carefully and choose `Yes`:

```bash
this operation is going to:
- send 4.496000000 AVAX from P-avax10an3cucdfqru984pnvv6y0rspvvclz63e523m0 to target address P-avax1r4aceznjkz8ch4pmpqrmkq4f3sl952mdrdt6xm
- take a fee of 0.004000000 AVAX from source address P-avax10an3cucdfqru984pnvv6y0rspvvclz63e523m0

Use the arrow keys to navigate: ↓ ↑ → ←
? Confirm transfer:
    No
  ▸ Yes
```

After this, the first part is completed.

### Receive the Transfer[​](#receive-the-transfer "Direct link to heading")

In this step, ledger B signs the transaction to receive the funds. It imports the funds on the X-Chain before exporting them back to the desired P-Chain address.

Connect ledger B and open the Avalanche app. Then start the command:

```bash
avalanche key transfer
```

Specify the `Mainnet` network:

```bash
Use the arrow keys to navigate: ↓ ↑ → ←
? Network to use:
  ▸ Mainnet
    Fuji
    Local Network
```

Next, the step of the transfer must be specified. Receive in this case:

```bash
? Step of the transfer:
    Send
  ▸ Receive
```

Then, select Ledger as the key source that is going to sign the receiver operations:

```bash
? Which key source should be used to for the receiver address?:
    Use stored key
  ▸ Use ledger
```

Next, the ledger index is asked for. Input `0`:

```bash
✗ Ledger index to use: 0
```

Next, the amount to receive is asked for (note the prompt still reads "Amount to send"):

```bash
✗ Amount to send (AVAX units): 4.496
```

After that, a confirmation message is printed.
Select `Yes`:

```bash
this operation is going to:
- receive 4.496000000 AVAX at target address P-avax1r4aceznjkz8ch4pmpqrmkq4f3sl952mdrdt6xm

Use the arrow keys to navigate: ↓ ↑ → ←
? Confirm transfer:
    No
  ▸ Yes
```

Finally, the second part of the operation is executed and the transfer is completed:

```bash
Issuing ImportTx P -> X
Issuing ExportTx X -> P
Issuing ImportTx X -> P
```

### Verifying Results of the Transfer Operation using `key list`[​](#verifying-results-of-the-transfer-operation-using-key-list "Direct link to heading")

First verify ledger A accounts. Connect ledger A and open the Avalanche app:

```bash
avalanche key list --mainnet --ledger 0,1,2,3,4,5
```

With result:

```bash
+--------+---------+-------------------------+-----------------------------------------------+---------+---------+
|  KIND  |  NAME   |          CHAIN          |                    ADDRESS                    | BALANCE | NETWORK |
+--------+---------+-------------------------+-----------------------------------------------+---------+---------+
| ledger | index 0 | P-Chain (Bech32 format) | P-avax1g8yucm7j0cnwwru4rp5lkzw6dpdxjmc2rfkqs9 | 0       | Mainnet |
+        +---------+                         +-----------------------------------------------+---------+---------+
|        | index 1 |                         | P-avax1drppshkst2ccygyq37m2z9e3ex2jhkd2txcm5r | 0       | Mainnet |
+        +---------+                         +-----------------------------------------------+---------+---------+
|        | index 2 |                         | P-avax10an3cucdfqru984pnvv6y0rspvvclz63e523m0 | 0       | Mainnet |
+        +---------+                         +-----------------------------------------------+---------+---------+
|        | index 3 |                         | P-avax1yfpm7v5y5rej2nu7t2r0ffgrlpfq36je0rc5k6 | 0       | Mainnet |
+        +---------+                         +-----------------------------------------------+---------+---------+
|        | index 4 |                         | P-avax17nqvwcqsa8ddgeww8gzmfe932pz2syaj2vyd89 | 0       | Mainnet |
+        +---------+                         +-----------------------------------------------+---------+---------+
|        | index 5 |                         | P-avax1jzvnd05vsfksrtatm2e3rzu6eux9a287493yf8 | 0       | Mainnet |
+--------+---------+-------------------------+-----------------------------------------------+---------+---------+
```

Next, verify ledger B accounts. Connect ledger B and open the Avalanche app:

```bash
avalanche key list --mainnet --ledger 0,1,2,3,4,5
```

With result:

```bash
+--------+---------+-------------------------+-----------------------------------------------+---------+---------+
|  KIND  |  NAME   |          CHAIN          |                    ADDRESS                    | BALANCE | NETWORK |
+--------+---------+-------------------------+-----------------------------------------------+---------+---------+
| ledger | index 0 | P-Chain (Bech32 format) | P-avax1r4aceznjkz8ch4pmpqrmkq4f3sl952mdrdt6xm | 4.496   | Mainnet |
+        +---------+                         +-----------------------------------------------+---------+---------+
|        | index 1 |                         | P-avax18e9qsm30du590lhkwydhmkfwhcc9999gvxcaez | 0       | Mainnet |
+        +---------+                         +-----------------------------------------------+---------+---------+
|        | index 2 |                         | P-avax1unkkjstggvdty5gtnfhc0mgnl7qxa52z2d4c9y | 0       | Mainnet |
+        +---------+                         +-----------------------------------------------+---------+---------+
|        | index 3 |                         | P-avax1ek7n0zky3py7prxcrgnmh44y3wm6lc7r7x5r8e | 0       | Mainnet |
+        +---------+                         +-----------------------------------------------+---------+---------+
|        | index 4 |                         | P-avax1rsz6nt6qht5ep37qjk7ht0u9h30mgfhehsmqea | 0       | Mainnet |
+        +---------+                         +-----------------------------------------------+---------+---------+
|        | index 5 |                         | P-avax17u5wm4tfex7xr27xlwejm28pyk84tj0jzp42zz | 0       | Mainnet |
+--------+---------+-------------------------+-----------------------------------------------+---------+---------+
```

### Recovery Steps[​](#recovery-steps "Direct link to heading")

As a multi-step operation, the receiving part of the transfer can hit intermediate errors, due, for example, to temporary network connection failures on the client side.
The CLI is going to capture such errors and provide the user with a recovery message of the kind:

```bash
ERROR: restart from this step by using the same command with extra arguments: --receive-recovery-step 1
```

If this happens, the receiving operation should be started the same way, choosing the same options, but adding the extra suggested parameter:

```bash
avalanche key transfer --receive-recovery-step 1
```

Then, the CLI is going to resume where it left off.

# Send AVAX on C/P-Chain (/docs/tooling/avalanche-cli/transactions/native-send)

---
title: Send AVAX on C/P-Chain
description: Learn how to execute a native transfer on the C or P-Chain using the Avalanche CLI.
---

## Prerequisites

- Install the [Avalanche CLI](/docs/tooling/avalanche-cli).
- Use the CLI to [create a key](/docs/tooling/cli-commands#key-create).
- Fund the key with AVAX. You can use the [faucet](https://test.core.app/tools/testnet-faucet/?subnet=c&token=c) with coupon code `devrel-avax-0112` to get testnet AVAX.
- *Optionally*, you can [export](/docs/tooling/cli-commands#key-export) your private key for use in scripting or other tools.

## Initiate the `transfer` Command and Walk Through the Prompts

In your terminal, run the following command:

```zsh
avalanche key transfer
```

This command and all of its flags are documented [here](/docs/tooling/cli-commands#key-transfer). You will be prompted to answer the following questions:

```zsh
? On what Network do you want to execute the transfer?:
  ▸ Mainnet
    Fuji Testnet
    Devnet
    Local Network
```

If you select "Devnet", you must input the RPC URL. If your devnet's C-Chain RPC is `https://demo.avax-dev.network/ext/bc/C/rpc`, you should input the URL as:

```zsh
✔ Devnet Endpoint: https://demo.avax-dev.network
```

Select the chain you want to transfer funds from:

```zsh
? Where are the funds to transfer?:
  ▸ P-Chain
    C-Chain
    My blockchain isn't listed
```

Select the chain you want to transfer funds to:

```zsh
?
Destination Chain:
  ▸ P-Chain
    X-Chain
```

Select the step of the transfer process you want to execute:

```zsh
? Step of the transfer:
  ▸ Send
    Receive
```

If you are performing a native transfer where the sender and receiver addresses are on the same chain, you only need to complete a "send" transaction. If you wish to perform a cross-chain transfer (for example, from the C-Chain to the P-Chain), you should abort this flow and reinitiate the command as `avalanche key transfer --fund-p-chain` or `avalanche key transfer --fund-x-chain`, completing both the "send" and "receive" flows with keys stored in the CLI. You can fund your CLI-stored key with AVAX on the C-Chain using the [faucet](https://test.core.app/tools/testnet-faucet/?subnet=c&token=c) with coupon code `devrel-avax-0112`.

Select the sender address:

```zsh
? Which key should be used as the sender?:
  ▸ Use stored key
    Use ledger
? Which stored key should be used as the sender address?:
  ▸ DemoKey
    MyKey
    ewoq
```

Specify the amount to send and input the destination address:

```zsh
✗ Amount to send (AVAX units): 100
✗ Destination address: P-avax1zgjx8zj7z7zj7z7zj7z7zj7z7zj7zj7zj7zj7e
```

Review the transaction details and confirm or abort. Note that P-Chain amounts are denominated in nAVAX (1 AVAX = 10^9 nAVAX), which is why they display with nine decimal places:

```zsh
this operation is going to:
- send 100.000000000 AVAX from P-avax1gmuqt8xg9j4h88kj3hyprt23nf50azlfg8txn2 to destination address P-avax1f630gvct4ht35ragcheapnn2n5cv2tkmq73ec0
- take a fee of 0.001000000 AVAX from source address P-avax1gmuqt8xg9j4h88kj3hyprt23nf50azlfg8txn2

? Confirm transfer:
    No
  ▸ Yes
```

After a successful transfer, you can check your CLI keys' balances with the [command](/docs/tooling/cli-commands#key-list): `avalanche key list`.

# Precompile Configs (/docs/tooling/avalanche-cli/upgrade/avalanche-l1-precompile-config)

---
title: Precompile Configs
description: Learn how to upgrade your Subnet-EVM precompile configurations.
---

You can customize Subnet-EVM-based Avalanche L1s after deployment by enabling and disabling precompiles.
To do this, create an `upgrade.json` file and place it in the appropriate directory. This document describes how to perform such network upgrades; it's specific to Subnet-EVM.

The document [Upgrade an Avalanche L1](/docs/avalanche-l1s/upgrade/considerations) describes all the background information required regarding Avalanche L1 upgrades. It's very important that you have read and understood that document; failing to do so can potentially grind your network to a halt.

This tutorial assumes that you have already [installed](/docs/tooling/avalanche-cli) Avalanche-CLI, and that you have already created and deployed an Avalanche L1 called `testblockchain`.

Generate the Upgrade File[​](#generate-the-upgrade-file "Direct link to heading")
---------------------------------------------------------------------------------

The [Precompiles](/docs/avalanche-l1s/evm-configuration/customize-avalanche-l1#network-upgrades-enabledisable-precompiles) documentation describes which files the network upgrade requires and where to place them. To generate a valid `upgrade.json` file, run:

```bash
avalanche blockchain upgrade generate testblockchain
```

If you haven't created `testblockchain` yet, you would see this result:

```bash
avalanche blockchain upgrade generate testblockchain
The provided Avalanche L1 name "testblockchain" does not exist
```

It makes no sense to try the upgrade command if the Avalanche L1 doesn't exist. If that's the case, please go ahead and [create](/docs/tooling/avalanche-cli) the Avalanche L1 first.

If the Avalanche L1 definition exists, the tool launches a wizard. It may feel a bit redundant, but you first see some warnings, to draw focus to the dangers involved:

```bash
avalanche blockchain upgrade generate testblockchain
Performing a network upgrade requires coordinating the upgrade network-wide.
A network upgrade changes the rule set used to process and verify blocks, such that any node that upgrades incorrectly or fails to upgrade by the time that upgrade goes into effect may become out of sync with the rest of the network.

Any mistakes in configuring network upgrades or coordinating them on validators may cause the network to halt and recovering may be difficult.

Please consult https://build.avax.network/docs/subnets/customize-a-subnet#network-upgrades-enabledisable-precompiles for more information

Use the arrow keys to navigate: ↓ ↑ → ←
? Press [Enter] to continue, or abort by choosing 'no':
  ▸ Yes
    No
```

Go ahead and select `Yes` if you understand everything and agree. You see a last note before the actual configuration wizard starts:

```bash
Avalanchego and this tool support configuring multiple precompiles. However, we suggest to only configure one per upgrade.

Use the arrow keys to navigate: ↓ ↑ → ←
? Select the precompile to configure:
  ▸ Contract Deployment Allow List
    Manage Fee Settings
    Native Minting
    Transaction Allow List
```

Refer to [Precompiles](/docs/avalanche-l1s/evm-configuration/customize-avalanche-l1#precompiles) for a description of the available precompiles and how to configure them. Make sure you understand precompiles thoroughly and how to configure them before attempting to continue.

For every precompile in the list, the wizard guides you to provide correct information by prompting relevant questions. For the sake of this tutorial, select `Transaction Allow List`. The document [Restricting Who Can Submit Transactions](/docs/avalanche-l1s/evm-configuration/customize-avalanche-l1#restricting-who-can-submit-transactions) describes what this precompile is about.

```bash
✔ Transaction Allow List
Set parameters for the "Transaction Allow List" precompile
Use the arrow keys to navigate: ↓ ↑ → ←
?
When should the precompile be activated?:
  ▸ In 5 minutes
    In 1 day
    In 1 week
    In 2 weeks
    Custom
```

This is actually common to all precompiles: they require an activation timestamp. It makes sense if you think about it: you want a synchronized activation of your precompile. So think for a moment about when the activation should happen. You can select one of the suggested times in the future, or you can pick a custom one. After picking `Custom`, it shows the following prompt:

```bash
✔ Custom
✗ Enter the block activation UTC datetime in 'YYYY-MM-DD HH:MM:SS' format:
```

The format is `YYYY-MM-DD HH:MM:SS`, therefore `2023-03-31 14:00:00` would be a valid timestamp. Notice that the timestamp is in UTC; please make sure you have converted the time from your timezone to UTC. Also notice the `✗` at the beginning of the line. The CLI tool does input validation, so if you provide a valid timestamp, the `✗` disappears:

```bash
✔ Enter the block activation UTC datetime in 'YYYY-MM-DD HH:MM:SS' format: 2023-03-31 14:00:00
```

The timestamp must be in the **future**, so make sure you use such a timestamp should you be running this tutorial after `2023-03-31 14:00:00`.

After you provide a valid timestamp, proceed with the precompile-specific configurations:

```bash
The chosen block activation time is 2023-03-31 14:00:00
Use the arrow keys to navigate: ↓ ↑ → ←
? Add 'adminAddresses'?:
  ▸ Yes
    No
```

This will enable the addresses added in this section to add other admins and/or add enabled addresses for transaction issuance. The addresses provided in this tutorial are fake. However, make sure you or someone you trust has full control over the addresses you use; otherwise, you might bring your Avalanche L1 to a halt.

```bash
✔ Yes
Use the arrow keys to navigate: ↓ ↑ → ←
? Provide 'adminAddresses':
  ▸ Add
    Delete
    Preview
    More Info
↓   Done
```

The prompting runs with a pattern used throughout the tool:

1.
Select an operation:
   - `Add`: adds a new address to the current list
   - `Delete`: removes an address from the current list
   - `Preview`: prints the current list
2. `More Info` prints additional information for better guidance, if available
3. Select `Done` when you have completed the list

Go ahead and add your first address:

```bash
✔ Add
✔ Add an address: 0xaaaabbbbccccddddeeeeffff1111222233334444
```

Add another one:

```bash
✔ Add
Add an address: 0xaaaabbbbccccddddeeeeffff1111222233334444
✔ Add
✔ Add an address: 0x1111222233334444aaaabbbbccccddddeeeeffff
```

Select `Preview` this time to confirm the list is correct:

```bash
✔ Preview
0. 0xaaaAbbBBCccCDDddEeEEFFfF1111222233334444
1. 0x1111222233334444aAaAbbBBCCCCDdDDeEeEffff
Use the arrow keys to navigate: ↓ ↑ → ←
? Provide 'adminAddresses':
  ▸ Add
    Delete
    Preview
    More Info
↓   Done
```

If it looks good, select `Done` to continue:

```bash
✔ Done
Use the arrow keys to navigate: ↓ ↑ → ←
? Add 'enabledAddresses'?:
  ▸ Yes
    No
```

Add one such enabled address; these are the addresses that can issue transactions:

```bash
✔ Add
✔ Add an address: 0x55554444333322221111eeeeaaaabbbbccccdddd
```

After you add this address and select `Done`, the tool asks if you want to configure another precompile:

```bash
✔ Done
Use the arrow keys to navigate: ↓ ↑ → ←
? Should we configure another precompile?:
  ▸ No
    Yes
```

If you needed to add another one, you would select `Yes` here. The wizard would guide you through the other available precompiles, excluding already configured ones. To keep this tutorial short, the assumption is that you're done here. Select `No`, which ends the wizard.

This means you have successfully completed the generation of the upgrade file, often called the upgrade bytes. The tool stores them internally. You shouldn't move the files around manually; use the `export` and `import` commands to get access to them.
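For orientation, the upgrade bytes generated from the choices above would look roughly like this. This is a sketch of the Subnet-EVM upgrade file shape, not the tool's literal output; the activation timestamp `1680271200` corresponds to `2023-03-31 14:00:00` UTC, and field names should be verified against the Precompiles documentation linked earlier:

```json
{
  "precompileUpgrades": [
    {
      "txAllowListConfig": {
        "blockTimestamp": 1680271200,
        "adminAddresses": [
          "0xaaaabbbbccccddddeeeeffff1111222233334444",
          "0x1111222233334444aaaabbbbccccddddeeeeffff"
        ],
        "enabledAddresses": [
          "0x55554444333322221111eeeeaaaabbbbccccdddd"
        ]
      }
    }
  ]
}
```

Every validator must activate the same configuration at the same timestamp, which is why the file is distributed via `export`/`import` rather than edited by hand on each node.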
So at this point you can either: - Deploy your upgrade bytes locally - Export your upgrade bytes to a file, for installation on a validator running on another machine - Import a file into a different machine running Avalanche-CLI How To Upgrade a Local Network[​](#how-to-upgrade-a-local-network "Direct link to heading") ------------------------------------------------------------------------------------------- The normal use case for this operation is that: - You already created an Avalanche L1 - You already deployed the Avalanche L1 locally - You already generated the upgrade file with the preceding command or imported it into the tool - The network was started by this tool If the preceding requirements aren't met, the network upgrade command fails. Therefore, to apply your generated or imported upgrade configuration: ```bash avalanche blockchain upgrade apply testblockchain ``` A number of checks run. For example, if you created the Avalanche L1 but didn't deploy it locally: ```bash avalanche blockchain upgrade apply testblockchain Error: no deployment target available Usage: avalanche blockchain upgrade apply [blockchainName] [flags] Flags: --avalanchego-chain-config-dir string avalanchego's chain config file directory (default "/home/fabio/.avalanchego/chains") --config create upgrade config for future Avalanche L1 deployments (same as generate) --fuji fuji apply upgrade existing fuji deployment (alias for `testnet`) -h, --help help for apply --local local apply upgrade existing local deployment --mainnet mainnet apply upgrade existing mainnet deployment --print if true, print the manual config without prompting (for public networks only) --testnet testnet apply upgrade existing testnet deployment (alias for `fuji`) Global Flags: --log-level string log level for the application (default "ERROR") ``` If that's your situation, go ahead and [deploy](/docs/tooling/avalanche-cli/create-deploy-avalanche-l1s/deploy-locally) your Avalanche L1 first.
If you already had deployed the Avalanche L1 instead, you see something like this: ```bash avalanche blockchain upgrade apply testblockchain Use the arrow keys to navigate: ↓ ↑ → ← ? What deployment would you like to upgrade: ▸ Existing local deployment ``` Select `Existing local deployment`. This installs the upgrade file on all nodes of your local network running in the background. Et voilà. This is the output shown if all went well: ```bash ✔ Existing local deployment ....... Network restarted and ready to use. Upgrade bytes have been applied to running nodes at these endpoints. The next upgrade will go into effect 2023-03-31 09:00:00 +-------+------------+-----------------------------------------------------------------------------------+ | NODE | VM | URL | +-------+------------+-----------------------------------------------------------------------------------+ | node1 | testblockchain | http://0.0.0.0:9650/ext/bc/2YTRV2roEhgvwJz7D7vr33hUZscpaZgcYgUTjeMK9KH99NFnsH/rpc | +-------+------------+-----------------------------------------------------------------------------------+ | node2 | testblockchain | http://0.0.0.0:9652/ext/bc/2YTRV2roEhgvwJz7D7vr33hUZscpaZgcYgUTjeMK9KH99NFnsH/rpc | +-------+------------+-----------------------------------------------------------------------------------+ | node3 | testblockchain | http://0.0.0.0:9654/ext/bc/2YTRV2roEhgvwJz7D7vr33hUZscpaZgcYgUTjeMK9KH99NFnsH/rpc | +-------+------------+-----------------------------------------------------------------------------------+ | node4 | testblockchain | http://0.0.0.0:9656/ext/bc/2YTRV2roEhgvwJz7D7vr33hUZscpaZgcYgUTjeMK9KH99NFnsH/rpc | +-------+------------+-----------------------------------------------------------------------------------+ | node5 | testblockchain | http://0.0.0.0:9658/ext/bc/2YTRV2roEhgvwJz7D7vr33hUZscpaZgcYgUTjeMK9KH99NFnsH/rpc | +-------+------------+-----------------------------------------------------------------------------------+ ``` There is only so much the 
tool can do here for you. It installed the upgrade bytes _as-is_, exactly as you configured or provided them to the tool. You should verify yourself that the upgrades were actually installed correctly, for example by issuing some transactions (mind the activation timestamp). Apply the Upgrade to a Public Node (Fuji or Mainnet)[​](#apply-the-upgrade-to-a-public-node-fuji-or-mainnet "Direct link to heading") ------------------------------------------------------------------------------------------------------------------------------------- For this scenario to work, you should also have deployed the Avalanche L1 to the public network (Fuji or Mainnet) with this tool. Otherwise, the tool won't know the details of the Avalanche L1, and won't be able to guide you. Assuming the Avalanche L1 has already been deployed to Fuji, when running the `apply` command, the tool notices the deployment: ```bash avalanche blockchain upgrade apply testblockchain Use the arrow keys to navigate: ↓ ↑ → ← ? What deployment would you like to upgrade: Existing local deployment ▸ Fuji ``` If it hadn't been, you would not find the `Fuji` entry here. This scenario assumes that you are running the Fuji validator on the same machine that is running Avalanche-CLI. If this is the case, the tool tries to install the upgrade file at the expected destination. If you use default paths, it tries to install under `$HOME/.avalanchego/chains/`, creating the chain ID directory, so that the file finally ends up as `upgrade.json` inside that directory. If you are _not_ using default paths, you can configure the path by providing the flag `--avalanchego-chain-config-dir` to the tool. For example: ```bash avalanche blockchain upgrade apply testblockchain --avalanchego-chain-config-dir /path/to/your/chains ``` Make sure you correctly identify where your chain config dir is, or the node might fail to find the upgrade file.
If all is correct, the file gets installed: ```bash avalanche blockchain upgrade apply testblockchain ✔ Fuji The chain config dir avalanchego uses is set at /home/fabio/.avalanchego/chains Trying to install the upgrade files at the provided /home/fabio/.avalanchego/chains path Successfully installed upgrade file ``` If however the node is _not_ running on this same machine where you are executing Avalanche-CLI, there is no point in running this command for a Fuji node. In this case, you might rather export the file and install it at the right location. To see the instructions about how to go about this, add the `--print` flag: ```bash avalanche blockchain upgrade apply testblockchain --print ✔ Fuji To install the upgrade file on your validator: 1. Identify where your validator has the avalanchego chain config dir configured. The default is at $HOME/.avalanchego/chains (/home/user/.avalanchego/chains on this machine). If you are using a different chain config dir for your node, use that one. 2. Create a directory with the blockchainID in the configured chain-config-dir (e.g. $HOME/.avalanchego/chains/ExDKhjXqiVg7s35p8YJ56CJpcw6nJgcGCCE7DbQ4oBknZ1qXi) if doesn't already exist. 3. Create an `upgrade.json` file in the blockchain directory with the content of your upgrade file. This is the content of your upgrade file as configured in this tool: { "precompileUpgrades": [ { "txAllowListConfig": { "adminAddresses": [ "0xb3d82b1367d362de99ab59a658165aff520cbd4d" ], "enabledAddresses": null, "blockTimestamp": 1677550447 } } ] } ****************************************************************************************************************** * Upgrades are tricky. The syntactic correctness of the upgrade file is important. * * The sequence of upgrades must be strictly observed. * * Make sure you understand https://build.avax.network/docs/nodes/configure/configs-flags#subnet-chain-configs * * before applying upgrades manually. 
* ****************************************************************************************************************** ``` The instructions also print the content of your current upgrade file, so you can copy it from there if you wish, or actually export the file. Export the Upgrade File[​](#export-the-upgrade-file "Direct link to heading") ----------------------------------------------------------------------------- If you have generated the upgrade file, you can export it: ```bash avalanche blockchain upgrade export testblockchain ✔ Provide a path where we should export the file to: /tmp/testblockchain-upgrade.json ``` Just provide a valid path at the prompt, and the tool exports the file there. ```bash avalanche blockchain upgrade export testblockchain Provide a path where we should export the file to: /tmp/testblockchain-upgrade.json Writing the upgrade bytes file to "/tmp/testblockchain-upgrade.json"... File written successfully. ``` You can now take that file and copy it to validator nodes; see the preceding instructions. Import the Upgrade File[​](#import-the-upgrade-file "Direct link to heading") ----------------------------------------------------------------------------- You or someone else might have generated the file elsewhere or on another machine, and now you want to install it on the validator machine using Avalanche-CLI. You can import the file: ```bash avalanche blockchain upgrade import testblockchain Provide the path to the upgrade file to import: /tmp/testblockchain-upgrade.json ``` An existing file with the same path and filename would be overwritten. After you have imported the file, you can `apply` it either to a local network or to a locally running validator. Follow the instructions for the appropriate use case. # Virtual Machine (/docs/tooling/avalanche-cli/upgrade/avalanche-l1-virtual-machine) --- title: Virtual Machine description: This how-to guide explains how to upgrade the VM of an already-deployed Avalanche L1.
--- To upgrade a local Avalanche L1, you first need to pause the local network. To do so, run: ```bash avalanche network stop ``` Next, you need to select the new VM to run your Avalanche L1 on. If you're running a Subnet-EVM Avalanche L1, you likely want to bump to the latest released version. If you're running a custom VM, you'll want to choose another custom binary. Start the upgrade wizard with: ```bash avalanche blockchain upgrade vm ``` passing the name of the Avalanche L1 you would like to upgrade as the final argument. ## Selecting a VM Deployment to Upgrade After starting the Avalanche L1 Upgrade Wizard, you should see something like this: ```bash ? What deployment would you like to upgrade: ▸ Update config for future deployments Existing local deployment ``` If you select the first option, Avalanche-CLI updates your Avalanche L1's config, and any future calls to `avalanche blockchain deploy` use the new version you select. However, any existing local deployments continue to use the old version. If you select the second option, the opposite occurs. The existing local deployment switches to the new VM, but subsequent deploys use the original. ## Select a VM to Upgrade To The next prompt asks you to select your new virtual machine. ```bash ? How would you like to update your Avalanche L1's virtual machine: ▸ Update to latest version Update to a specific version Update to a custom binary ``` If you're using the Subnet-EVM, you'll have the option to upgrade to the latest released version. You can also select a specific version or supply a custom binary. If your Avalanche L1 already uses a custom VM, you need to select another custom binary. Once you select your VM, you should see something like: ```bash Upgrade complete. Ready to restart the network. ``` ## Restart the Network If you are running multiple Avalanche L1s concurrently, you may need to update several of them before restarting the network. All of your deployed Avalanche L1s must be using the same RPC protocol version.
You can see more details about this [here](/docs/nodes/maintain/upgrade#incompatible-rpc-version-for-custom-vm). Finally, restart the network with: ```bash avalanche network start ``` If the network starts correctly, your Avalanche L1 is now running the upgraded VM. # avm.getAssetDescription (/docs/rpcs/x-chain/chain/avm_getAssetDescription) --- title: avm.getAssetDescription full: true _openapi: method: POST route: /ext/bc/X#avm.getAssetDescription toc: [] structuredData: headings: [] contents: - content: >- Get information about an asset including name, symbol, and denomination. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Get information about an asset including name, symbol, and denomination. # avm.getBlock (/docs/rpcs/x-chain/chain/avm_getBlock) --- title: avm.getBlock full: true _openapi: method: POST route: /ext/bc/X#avm.getBlock toc: [] structuredData: headings: [] contents: - content: Returns the block with the given id. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns the block with the given id. # avm.getBlockByHeight (/docs/rpcs/x-chain/chain/avm_getBlockByHeight) --- title: avm.getBlockByHeight full: true _openapi: method: POST route: /ext/bc/X#avm.getBlockByHeight toc: [] structuredData: headings: [] contents: - content: Returns block at the given height. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns block at the given height. # avm.getHeight (/docs/rpcs/x-chain/chain/avm_getHeight) --- title: avm.getHeight full: true _openapi: method: POST route: /ext/bc/X#avm.getHeight toc: [] structuredData: headings: [] contents: - content: Returns the height of the last accepted block. --- {/* This file was generated by Fumadocs. 
Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns the height of the last accepted block. # avm.getTx (/docs/rpcs/x-chain/chain/avm_getTx) --- title: avm.getTx full: true _openapi: method: POST route: /ext/bc/X#avm.getTx toc: [] structuredData: headings: [] contents: - content: Returns the specified transaction. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Returns the specified transaction. # avm.getTxFee (/docs/rpcs/x-chain/chain/avm_getTxFee) --- title: avm.getTxFee full: true _openapi: method: POST route: /ext/bc/X#avm.getTxFee toc: [] structuredData: headings: [] contents: - content: Get the transaction fees of the network. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Get the transaction fees of the network. # avm.getUTXOs (/docs/rpcs/x-chain/chain/avm_getUTXOs) --- title: avm.getUTXOs full: true _openapi: method: POST route: /ext/bc/X#avm.getUTXOs toc: [] structuredData: headings: [] contents: - content: > Gets the UTXOs that reference a given address. If sourceChain is specified, then it will retrieve the atomic UTXOs exported from that chain to the X Chain. --- {/* This file was generated by Fumadocs. Do not edit this file directly. Any changes should be made by running the generation command again. */} Gets the UTXOs that reference a given address. If sourceChain is specified, then it will retrieve the atomic UTXOs exported from that chain to the X Chain. # avm.issueTx (/docs/rpcs/x-chain/chain/avm_issueTx) --- title: avm.issueTx full: true _openapi: method: POST route: /ext/bc/X#avm.issueTx toc: [] structuredData: headings: [] contents: - content: Send a signed transaction to the network. --- {/* This file was generated by Fumadocs. Do not edit this file directly. 
Any changes should be made by running the generation command again. */} Send a signed transaction to the network. # Authentication (/docs/tooling/avalanche-sdk/chainkit/authentication) --- title: Authentication description: Authentication for the ChainKit SDK icon: Lock --- ### Per-Client Security Schemes This SDK supports the following security scheme globally: | Name | Type | Scheme | | -------- | ------ | ------- | | `apiKey` | apiKey | API key | The ChainKit SDK can be used without an API key, but rate limits will be lower. Adding an API key allows for higher rate limits. To get an API key, create one via [Builder Console](/console/utilities/data-api-keys) and securely store it. Whether or not you use an API key, you can still interact with the SDK effectively, but the API key provides performance benefits for higher request volumes. ```javascript import { Avalanche } from "@avalanche-sdk/chainkit"; const avalancheSDK = new Avalanche({ apiKey: "", chainId: "43114", network: "mainnet", }); async function run() { const result = await avalancheSDK.metrics.healthCheck(); // Handle the result console.log(result); } run(); ``` Never hardcode your API key directly into your code. Instead, securely store it and retrieve it from an environment variable, a secrets manager, or a dedicated configuration storage mechanism. This ensures that sensitive information remains protected and is not exposed in version control or publicly accessible code. # Custom HTTP Client (/docs/tooling/avalanche-sdk/chainkit/custom-http) --- title: Custom HTTP Client description: Custom HTTP Client for the ChainKit SDK icon: Server --- The TypeScript SDK makes API calls using an HTTPClient that wraps the native [Fetch API](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API). This client is a thin wrapper around `fetch` and provides the ability to attach hooks around the request lifecycle that can be used to modify the request or handle errors and response. 
The `HTTPClient` constructor takes an optional `fetcher` argument that can be used to integrate a third-party HTTP client or, when writing tests, to mock out the HTTP client and feed in fixtures. The following example shows how to use the `beforeRequest` hook to add a custom header and a timeout to requests, and how to use the `requestError` hook to log errors: ```javascript import { Avalanche } from "@avalanche-sdk/chainkit"; import { HTTPClient } from "@avalanche-sdk/chainkit/lib/http"; const httpClient = new HTTPClient({ // fetcher takes a function that has the same signature as native `fetch`. fetcher: (request) => { return fetch(request); }, }); httpClient.addHook("beforeRequest", (request) => { const nextRequest = new Request(request, { signal: request.signal || AbortSignal.timeout(5000), }); nextRequest.headers.set("x-custom-header", "custom value"); return nextRequest; }); httpClient.addHook("requestError", (error, request) => { console.group("Request Error"); console.log("Reason:", `${error}`); console.log("Endpoint:", `${request.method} ${request.url}`); console.groupEnd(); }); const sdk = new Avalanche({ httpClient }); ``` # Error Handling (/docs/tooling/avalanche-sdk/chainkit/errors) --- title: Error Handling description: Error Handling for the ChainKit SDK icon: Bug --- All SDK methods return a response object or throw an error. If Error objects are specified in your OpenAPI Spec, the SDK will throw the appropriate Error type.
| Error Object | Status Code | Content Type | | :------------------------- | :---------- | :--------------- | | errors.BadRequest | 400 | application/json | | errors.Unauthorized | 401 | application/json | | errors.Forbidden | 403 | application/json | | errors.NotFound | 404 | application/json | | errors.TooManyRequests | 429 | application/json | | errors.InternalServerError | 500 | application/json | | errors.BadGateway | 502 | application/json | | errors.ServiceUnavailable | 503 | application/json | | errors.SDKError | 4xx-5xx | / | Validation errors can also occur when either method arguments or data returned from the server do not match the expected format. The SDKValidationError that is thrown as a result will capture the raw value that failed validation in an attribute called `rawValue`. Additionally, a `pretty()` method is available on this error that can be used to log a nicely formatted string, since validation errors can list many issues and the plain error string may be difficult to read when debugging.
```javascript import { Avalanche } from "@avalanche-sdk/chainkit"; import { BadGateway, BadRequest, Forbidden, InternalServerError, NotFound, SDKValidationError, ServiceUnavailable, TooManyRequests, Unauthorized, } from "@avalanche-sdk/chainkit/models/errors"; const avalancheSDK = new Avalanche({ apiKey: "", chainId: "43114", network: "mainnet", }); async function run() { try { await avalancheSDK.data.nfts.reindex({ address: "0xB97EF9Ef8734C71904D8002F8b6Bc66Dd9c48a6E", tokenId: "145", }); } catch (err) { switch (true) { case err instanceof SDKValidationError: { // Validation errors can be pretty-printed console.error(err.pretty()); // Raw value may also be inspected console.error(err.rawValue); return; } case err instanceof BadRequest: { // Handle err.data$: BadRequestData console.error(err); return; } case err instanceof Unauthorized: { // Handle err.data$: UnauthorizedData console.error(err); return; } case err instanceof Forbidden: { // Handle err.data$: ForbiddenData console.error(err); return; } case err instanceof NotFound: { // Handle err.data$: NotFoundData console.error(err); return; } case err instanceof TooManyRequests: { // Handle err.data$: TooManyRequestsData console.error(err); return; } case err instanceof InternalServerError: { // Handle err.data$: InternalServerErrorData console.error(err); return; } case err instanceof BadGateway: { // Handle err.data$: BadGatewayData console.error(err); return; } case err instanceof ServiceUnavailable: { // Handle err.data$: ServiceUnavailableData console.error(err); return; } default: { throw err; } } } } run(); ``` # Getting Started (/docs/tooling/avalanche-sdk/chainkit/getting-started) --- title: Getting Started description: Get started with the ChainKit SDK icon: Rocket --- ### ChainKit SDK The ChainKit SDK provides web3 application developers with multi-chain data related to Avalanche's primary network, Avalanche L1s, and Ethereum. 
With the Data API, you can easily build products that leverage real-time and historical transaction and transfer history, native and token balances, and various types of token metadata. **Migration Notice**: This SDK was previously known as the AvaCloud SDK. We have made namespace changes and will discontinue the AvaCloud SDK in favor of the ChainKit SDK. For migration guidance and specific method updates, please refer to the individual method documentation. The SDK is currently available in TypeScript, with more languages coming soon. If you are interested in a language that is not listed, please reach out to us in the [#avalanche-sdk](https://discord.com/channels/578992315641626624/1416238478915665961) channel in the [Avalanche Discord](https://discord.gg/avax). [https://www.npmjs.com/package/@avalanche-sdk/chainkit](https://www.npmjs.com/package/@avalanche-sdk/chainkit) [https://github.com/ava-labs/avalanche-sdk-typescript](https://github.com/ava-labs/avalanche-sdk-typescript) ### SDK Installation ```bash npm install @avalanche-sdk/chainkit ``` ```bash yarn add @avalanche-sdk/chainkit ``` ```bash bun add @avalanche-sdk/chainkit ``` ### SDK Example Usage ```javascript import { Avalanche } from "@avalanche-sdk/chainkit"; const avalancheSDK = new Avalanche({ apiKey: "", chainId: "43114", network: "mainnet", }); async function run() { const result = await avalancheSDK.metrics.healthCheck(); // Handle the result console.log(result); } run(); ``` Refer to the code samples provided for each route to see examples of how to use them in the SDK. Explore routes here: [Data API](/docs/api-reference/data-api/getting-started), [Metrics API](/docs/api-reference/metrics-api/getting-started) & [Webhooks API](/docs/api-reference/webhook-api). # Global Parameters (/docs/tooling/avalanche-sdk/chainkit/global-parameters) --- title: Global Parameters description: Global parameters for the ChainKit SDK icon: Globe --- Certain parameters are configured globally.
These parameters may be set on the SDK client instance itself during initialization. When configured as an option during SDK initialization, these global values will be used as defaults on the operations that use them. When such operations are called, there is a place in each to override the global value, if needed. For example, you can set `chainId` to `43114` at SDK initialization and then you do not have to pass the same value on calls to operations like `getBlock`. But if you do pass it, the call-time value locally overrides the global setting. See the example code below for a demonstration. ### Available Globals The following global parameters are available. | Name | Type | Required | Description | | :-------- | :---------------------------- | :------- | :------------------------------------------------------- | | `chainId` | string | No | A supported EVM chain id, chain alias, or blockchain id. | | `network` | components.GlobalParamNetwork | No | A supported network type, either mainnet or a testnet. | Example ```javascript import { Avalanche } from "@avalanche-sdk/chainkit"; const avalancheSDK = new Avalanche({ apiKey: "", chainId: "43114", // Sets chainId globally, will be used if not passed during method call. network: "mainnet", }); async function run() { const result = await avalancheSDK.data.evm.blocks.get({ blockId: "0x17533aeb5193378b9ff441d61728e7a2ebaf10f61fd5310759451627dfca2e7c", chainId: "", // Override the globally set chain id. }); // Handle the result console.log(result); } run(); ``` # Pagination (/docs/tooling/avalanche-sdk/chainkit/pagination) --- title: Pagination description: Pagination for the ChainKit SDK icon: StickyNote --- Some of the endpoints in this SDK support pagination. To use pagination, you make your SDK calls as usual, but the returned response object will also be an async iterable that can be consumed using the `for await...of` syntax.
Here's an example of one such pagination call: ```javascript import { Avalanche } from "@avalanche-sdk/chainkit"; const avalancheSDK = new Avalanche({ apiKey: "", chainId: "43114", network: "mainnet", }); async function run() { const result = await avalancheSDK.metrics.chains.list({ network: "mainnet", }); for await (const page of result) { // Handle the page console.log(page); } } run(); ``` # Retries (/docs/tooling/avalanche-sdk/chainkit/retries) --- title: Retries description: Retries for the ChainKit SDK icon: RotateCcw --- Some of the endpoints in this SDK support retries. If you use the SDK without any configuration, it will fall back to the default retry strategy provided by the API. However, the default retry strategy can be overridden on a per-operation basis, or across the entire SDK. To change the default retry strategy for a single API call, simply provide a retryConfig object to the call: ```javascript import { Avalanche } from "@avalanche-sdk/chainkit"; const avalancheSDK = new Avalanche({ apiKey: "", chainId: "43114", network: "mainnet", }); async function run() { const result = await avalancheSDK.metrics.healthCheck({ retries: { strategy: "backoff", backoff: { initialInterval: 1, maxInterval: 50, exponent: 1.1, maxElapsedTime: 100, }, retryConnectionErrors: false, }, }); // Handle the result console.log(result); } run(); ``` If you'd like to override the default retry strategy for all operations that support retries, you can provide a retryConfig at SDK initialization: ```javascript import { Avalanche } from "@avalanche-sdk/chainkit"; const avalancheSDK = new Avalanche({ retryConfig: { strategy: "backoff", backoff: { initialInterval: 1, maxInterval: 50, exponent: 1.1, maxElapsedTime: 100, }, retryConnectionErrors: false, }, apiKey: "", chainId: "43114", network: "mainnet", }); async function run() { const result = await avalancheSDK.metrics.healthCheck(); // Handle the result console.log(result); } run(); ``` # Client & Transports 
(/docs/tooling/avalanche-sdk/client/clients-transports) --- title: Client & Transports icon: Plug --- ## Overview Clients provide type-safe interfaces for interacting with Avalanche. Transports handle the communication layer. This separation lets you switch between HTTP, WebSocket, or custom providers without changing your code. The SDK is built on [viem](https://viem.sh), so you get full Ethereum compatibility plus native support for P-Chain, X-Chain, and C-Chain operations. ## Clients Clients are TypeScript interfaces that abstract RPC calls and provide type-safe APIs. ### Avalanche Client (Public Client) The read-only client for querying blockchain data across all Avalanche chains. ```typescript import { createAvalancheClient } from "@avalanche-sdk/client"; import { avalanche } from "@avalanche-sdk/client/chains"; const client = createAvalancheClient({ chain: avalanche, transport: { type: "http", }, }); // Access different chains const pChainHeight = await client.pChain.getHeight(); const cChainBalance = await client.getBalance({ address: "0x742d35Cc6634C0532925a3b8D4C9db96C4b4d8b6", }); ``` ### Avalanche Wallet Client Extends the public client with transaction signing and sending capabilities. 
```typescript import { createAvalancheWalletClient } from "@avalanche-sdk/client"; import { privateKeyToAvalancheAccount } from "@avalanche-sdk/client/accounts"; import { avalanche } from "@avalanche-sdk/client/chains"; import { avaxToWei } from "@avalanche-sdk/client/utils"; const account = privateKeyToAvalancheAccount("0x..."); const walletClient = createAvalancheWalletClient({ account, chain: avalanche, transport: { type: "http", }, }); // Send AVAX const hash = await walletClient.send({ to: "0x742d35Cc6634C0532925a3b8D4C9db96C4b4d8b6", amount: avaxToWei(0.001), }); ``` ### Chain-Specific Clients Access chain-specific operations through sub-clients: - `client.pChain` - Validator operations, staking, subnet management - `client.xChain` - Asset transfers, UTXO operations - `client.cChain` - Atomic transactions - API clients - Admin, Info, Health, Index API, Proposervm operations ## Transports Transports handle data transmission between your application and Avalanche nodes. They abstract RPC protocol implementation. ### HTTP Transport Uses standard HTTP/HTTPS connections. Most common choice for production applications. ```typescript import { createAvalancheClient } from "@avalanche-sdk/client"; import { avalanche } from "@avalanche-sdk/client/chains"; const client = createAvalancheClient({ chain: avalanche, transport: { type: "http", // Optional: specify custom URL // url: "https://api.avax.network/ext/bc/C/rpc", }, }); ``` ### WebSocket Transport Maintains a persistent connection for real-time subscriptions and event streaming. 
```typescript import { createAvalancheClient } from "@avalanche-sdk/client"; import { avalanche } from "@avalanche-sdk/client/chains"; const client = createAvalancheClient({ chain: avalanche, transport: { type: "ws", // Optional: specify custom WebSocket URL // url: "wss://api.avax.network/ext/bc/C/ws", }, }); ``` ### Custom Transport (EIP-1193) Supports custom transport implementations, including EIP-1193 providers (MetaMask, WalletConnect, etc.). ```typescript import { createAvalancheWalletClient } from "@avalanche-sdk/client"; import { avalanche } from "@avalanche-sdk/client/chains"; import "@avalanche-sdk/client/window"; // Using window.ethereum (Core, MetaMask, etc.) const walletClient = createAvalancheWalletClient({ account: account, chain: avalanche, transport: { type: "custom", provider: window.ethereum, // Or // provider: window.avalanche, }, }); ``` ## Transport Configuration ### Mainnet/Testnet ```typescript import { createAvalancheClient } from "@avalanche-sdk/client"; import { avalanche, avalancheFuji } from "@avalanche-sdk/client/chains"; // Mainnet const mainnetClient = createAvalancheClient({ chain: avalanche, transport: { type: "http" }, }); // Testnet (Fuji) const testnetClient = createAvalancheClient({ chain: avalancheFuji, transport: { type: "http" }, }); ``` ### Custom Endpoints ```typescript const client = createAvalancheClient({ chain: avalanche, transport: { type: "http", url: "https://your-custom-rpc-endpoint.com/ext/bc/C/rpc", // Optional: Add headers for authentication // fetchOptions: { // headers: { // Authorization: `Bearer ${apiKey}`, // }, // }, }, }); ``` ### Switching Transports Switch between transports without changing your application logic: ```typescript // HTTP client const httpClient = createAvalancheClient({ chain: avalanche, transport: { type: "http" }, }); // WebSocket client const wsClient = createAvalancheClient({ chain: avalanche, transport: { type: "ws" }, }); // Both have the same API const height1 = await 
httpClient.pChain.getHeight(); const height2 = await wsClient.pChain.getHeight(); ``` ## Client Selection ### Public Client vs Wallet Client | Feature | Public Client | Wallet Client | | ----------------------- | ------------- | --------------------- | | **Read Operations** | ✅ Yes | ✅ Yes (inherits all) | | **Transaction Signing** | ❌ No | ✅ Yes | | **Transaction Sending** | ❌ No | ✅ Yes | | **Account Required** | ❌ No | ✅ Yes | **Use Public Client for:** Reading blockchain data, querying balances, fetching validator info, reading smart contract state. **Use Wallet Client for:** Sending transactions, signing messages, transferring assets, interacting with smart contracts. ### Chain-Specific Clients The main client provides access to all chains. You can also create standalone chain-specific clients: ```typescript // Main client - access all chains const client = createAvalancheClient({ ... }); await client.pChain.getHeight(); await client.xChain.getBalance({ ... }); // Chain-specific client import { createPChainClient } from "@avalanche-sdk/client"; const pChainOnly = createPChainClient({ ... }); await pChainOnly.getHeight(); ``` **Use chain-specific clients when:** Your app only interacts with one chain, you want smaller bundle size, or need specialized configuration. **Use main client when:** Your app uses multiple chains or you want unified configuration. ## Next Steps - **[Avalanche Client](clients/avalanche-client)** - Read-only operations - **[Avalanche Wallet Client](clients/wallet-client)** - Transaction operations - **[Chain-Specific Clients](clients/p-chain-client)** - P-Chain, X-Chain, and C-Chain clients - **[Public Actions](methods/public-methods/p-chain)** - Read operations reference - **[Wallet Actions](methods/wallet-methods/wallet)** - Write operations reference The SDK follows the same transport patterns as [viem](https://viem.sh/docs/clients/public.html#transport) for compatibility. 
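To make the "same API over different transports" point concrete, here is a minimal, self-contained sketch. This is not the SDK's real internals (the `Transport` shape and the `httpTransport`/`wsTransport` factory names are illustrative assumptions): any object exposing a single `request` method can back a client, which is why swapping HTTP for WebSocket leaves calling code unchanged.

```typescript
// Illustrative sketch only: the SDK's actual transport layer follows viem's
// design; these names (Transport, httpTransport, wsTransport) are assumptions.
interface Transport {
  request(method: string, params?: unknown[]): Promise<unknown>;
}

// Stub HTTP transport: a real one would POST a JSON-RPC body to `url`.
function httpTransport(url: string): Transport {
  return {
    async request(method, params = []) {
      return { via: "http", url, method, params };
    },
  };
}

// Stub WebSocket transport: a real one would reuse a persistent socket.
function wsTransport(url: string): Transport {
  return {
    async request(method, params = []) {
      return { via: "ws", url, method, params };
    },
  };
}

// Client code depends only on the Transport interface, so either
// implementation can be dropped in without touching the call site.
async function getPChainHeight(transport: Transport): Promise<unknown> {
  return transport.request("platform.getHeight");
}
```

Swapping `httpTransport` for `wsTransport` at a call site is the entire migration, mirroring how `createAvalancheClient` only changes its `transport` option.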
# Getting Started (/docs/tooling/avalanche-sdk/client/getting-started) --- title: Getting Started icon: Rocket description: Get started with the Avalanche Client SDK - your gateway to building on Avalanche with TypeScript. --- ## Overview The Avalanche Client SDK provides a TypeScript interface for interacting with Avalanche. Built on [viem](https://viem.sh), it offers full Ethereum compatibility plus native support for P-Chain, X-Chain, and C-Chain operations. [https://www.npmjs.com/package/@avalanche-sdk/client](https://www.npmjs.com/package/@avalanche-sdk/client) [https://github.com/ava-labs/avalanche-sdk-typescript](https://github.com/ava-labs/avalanche-sdk-typescript) ## Installation Install the Avalanche Client SDK using your preferred package manager: ```bash npm install @avalanche-sdk/client ``` ```bash yarn add @avalanche-sdk/client ``` ```bash bun add @avalanche-sdk/client ``` ### Requirements - Node.js >= 20.0.0 - TypeScript >= 5.0.0 (recommended) - Modern browsers: Chrome 88+, Firefox 85+, Safari 14+, Edge 88+ ## Quick Start ### Public Client Create a read-only client: ```typescript import { createAvalancheClient } from "@avalanche-sdk/client"; import { avalanche } from "@avalanche-sdk/client/chains"; const client = createAvalancheClient({ chain: avalanche, transport: { type: "http" }, }); // Read operations const pChainHeight = await client.pChain.getHeight(); const balance = await client.getBalance({ address: "0xA0Cf798816D4b9b9866b5330EEa46a18382f251e", }); ``` ### Wallet Client Create a wallet client for transactions: ```typescript import { createAvalancheWalletClient } from "@avalanche-sdk/client"; import { privateKeyToAvalancheAccount } from "@avalanche-sdk/client/accounts"; import { avalanche } from "@avalanche-sdk/client/chains"; import { avaxToWei } from "@avalanche-sdk/client/utils"; const account = privateKeyToAvalancheAccount("0x..."); const walletClient = createAvalancheWalletClient({ account, chain: avalanche, transport: { type: "http" }, 
}); // Send AVAX const hash = await walletClient.send({ to: "0x742d35Cc6634C0532925a3b8D4C9db96C4b4d8b6", amount: avaxToWei(0.001), }); ``` ## Account Creation ### Private Key ```typescript import { privateKeyToAvalancheAccount } from "@avalanche-sdk/client/accounts"; const account = privateKeyToAvalancheAccount("0x..."); ``` ### Mnemonic ```typescript import { mnemonicsToAvalancheAccount } from "@avalanche-sdk/client/accounts"; const account = mnemonicsToAvalancheAccount("test test test..."); ``` ### HD Key ```typescript import { hdKeyToAvalancheAccount, HDKey } from "@avalanche-sdk/client/accounts"; const hdKey = HDKey.fromMasterSeed(seed); const account = hdKeyToAvalancheAccount(hdKey); ``` [Learn more about accounts →](accounts) ## Transport Configuration ### HTTP ```typescript const client = createAvalancheClient({ chain: avalanche, transport: { type: "http" }, }); ``` ### WebSocket ```typescript const client = createAvalancheClient({ chain: avalanche, transport: { type: "ws" }, }); ``` ### Custom Endpoint ```typescript const client = createAvalancheClient({ chain: avalanche, transport: { type: "http", url: "https://your-custom-rpc-endpoint.com/ext/bc/C/rpc", }, }); ``` [Learn more about transports →](clients-transports) ## Avalanche Chains Avalanche has three primary chains: - **P-Chain (Platform Chain)**: Validators, staking, subnets - **X-Chain (Exchange Chain)**: Asset transfers (UTXO model) - **C-Chain (Contract Chain)**: Smart contracts (Ethereum-compatible) ```typescript // P-Chain const validators = await client.pChain.getCurrentValidators({}); // X-Chain const balance = await client.xChain.getBalance({ address: "X-avax1example...", assetID: "AVAX", }); // C-Chain const balance = await client.getBalance({ address: "0x..." 
});
```

## Using viem Features

The SDK is built on viem, so you have access to all viem functionality:

```typescript
import { formatEther, parseEther } from "@avalanche-sdk/client/utils";

const valueInWei = parseEther("1.0");
const valueInAvax = formatEther(1000000000000000000n);

const receipt = await walletClient.waitForTransactionReceipt({ hash });
```

See the [viem documentation](https://viem.sh/docs/getting-started) for more utilities.

## Next Steps

- **[Clients & Transports](clients-transports)** - Understanding clients and transports
- **[Account Management](accounts)** - Creating and managing accounts
- **[Wallet Operations](methods/wallet-methods/wallet)** - Sending transactions
- **[P-Chain Operations](methods/public-methods/p-chain)** - Validators and staking
- **[X-Chain Operations](methods/public-methods/x-chain)** - Asset transfers
- **[C-Chain Operations](methods/public-methods/c-chain)** - EVM operations

## Need Help?

- [Discord community](https://discord.gg/avax)
- [GitHub examples](https://github.com/ava-labs/avalanche-sdk-typescript/tree/main/client/examples)
- [Open an issue](https://github.com/ava-labs/avalanche-sdk-typescript/issues)

# Getting Started (/docs/tooling/avalanche-sdk/interchain/getting-started)
---
title: Getting Started
description: Install and configure the Interchain SDK
icon: rocket
---

## Installation

```bash
npm install @avalanche-sdk/interchain @avalanche-sdk/client
```

```bash
pnpm add @avalanche-sdk/interchain @avalanche-sdk/client
```

```bash
yarn add @avalanche-sdk/interchain @avalanche-sdk/client
```

```bash
bun add @avalanche-sdk/interchain @avalanche-sdk/client
```

## Setup

### 1.
Create Wallet Client ```typescript import { createAvalancheWalletClient } from "@avalanche-sdk/client"; import { avalancheFuji } from "@avalanche-sdk/client/chains"; import { privateKeyToAvalancheAccount } from "@avalanche-sdk/client/accounts"; const account = privateKeyToAvalancheAccount("0x..."); const wallet = createAvalancheWalletClient({ account, chain: avalancheFuji, transport: { type: "http" }, }); ``` ### 2. Initialize ICM Client ```typescript import { createICMClient } from "@avalanche-sdk/interchain"; import { avalancheFuji, dispatch } from "@avalanche-sdk/interchain/chains"; const icm = createICMClient(wallet, avalancheFuji, dispatch); ``` ## Send Your First Message ```typescript async function sendMessage() { const hash = await icm.sendMsg({ sourceChain: avalancheFuji, destinationChain: dispatch, message: "Hello from Avalanche!", }); console.log("Message sent:", hash); } ``` ## Send Your First Token Transfer ```typescript import { createICTTClient } from "@avalanche-sdk/interchain"; const ictt = createICTTClient(avalancheFuji, dispatch); // Deploy token and contracts (one-time setup) const { contractAddress: tokenAddress } = await ictt.deployERC20Token({ walletClient: wallet, sourceChain: avalancheFuji, name: "My Token", symbol: "MTK", initialSupply: 1000000, }); // Send tokens const { txHash } = await ictt.sendToken({ walletClient: wallet, sourceChain: avalancheFuji, destinationChain: dispatch, tokenHomeContract: "0x...", tokenRemoteContract: "0x...", recipient: "0x...", amountInBaseUnit: 100, }); ``` ## Next Steps - Learn about [Interchain Messaging](/avalanche-sdk/interchain/icm) - Explore [Token Transfers](/avalanche-sdk/interchain/ictt) - Understand [Warp Messages](/avalanche-sdk/interchain/warp) # Interchain SDK (/docs/tooling/avalanche-sdk/interchain) --- title: Interchain SDK icon: network description: Send cross-chain messages and transfer tokens between Avalanche chains and subnets --- ## Overview The Interchain SDK enables cross-chain 
communication on Avalanche. Send messages and transfer ERC20 tokens between Avalanche C-Chain and subnets using the Teleporter protocol.

**Key Features:**

- **Interchain Messaging (ICM)** - Send arbitrary messages across chains
- **Interchain Token Transfers (ICTT)** - Transfer ERC20 tokens between chains
- **Warp Messages** - Parse and build Warp protocol messages
- **Type-safe** - Full TypeScript support with IntelliSense

## Quick Start

```typescript
import { createAvalancheWalletClient } from "@avalanche-sdk/client";
import { privateKeyToAvalancheAccount } from "@avalanche-sdk/client/accounts";
import { createICMClient } from "@avalanche-sdk/interchain";
import { avalancheFuji, dispatch } from "@avalanche-sdk/interchain/chains";

// Setup wallet
const account = privateKeyToAvalancheAccount("0x...");
const wallet = createAvalancheWalletClient({
  account,
  chain: avalancheFuji,
  transport: { type: "http" },
});

// Create ICM client
const icm = createICMClient(wallet);

// Send message
const hash = await icm.sendMsg({
  sourceChain: avalancheFuji,
  destinationChain: dispatch,
  message: "Hello from Avalanche!",
});
```

# Testing Cross-Chain Messaging (/docs/tooling/tmpnet/guides/cross-chain-messaging)
---
title: Testing Cross-Chain Messaging
description: Test Teleporter cross-chain messaging between Avalanche L1s
---

This guide shows how to test cross-chain messaging using Teleporter, Avalanche's native cross-chain communication protocol. Learn the complete flow from sending messages to relaying and verifying delivery.

## Overview

Teleporter enables L1s to communicate by:

1. **Sending** a message on the source chain
2. **Aggregating** signatures from validators via Warp
3. **Relaying** the signed message to the destination chain
4.
**Verifying** delivery and execution

## Prerequisites

- Complete [Getting Started](/docs/tooling/tmpnet/guides/getting-started)
- Have a network with two L1s configured
- Understand Warp message basics

## Basic Send and Receive Flow

### Complete Test Example

```go title="teleporter_test.go"
package teleporter_test

import (
	"context"
	"crypto/ecdsa"
	"flag"
	"math/big"
	"os"
	"testing"
	"time"

	"github.com/ava-labs/avalanchego/ids"
	"github.com/ava-labs/avalanchego/tests/fixture/e2e"
	"github.com/ava-labs/avalanchego/tests/fixture/tmpnet"
	teleportermessenger "github.com/ava-labs/teleporter/abi-bindings/go/teleporter/TeleporterMessenger"
	"github.com/ethereum/go-ethereum/accounts/abi/bind"
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/crypto"
	"github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

var (
	network           *tmpnet.Network
	e2eFlags          *e2e.FlagVars
	l1A, l1B          L1TestInfo
	teleporterAddress common.Address
	fundedKey         *ecdsa.PrivateKey
)

func TestMain(m *testing.M) {
	e2eFlags = e2e.RegisterFlags()
	flag.Parse()
	os.Exit(m.Run())
}

func TestTeleporter(t *testing.T) {
	if os.Getenv("RUN_E2E") == "" {
		t.Skip("RUN_E2E not set")
	}
	RegisterFailHandler(ginkgo.Fail)
	ginkgo.RunSpecs(t, "Teleporter Test Suite")
}

var _ = ginkgo.BeforeSuite(func() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
	defer cancel()

	// Create network with two L1s and Teleporter pre-deployed
	network, l1A, l1B, teleporterAddress = createNetworkWithTeleporter(ctx)
	// tmpnet pre-funded keys are secp256k1; convert for use with go-ethereum
	fundedKey = network.PreFundedKeys[0].ToECDSA()
})

var _ = ginkgo.AfterSuite(func() {
	if network != nil {
		network.Stop(context.Background())
	}
})

var _ = ginkgo.Describe("[Teleporter Messaging]", func() {
	ginkgo.It("should send and receive message", ginkgo.Label("teleporter", "basic"), func() {
		ctx := context.Background()
		fundedAddress := crypto.PubkeyToAddress(fundedKey.PublicKey)

		// Create signature aggregator
		aggregator := NewSignatureAggregator(
			l1A.NodeURIs[0],
			[]ids.ID{l1A.SubnetID, l1B.SubnetID},
		)
		defer aggregator.Shutdown()

		// 1.
Send cross-chain message messageID, receipt := sendCrossChainMessage( ctx, l1A, l1B, teleporterAddress, fundedAddress, []byte("Hello from Chain A!"), fundedKey, ) Expect(receipt.Status).To(Equal(uint64(1))) // 2. Relay message to destination deliveryReceipt := relayMessage( ctx, receipt, l1A, l1B, teleporterAddress, aggregator, fundedKey, ) Expect(deliveryReceipt.Status).To(Equal(uint64(1))) // 3. Verify message was received teleporter := getTeleporterContract(l1B, teleporterAddress) delivered, err := teleporter.MessageReceived( &bind.CallOpts{}, messageID, ) Expect(err).NotTo(HaveOccurred()) Expect(delivered).To(BeTrue()) }) }) ``` ## Step-by-Step Implementation ### 1. Sending a Message ```go func sendCrossChainMessage( ctx context.Context, source L1TestInfo, destination L1TestInfo, teleporterAddress common.Address, recipientAddress common.Address, message []byte, senderKey *ecdsa.PrivateKey, ) (common.Hash, *types.Receipt) { // Get Teleporter contract teleporter := getTeleporterContract(source, teleporterAddress) // Prepare message input input := teleportermessenger.TeleporterMessageInput{ DestinationBlockchainID: destination.BlockchainID, DestinationAddress: recipientAddress, FeeInfo: teleportermessenger.TeleporterFeeInfo{ FeeTokenAddress: common.Address{}, // No fee for this example Amount: big.NewInt(0), }, RequiredGasLimit: big.NewInt(100000), AllowedRelayerAddresses: []common.Address{}, // Any relayer allowed Message: message, } // Send transaction opts, err := bind.NewKeyedTransactorWithChainID(senderKey, source.EVMChainID) Expect(err).NotTo(HaveOccurred()) tx, err := teleporter.SendCrossChainMessage(opts, input) Expect(err).NotTo(HaveOccurred()) // Wait for transaction success receipt := waitForSuccess(ctx, source, tx.Hash()) // Extract message ID from logs messageID := extractMessageIDFromReceipt(receipt, teleporter) return messageID, receipt } ``` ### 2. 
Constructing Warp Message

```go
func constructWarpMessage(
	ctx context.Context,
	source L1TestInfo,
	destination L1TestInfo,
	receipt *types.Receipt,
	aggregator *SignatureAggregator,
) *avalancheWarp.Message {
	// Extract unsigned Warp message from logs
	unsignedMessage := extractWarpMessageFromLogs(ctx, receipt, source)

	// Wait for all validators to accept the block
	waitForAllValidatorsToAcceptBlock(
		ctx,
		source.NodeURIs,
		source.BlockchainID,
		receipt.BlockNumber.Uint64(),
	)

	// Get signed message from aggregator
	signedMessage, err := aggregator.CreateSignedMessage(
		unsignedMessage,
		nil, // No justification needed for Teleporter
		source.SubnetID,
		67, // 67% quorum (warp.WarpDefaultQuorumNumerator)
	)
	Expect(err).NotTo(HaveOccurred())

	return signedMessage
}
```

### 3. Relaying the Message

```go
func relayMessage(
	ctx context.Context,
	sourceReceipt *types.Receipt,
	source L1TestInfo,
	destination L1TestInfo,
	teleporterAddress common.Address,
	aggregator *SignatureAggregator,
	relayerKey *ecdsa.PrivateKey,
) *types.Receipt {
	// Construct signed Warp message
	signedMessage := constructWarpMessage(
		ctx, source, destination, sourceReceipt, aggregator,
	)

	// Get Teleporter contract on destination
	teleporter := getTeleporterContract(destination, teleporterAddress)

	// Derive the relayer reward address from the relayer's key
	relayerAddress := crypto.PubkeyToAddress(relayerKey.PublicKey)

	// Create predicate transaction (includes Warp message in access list)
	tx := createPredicateTx(
		ctx,
		destination,
		teleporterAddress,
		signedMessage,
		relayerKey,
		func(opts *bind.TransactOpts) (*types.Transaction, error) {
			return teleporter.ReceiveCrossChainMessage(opts, 0, relayerAddress)
		},
	)

	// Send and wait for success
	err := destination.RPCClient.SendTransaction(ctx, tx)
	Expect(err).NotTo(HaveOccurred())

	receipt := waitForSuccess(ctx, destination, tx.Hash())
	return receipt
}
```

## Message Fees and Relayer Rewards

### Sending with Fees

```go
ginkgo.It("should handle message fees", ginkgo.Label("teleporter", "fees"), func() {
	ctx := context.Background()

	// Deploy ERC20 token for fees
	feeTokenAddress, feeToken :=
deployERC20Token( ctx, l1A, fundedKey, "FeeToken", "FEE", ) // Approve Teleporter to spend tokens approveERC20( ctx, l1A, feeToken, teleporterAddress, big.NewInt(1e18), fundedKey, ) // Send message with fee feeAmount := big.NewInt(1000) input := teleportermessenger.TeleporterMessageInput{ DestinationBlockchainID: l1B.BlockchainID, DestinationAddress: recipientAddress, FeeInfo: teleportermessenger.TeleporterFeeInfo{ FeeTokenAddress: feeTokenAddress, Amount: feeAmount, }, RequiredGasLimit: big.NewInt(100000), Message: []byte("paid message"), } messageID, receipt := sendMessage(ctx, l1A, teleporterAddress, input, fundedKey) // Relay and collect fee relayReceipt := relayMessage(ctx, receipt, l1A, l1B, teleporterAddress, aggregator, relayerKey) // Verify relayer received fee verifyRelayerReward(ctx, l1B, teleporterAddress, relayerAddress, feeTokenAddress, feeAmount) }) ``` ### Redeeming Relayer Rewards ```go func redeemRelayerRewards( ctx context.Context, l1 L1TestInfo, teleporterAddress common.Address, feeTokenAddress common.Address, relayerKey *ecdsa.PrivateKey, ) { teleporter := getTeleporterContract(l1, teleporterAddress) relayerAddress := crypto.PubkeyToAddress(relayerKey.PublicKey) // Check pending rewards pendingReward, err := teleporter.CheckRelayerRewardAmount( &bind.CallOpts{}, relayerAddress, feeTokenAddress, ) Expect(err).NotTo(HaveOccurred()) Expect(pendingReward.Uint64()).To(BeNumerically(">", 0)) // Redeem rewards opts, _ := bind.NewKeyedTransactorWithChainID(relayerKey, l1.EVMChainID) tx, err := teleporter.RedeemRelayerRewards(opts, feeTokenAddress) Expect(err).NotTo(HaveOccurred()) receipt := waitForSuccess(ctx, l1, tx.Hash()) // Verify rewards received feeToken := getERC20Contract(l1, feeTokenAddress) balance, err := feeToken.BalanceOf(&bind.CallOpts{}, relayerAddress) Expect(err).NotTo(HaveOccurred()) Expect(balance).To(Equal(pendingReward)) } ``` ## Advanced Patterns ### Adding Fees to Existing Messages ```go ginkgo.It("should add fee to message", 
ginkgo.Label("teleporter", "add-fee"), func() { ctx := context.Background() // Send message with low initial fee messageID, _ := sendMessage(ctx, l1A, teleporterAddress, input, fundedKey) // Add additional fee additionalFee := big.NewInt(500) approveERC20(ctx, l1A, feeToken, teleporterAddress, additionalFee, fundedKey) teleporter := getTeleporterContract(l1A, teleporterAddress) opts, _ := bind.NewKeyedTransactorWithChainID(fundedKey, l1A.EVMChainID) tx, err := teleporter.AddFeeAmount( opts, messageID, teleportermessenger.TeleporterFeeInfo{ FeeTokenAddress: feeTokenAddress, Amount: additionalFee, }, ) Expect(err).NotTo(HaveOccurred()) waitForSuccess(ctx, l1A, tx.Hash()) }) ``` ### Sending Specific Receipts ```go ginkgo.It("should send specific receipts", ginkgo.Label("teleporter", "receipts"), func() { ctx := context.Background() // Send messages A->B messageIDs := []common.Hash{ sendSimpleMessage(ctx, l1A, l1B, "msg1"), sendSimpleMessage(ctx, l1A, l1B, "msg2"), sendSimpleMessage(ctx, l1A, l1B, "msg3"), } // Relay all messages for _, msgID := range messageIDs { relayMessageByID(ctx, l1A, l1B, msgID, aggregator, fundedKey) } // Send receipts back B->A teleporter := getTeleporterContract(l1B, teleporterAddress) opts, _ := bind.NewKeyedTransactorWithChainID(fundedKey, l1B.EVMChainID) tx, err := teleporter.SendSpecifiedReceipts( opts, l1A.BlockchainID, messageIDs, teleportermessenger.TeleporterFeeInfo{ FeeTokenAddress: common.Address{}, Amount: big.NewInt(0), }, []common.Address{}, ) Expect(err).NotTo(HaveOccurred()) receiptTx := waitForSuccess(ctx, l1B, tx.Hash()) // Relay receipt message to A relayMessage(ctx, receiptTx, l1B, l1A, teleporterAddress, aggregator, fundedKey) }) ``` ### Retrying Failed Execution ```go ginkgo.It("should retry failed message execution", ginkgo.Label("teleporter", "retry"), func() { ctx := context.Background() // Send message that will fail (insufficient gas) input := teleportermessenger.TeleporterMessageInput{ DestinationBlockchainID: 
l1B.BlockchainID, DestinationAddress: contractAddress, RequiredGasLimit: big.NewInt(10), // Too low Message: callData, } messageID, receipt := sendMessage(ctx, l1A, teleporterAddress, input, fundedKey) // Relay - will fail but message is delivered relayMessage(ctx, receipt, l1A, l1B, teleporterAddress, aggregator, fundedKey) // Verify message not executed teleporter := getTeleporterContract(l1B, teleporterAddress) executed, _ := teleporter.MessageReceived(&bind.CallOpts{}, messageID) Expect(executed).To(BeFalse()) // Retry with more gas opts, _ := bind.NewKeyedTransactorWithChainID(fundedKey, l1B.EVMChainID) opts.GasLimit = 500000 tx, err := teleporter.RetryMessageExecution( opts, l1A.BlockchainID, teleportermessenger.TeleporterMessage{ MessageID: messageID, DestinationBlockchainID: l1B.BlockchainID, DestinationAddress: contractAddress, RequiredGasLimit: big.NewInt(10), Message: callData, }, ) Expect(err).NotTo(HaveOccurred()) waitForSuccess(ctx, l1B, tx.Hash()) // Verify now executed executed, _ = teleporter.MessageReceived(&bind.CallOpts{}, messageID) Expect(executed).To(BeTrue()) }) ``` ## Signature Aggregation ### Setting Up Aggregator ```go type SignatureAggregator struct { client *aggregator.Client subnetIDs []ids.ID } func NewSignatureAggregator( nodeURI string, subnetIDs []ids.ID, ) *SignatureAggregator { client, err := aggregator.NewSignatureAggregatorClient(nodeURI) Expect(err).NotTo(HaveOccurred()) return &SignatureAggregator{ client: client, subnetIDs: subnetIDs, } } func (a *SignatureAggregator) CreateSignedMessage( unsignedMessage *avalancheWarp.UnsignedMessage, justification []byte, subnetID ids.ID, quorumNum uint64, ) (*avalancheWarp.Message, error) { signedMessage, err := a.client.CreateSignedMessage( unsignedMessage, justification, subnetID, quorumNum, ) return signedMessage, err } func (a *SignatureAggregator) Shutdown() { // Clean up aggregator resources } ``` ## Testing Error Scenarios ### Deliver to Wrong Chain ```go ginkgo.It("should reject 
wrong chain delivery", ginkgo.Label("teleporter", "error"), func() {
	ctx := context.Background()

	// Send message A -> B
	_, receipt := sendMessage(ctx, l1A, l1B, message, fundedKey)

	// Try to deliver on wrong chain (A instead of B)
	signedMessage := constructWarpMessage(ctx, l1A, l1A, receipt, aggregator)

	// This should fail
	tx := createPredicateTx(ctx, l1A, teleporterAddress, signedMessage, fundedKey /*...*/)
	err := l1A.RPCClient.SendTransaction(ctx, tx)
	Expect(err).NotTo(HaveOccurred())

	// Expect transaction to fail
	receipt = waitForFailure(ctx, l1A, tx.Hash())
	Expect(receipt.Status).To(Equal(uint64(0)))
})
```

### Insufficient Gas

```go
ginkgo.It("should handle insufficient gas", ginkgo.Label("teleporter", "error"), func() {
	ctx := context.Background()

	// Message with required gas of 100k
	input := teleportermessenger.TeleporterMessageInput{
		RequiredGasLimit: big.NewInt(100000),
		// ... other fields
	}
	messageID, receipt := sendMessage(ctx, l1A, teleporterAddress, input, fundedKey)

	// Construct the signed Warp message for relaying
	signedMessage := constructWarpMessage(ctx, l1A, l1B, receipt, aggregator)

	// Relay with insufficient gas (50k)
	tx := createPredicateTxWithGasLimit(
		ctx, l1B, teleporterAddress, signedMessage, 50000, relayerKey,
	)

	// Transaction succeeds but message execution fails
	receipt = waitForSuccess(ctx, l1B, tx.Hash())

	// Message not marked as received
	teleporter := getTeleporterContract(l1B, teleporterAddress)
	received, _ := teleporter.MessageReceived(&bind.CallOpts{}, messageID)
	Expect(received).To(BeFalse())
})
```

## Best Practices

1. **Always use signature aggregator**: Don't manually collect signatures
2. **Wait for block acceptance**: Ensure validators have seen the block before aggregating
3. **Handle async operations**: Use `Eventually` for checking message delivery
4. **Test error cases**: Verify wrong chain, insufficient gas, etc.
5. **Clean up aggregator**: Always defer `aggregator.Shutdown()`
6.
**Use appropriate timeouts**: Cross-chain operations can be slow

## Common Patterns

### Wait for Message Delivery

```go
Eventually(func() bool {
	delivered, _ := teleporter.MessageReceived(&bind.CallOpts{}, messageID)
	return delivered
}, 30*time.Second, 500*time.Millisecond).Should(BeTrue())
```

### Verify Receipt Received

```go
func verifyReceiptReceived(
	ctx context.Context,
	l1 L1TestInfo,
	teleporterAddress common.Address,
	messageID common.Hash,
) {
	teleporter := getTeleporterContract(l1, teleporterAddress)
	received, err := teleporter.ReceiptReceived(&bind.CallOpts{}, messageID)
	Expect(err).NotTo(HaveOccurred())
	Expect(received).To(BeTrue())
}
```

## Next Steps

- Convert subnets to L1s with validator managers
- Deep dive into Warp message construction
- Helper functions for transactions and events

## Additional Resources

- [Teleporter Documentation](https://github.com/ava-labs/teleporter)
- [ICM Services Tests](https://github.com/ava-labs/icm-services/tree/main/tests)
- [Warp Messaging](https://docs.avax.network/cross-chain)

# Getting Started with tmpnet Testing (/docs/tooling/tmpnet/guides/getting-started)
---
title: Getting Started with tmpnet Testing
description: Set up your first Ginkgo test suite with tmpnet for testing Avalanche L1s
---

This guide shows you how to set up a Ginkgo test suite with tmpnet for testing Avalanche L1s. All testing with tmpnet should use Ginkgo for consistency and best practices.

## Prerequisites

Install required packages:

```bash
go get github.com/onsi/ginkgo/v2
go get github.com/onsi/gomega
go get github.com/ava-labs/avalanchego/tests/fixture/tmpnet
go get github.com/ava-labs/avalanchego/tests/fixture/e2e
```

## Basic Test Suite Structure

### 1.
Create Test Suite File Create a test file with the standard Ginkgo setup: ```go title="my_test.go" package mypackage_test import ( "context" "flag" "os" "testing" "time" "github.com/ava-labs/avalanchego/tests/fixture/e2e" "github.com/ava-labs/avalanchego/tests/fixture/tmpnet" "github.com/ava-labs/avalanchego/utils/logging" "github.com/onsi/ginkgo/v2" . "github.com/onsi/gomega" ) var ( network *tmpnet.Network e2eFlags *e2e.FlagVars ) // TestMain registers flags and runs tests func TestMain(m *testing.M) { e2eFlags = e2e.RegisterFlags() flag.Parse() os.Exit(m.Run()) } // Test entry point func TestE2E(t *testing.T) { if os.Getenv("RUN_E2E") == "" { t.Skip("Environment variable RUN_E2E not set; skipping E2E tests") } RegisterFailHandler(ginkgo.Fail) ginkgo.RunSpecs(t, "My E2E Test Suite") } ``` **Key Components:** - `TestMain`: Registers e2e flags for network reuse - `RUN_E2E` check: Gates tests so they only run when explicitly requested - `RegisterFailHandler`: Integrates Gomega assertions with Ginkgo - `RunSpecs`: Ginkgo test entry point ### 2. 
Network Lifecycle with BeforeSuite/AfterSuite Create a network once for all tests: ```go var _ = ginkgo.BeforeSuite(func() { // Create network context with timeout ctx, cancel := context.WithTimeout( context.Background(), 5*time.Minute, ) defer cancel() runtimeCfg, err := e2eFlags.NodeRuntimeConfig() // validates AVALANCHEGO_PATH/AVAGO_PLUGIN_DIR or CLI flags Expect(err).NotTo(HaveOccurred()) // Create network configuration network = &tmpnet.Network{ Owner: "my-test-network", Nodes: tmpnet.NewNodesOrPanic(5), DefaultRuntimeConfig: *runtimeCfg, DefaultFlags: tmpnet.FlagsMap{ "log-level": "info", "network-max-reconnect-delay": "1s", }, } // Bootstrap network err = tmpnet.BootstrapNewNetwork( ctx, logging.NoLog{}, network, e2eFlags.RootNetworkDir(), // empty string uses default ~/.tmpnet/networks ) Expect(err).NotTo(HaveOccurred()) }) var _ = ginkgo.AfterSuite(func() { if network != nil { Expect(network.Stop(context.Background())).To(Succeed()) } }) ``` **Important Notes:** - `BeforeSuite` runs once before all tests in the suite - `AfterSuite` ensures cleanup even if tests fail - Use generous timeouts for network bootstrap (5+ minutes) - `e2eFlags` enables network reuse (explained below) ### 3. Write Your First Test ```go var _ = ginkgo.Describe("[Basic Tests]", func() { ginkgo.It("should have healthy nodes", ginkgo.Label("smoke"), func() { Expect(network).NotTo(BeNil()) Expect(network.Nodes).To(HaveLen(5)) for _, node := range network.Nodes { Expect(node.IsHealthy()).To(BeTrue()) } }) ginkgo.It("should have valid URIs", ginkgo.Label("smoke"), func() { for _, node := range network.Nodes { Expect(node.URI).NotTo(BeEmpty()) } }) }) ``` ## Running Tests ### Basic Execution ```bash # Run E2E tests RUN_E2E=1 go test -v # Run with Ginkgo directly RUN_E2E=1 ginkgo -v # Run specific test suite RUN_E2E=1 ginkgo -v ./tests/my-suite/ ``` ### Filter by Label ```bash # Run only smoke tests RUN_E2E=1 ginkgo --label-filter="smoke" ./... 
# Run all except slow tests RUN_E2E=1 ginkgo --label-filter="!slow" ./... ``` ### Filter by Name ```bash # Run specific test RUN_E2E=1 ginkgo --focus="should have healthy nodes" ./... # Skip specific test RUN_E2E=1 ginkgo --skip="flaky test" ./... ``` ## Network Reuse for Faster Iteration Network bootstrap is slow (2-5 minutes). Reuse networks across test runs: ### First Run: Create Network ```bash RUN_E2E=1 ginkgo -v # Creates network in ~/.tmpnet/networks/[timestamp] ``` The network directory will be printed: ``` Network created at: /home/user/.tmpnet/networks/20250312-143052.123456 ``` ### Subsequent Runs: Reuse Network ```bash # Reuse existing network (skips bootstrap) RUN_E2E=1 ginkgo -v -- --reuse-network --network-dir=/home/user/.tmpnet/networks/20250312-143052.123456 # Or export TMPNET_NETWORK_DIR and pass --reuse-network export TMPNET_NETWORK_DIR=/home/user/.tmpnet/networks/20250312-143052.123456 RUN_E2E=1 ginkgo -v -- --reuse-network ``` ### Stop Existing Networks ```bash tmpnetctl stop-network --network-dir=/home/user/.tmpnet/networks/20250312-143052.123456 ``` ## Testing with Multiple L1s Most tests need multiple L1s (chains) for cross-chain scenarios. Here's the standard two-L1 setup: ### Complete Example ```go title="two_l1s_test.go" package mypackage_test import ( "context" "flag" "math/big" "os" "testing" "time" "github.com/ava-labs/avalanchego/ids" "github.com/ava-labs/avalanchego/tests/fixture/e2e" "github.com/ava-labs/avalanchego/tests/fixture/tmpnet" "github.com/ava-labs/avalanchego/utils/logging" "github.com/ethereum/go-ethereum/crypto" "github.com/ethereum/go-ethereum/ethclient" "github.com/onsi/ginkgo/v2" . 
"github.com/onsi/gomega"
)

// L1TestInfo holds information about an L1
type L1TestInfo struct {
	SubnetID     ids.ID
	BlockchainID ids.ID
	NodeURIs     []string
	RPCClient    *ethclient.Client
	EVMChainID   *big.Int
	Name         string
}

var (
	network  *tmpnet.Network
	e2eFlags *e2e.FlagVars
	l1A      L1TestInfo
	l1B      L1TestInfo
)

func TestMain(m *testing.M) {
	e2eFlags = e2e.RegisterFlags()
	flag.Parse()
	os.Exit(m.Run())
}

func TestTwoL1s(t *testing.T) {
	if os.Getenv("RUN_E2E") == "" {
		t.Skip("RUN_E2E not set")
	}
	RegisterFailHandler(ginkgo.Fail)
	ginkgo.RunSpecs(t, "Two L1s Test Suite")
}

var _ = ginkgo.BeforeSuite(func() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
	defer cancel()

	runtimeCfg, err := e2eFlags.NodeRuntimeConfig()
	Expect(err).NotTo(HaveOccurred())

	nodes := tmpnet.NewNodesOrPanic(4)

	// Create network with 2 subnets (helpers like utils.NewTmpnetSubnet set genesis/config)
	network = &tmpnet.Network{
		Owner: "two-l1s-test",
		Nodes: nodes,
		Subnets: []*tmpnet.Subnet{
			{
				Name:         "L1-A",
				ValidatorIDs: tmpnet.NodesToIDs(nodes[:2]),
				Chains: []*tmpnet.Chain{{
					VMID:   constants.EVMID,
					Config: `{"log-level": "info"}`,
				}},
			},
			{
				Name:         "L1-B",
				ValidatorIDs: tmpnet.NodesToIDs(nodes[2:]),
				Chains: []*tmpnet.Chain{{
					VMID:   constants.EVMID,
					Config: `{"log-level": "info"}`,
				}},
			},
		},
		DefaultRuntimeConfig: *runtimeCfg,
	}

	err = tmpnet.BootstrapNewNetwork(
		ctx,
		logging.NoLog{},
		network,
		e2eFlags.RootNetworkDir(),
	)
	Expect(err).NotTo(HaveOccurred())

	// Set up L1 info
	l1A = getL1Info(network.Subnets[0], "L1-A")
	l1B = getL1Info(network.Subnets[1], "L1-B")
})

var _ = ginkgo.AfterSuite(func() {
	if network != nil {
		network.Stop(context.Background())
	}
})

// For runnable code, supply real genesis bytes/config for each chain and the
// package defining constants.EVMID (see
// tests/contracts/lib/icm-contracts/lib/subnet-evm/tests/utils/tmpnet.go in
// icm-services for a working helper).
// Helper to extract L1 info
func getL1Info(subnet *tmpnet.Subnet, name string) L1TestInfo {
	chain := subnet.Chains[0]

	rpcClient, err := ethclient.Dial(chain.Nodes[0].URI + "/ext/bc/" + chain.ChainID.String() + "/rpc")
	Expect(err).NotTo(HaveOccurred())

	evmChainID, err := rpcClient.ChainID(context.Background())
	Expect(err).NotTo(HaveOccurred())

	var nodeURIs []string
	for _, node := range chain.Nodes {
		nodeURIs = append(nodeURIs, node.URI)
	}

	return L1TestInfo{
		SubnetID:     subnet.SubnetID,
		BlockchainID: chain.ChainID,
		NodeURIs:     nodeURIs,
		RPCClient:    rpcClient,
		EVMChainID:   evmChainID,
		Name:         name,
	}
}

var _ = ginkgo.Describe("[Two L1 Tests]", func() {
	ginkgo.It("should have two L1s configured", func() {
		Expect(l1A.Name).To(Equal("L1-A"))
		Expect(l1B.Name).To(Equal("L1-B"))
		// Chain IDs assume the genesis you supplied for each L1
		Expect(l1A.EVMChainID.Uint64()).To(Equal(uint64(12345)))
		Expect(l1B.EVMChainID.Uint64()).To(Equal(uint64(54321)))
	})
})
```

## Using Pre-funded Keys

Every tmpnet network has pre-funded keys for transactions:

```go
var _ = ginkgo.Describe("[Funded Keys]", func() {
	ginkgo.It("should have pre-funded key", func() {
		// Get first pre-funded key
		key := network.PreFundedKeys[0]
		ecdsaKey := key.ToECDSA()

		// Get address
		address := crypto.PubkeyToAddress(ecdsaKey.PublicKey)

		// Check balance on L1-A (the key must be funded in that L1's genesis)
		balance, err := l1A.RPCClient.BalanceAt(
			context.Background(),
			address,
			nil,
		)
		Expect(err).NotTo(HaveOccurred())
		// Use Sign() rather than Uint64() to avoid overflow on large balances
		Expect(balance.Sign()).To(Equal(1))
	})
})
```

## Best Practices

### 1. Use BeforeSuite for Expensive Setup

```go
// Good: Share network across tests
var _ = ginkgo.BeforeSuite(func() {
	network = createNetwork()
})

// Avoid: Creating network per test (very slow)
var _ = ginkgo.BeforeEach(func() {
	network = createNetwork() // Don't do this!
})
```

### 2.
Use Appropriate Timeouts

```go
// Network bootstrap: 5-10 minutes
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)

// Individual operations: 30-60 seconds
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
```

### 3. Always Clean Up

```go
var _ = ginkgo.AfterSuite(func() {
	if network != nil {
		Expect(network.Stop(context.Background())).To(Succeed())
	}
})
```

### 4. Use Labels for Organization

```go
ginkgo.It("test name", ginkgo.Label("smoke", "fast"), func() {
	// Test code
})
```

### 5. Use Eventually for Async Operations

```go
// Good: Poll with Eventually
Eventually(func() bool {
	healthy, _ := node.IsHealthy(context.Background())
	return healthy
}, 30*time.Second, 500*time.Millisecond).Should(BeTrue())

// Avoid: Fixed sleep
time.Sleep(10 * time.Second) // Don't do this!
```

## Debugging Tests

### Enable Verbose Logging

```bash
RUN_E2E=1 ginkgo -v -trace ./...
```

### Print Network Directory

Add to your BeforeSuite:

```go
var _ = ginkgo.BeforeSuite(func() {
	// ... create network ...
	ginkgo.GinkgoWriter.Printf("Network directory: %s\n", network.Dir)
})
```

### Keep Network After Failure

If a test fails, the network is left running so you can inspect its logs; stop it manually once you're done debugging:

```bash
# Run test
RUN_E2E=1 ginkgo -v

# If the test fails, the network stays running
# Inspect logs in the network directory

# Stop when done debugging
tmpnetctl stop-network --network-dir=/path/to/network
```

## Common Patterns

### Get Node URIs

```go
uris := network.GetNodeURIs()
for _, uri := range uris {
	ginkgo.GinkgoWriter.Printf("Node: %s\n", uri)
}
```

### Check All Nodes Healthy

```go
for _, node := range network.Nodes {
	healthy, err := node.IsHealthy(context.Background())
	Expect(err).NotTo(HaveOccurred())
	Expect(healthy).To(BeTrue())
}
```

### Wait for Node Health

```go
err := node.WaitForHealthy(context.Background())
Expect(err).NotTo(HaveOccurred())
```

## Next Steps

- Convert subnets to L1s with validator managers
- Test Teleporter message flows between L1s
- Test validator registration and delegation
- Test Warp message construction and verification

## Additional Resources

- [Ginkgo
Documentation](https://onsi.github.io/ginkgo/)
- [Gomega Matchers](https://onsi.github.io/gomega/)
- [ICM Services Tests](https://github.com/ava-labs/icm-services/tree/main/tests)
- [tmpnet Package Docs](https://pkg.go.dev/github.com/ava-labs/avalanchego/tests/fixture/tmpnet)

# Testing L1 Conversion (/docs/tooling/tmpnet/guides/l1-conversion)

---
title: Testing L1 Conversion
description: Learn how to convert subnets to L1s with validator managers in your tests
---

This guide shows how to test converting a subnet to an L1 (Layer 1) blockchain with a validator manager. L1 conversion moves validator management from P-Chain staking rules to a validator manager contract, which can implement Proof of Authority or Proof of Stake.

> Pattern guide only. These snippets mirror helpers in `icm-services` (for example `icm-contracts/tests/network`) and rely on shared utilities for genesis creation, contract bindings, and Warp signing. Copy from those source files for runnable code.

## Overview

Converting a subnet to an L1 involves:

1. Deploying a validator manager contract
2. Issuing a `ConvertSubnetToL1Tx` on the P-Chain
3. Initializing the validator set with a Warp message
4.
Managing validator registration and removal

## Prerequisites

- Complete the [Getting Started](/docs/tooling/tmpnet/guides/getting-started) guide
- Understand basic Ginkgo test setup
- Have a network with at least one subnet created (the subnet is what gets converted to an L1)

## Validator Manager Types

Choose one of three validator manager types:

| Type | Description | Use Case |
|------|-------------|----------|
| **PoA** | Proof of Authority | Permissioned networks with owner-controlled validators |
| **Native Token Staking** | Stake native chain tokens | Public networks using chain's native currency |
| **ERC20 Token Staking** | Stake ERC20 tokens | Networks with custom governance tokens |

## Basic L1 Conversion Flow

### Complete Test Example

```go title="l1_conversion_test.go"
package conversion_test

import (
	"context"
	"flag"
	"math/big"
	"os"
	"testing"
	"time"

	"github.com/ava-labs/avalanchego/ids"
	"github.com/ava-labs/avalanchego/tests/fixture/e2e"
	"github.com/ava-labs/avalanchego/tests/fixture/tmpnet"
	"github.com/ava-labs/avalanchego/utils/units"
	"github.com/ava-labs/avalanchego/utils/crypto/secp256k1"
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/crypto"
	"github.com/ethereum/go-ethereum/ethclient"
	"github.com/onsi/ginkgo/v2"
	.
"github.com/onsi/gomega" ) var ( network *tmpnet.Network e2eFlags *e2e.FlagVars fundedKey *secp256k1.PrivateKey l1Info L1TestInfo ) func TestMain(m *testing.M) { e2eFlags = e2e.RegisterFlags() flag.Parse() os.Exit(m.Run()) } func TestL1Conversion(t *testing.T) { if os.Getenv("RUN_E2E") == "" { t.Skip("RUN_E2E not set") } RegisterFailHandler(ginkgo.Fail) ginkgo.RunSpecs(t, "L1 Conversion Test Suite") } var _ = ginkgo.BeforeSuite(func() { ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute) defer cancel() // Create network with one L1 network = createNetworkWithL1(ctx) // Get funded key fundedKey = network.PreFundedKeys[0] // Get L1 info l1Info = getL1Info(network.Subnets[0]) }) var _ = ginkgo.AfterSuite(func() { if network != nil { network.Stop(context.Background()) } }) var _ = ginkgo.Describe("[L1 Conversion]", func() { ginkgo.It("should convert subnet to L1 with native staking", ginkgo.Label("conversion", "native-staking"), func() { ctx := context.Background() // Convert subnet to L1 nodes, validationIDs := convertSubnetToL1( ctx, network, l1Info, NativeTokenStakingManager, fundedKey, ) Expect(nodes).To(HaveLen(2)) Expect(validationIDs).To(HaveLen(2)) // Verify validators are active for _, validationID := range validationIDs { status := getValidatorStatus(ctx, l1Info, validationID) Expect(status).To(Equal(ValidatorStatusActive)) } }) }) ``` ## ConvertSubnet Implementation ### The ConvertSubnet Function Here's the core conversion logic based on ICM Services: ```go title="conversion.go" package conversion import ( "context" "math/big" "github.com/ava-labs/avalanchego/ids" "github.com/ava-labs/avalanchego/tests/fixture/tmpnet" "github.com/ava-labs/avalanchego/vms/platformvm/warp" "github.com/ethereum/go-ethereum/common" ) type ValidatorManagerType int const ( PoAValidatorManager ValidatorManagerType = iota NativeTokenStakingManager ERC20TokenStakingManager ) // ConvertSubnet converts a subnet to an L1 with validator manager func ConvertSubnet( 
	ctx context.Context,
	network *tmpnet.Network,
	l1 L1TestInfo,
	managerType ValidatorManagerType,
	weights []uint64,
	fundedKey *ecdsa.PrivateKey,
) ([]*tmpnet.Node, []ids.ID) {
	// Step 1: Deploy validator manager
	validatorManagerAddress := deployValidatorManager(
		ctx,
		l1,
		managerType,
		fundedKey,
	)

	// Step 2: Initialize validator manager settings
	initializeValidatorManager(
		ctx,
		l1,
		validatorManagerAddress,
		managerType,
		fundedKey,
	)

	// Step 3: Add new nodes to network
	numValidators := len(weights)
	newNodes := make([]*tmpnet.Node, numValidators)
	for i := 0; i < numValidators; i++ {
		node := tmpnet.NewEphemeralNode(tmpnet.FlagsMap{})
		err := network.StartNode(ctx, node)
		Expect(err).NotTo(HaveOccurred())
		err = node.WaitForHealthy(ctx)
		Expect(err).NotTo(HaveOccurred())
		newNodes[i] = node
	}

	// Step 4: Issue ConvertSubnetToL1Tx on P-Chain
	convertSubnetTxID := issueConvertSubnetToL1Tx(
		ctx,
		network,
		l1.SubnetID,
		validatorManagerAddress,
		fundedKey,
	)

	// Step 5: Initialize validator set with Warp message
	initialValidationIDs := initializeValidatorSet(
		ctx,
		network,
		l1,
		validatorManagerAddress,
		newNodes,
		weights,
		convertSubnetTxID,
		fundedKey,
	)

	// Step 6: Add new nodes as L1 validators
	for _, node := range newNodes {
		addNodeToL1(ctx, network, l1, node)
	}

	return newNodes, initialValidationIDs
}

// deployValidatorManager deploys the appropriate validator manager contract
func deployValidatorManager(
	ctx context.Context,
	l1 L1TestInfo,
	managerType ValidatorManagerType,
	fundedKey *ecdsa.PrivateKey,
) common.Address {
	var address common.Address

	switch managerType {
	case PoAValidatorManager:
		address = deployPoAValidatorManager(ctx, l1, fundedKey)
	case NativeTokenStakingManager:
		address = deployNativeStakingManager(ctx, l1, fundedKey)
	case ERC20TokenStakingManager:
		address = deployERC20StakingManager(ctx, l1, fundedKey)
	}

	return address
}

// initializeValidatorManager sets up initial validator manager parameters
func initializeValidatorManager(
	ctx context.Context,
	l1 L1TestInfo,
managerAddress common.Address, managerType ValidatorManagerType, fundedKey *ecdsa.PrivateKey, ) { switch managerType { case PoAValidatorManager: // PoA: Set owner address ownerAddress := crypto.PubkeyToAddress(fundedKey.PublicKey) initializePoA(ctx, l1, managerAddress, ownerAddress, fundedKey) case NativeTokenStakingManager: // Native Staking: Set staking parameters settings := NativeTokenStakingSettings{ MinStakeDuration: 1, MinStakeAmount: big.NewInt(1e16), MaxStakeAmount: big.NewInt(10e18), MinDelegateFee: 1, MaxChurnPercentage: 20, ChurnPeriodSeconds: 1, WeightToValueFactor: big.NewInt(1e12), } initializeNativeStaking(ctx, l1, managerAddress, settings, fundedKey) case ERC20TokenStakingManager: // ERC20 Staking: Set token and parameters tokenAddress := common.HexToAddress("0x...") // Your ERC20 token settings := ERC20TokenStakingSettings{ MinStakeDuration: 1, MinStakeAmount: big.NewInt(1e18), MaxStakeAmount: big.NewInt(1000e18), MinDelegateFee: 1, MaxChurnPercentage: 20, ChurnPeriodSeconds: 1, } initializeERC20Staking(ctx, l1, managerAddress, tokenAddress, settings, fundedKey) } } // issueConvertSubnetToL1Tx issues the conversion transaction on P-Chain func issueConvertSubnetToL1Tx( ctx context.Context, network *tmpnet.Network, subnetID ids.ID, validatorManagerAddress common.Address, fundedKey *ecdsa.PrivateKey, ) ids.ID { // Get P-Chain wallet pChainWallet := network.GetPChainWallet(fundedKey) // Convert address to proper format managerAddressBytes := validatorManagerAddress.Bytes() var chainAddress [20]byte copy(chainAddress[:], managerAddressBytes) // Issue ConvertSubnetToL1Tx txID, err := pChainWallet.IssueConvertSubnetToL1Tx( subnetID, chainAddress, fundedKey, ) Expect(err).NotTo(HaveOccurred()) return txID } // initializeValidatorSet creates initial validators with Warp message func initializeValidatorSet( ctx context.Context, network *tmpnet.Network, l1 L1TestInfo, validatorManagerAddress common.Address, nodes []*tmpnet.Node, weights []uint64, convertTxID 
ids.ID,
	fundedKey *ecdsa.PrivateKey,
) []ids.ID {
	// Build initial validator set
	validatorSet := make([]InitialValidator, len(nodes))
	for i, node := range nodes {
		validatorSet[i] = InitialValidator{
			NodeID:       node.NodeID.Bytes(),
			Weight:       weights[i],
			BlsPublicKey: node.BlsPublicKey,
		}
	}

	// Create SubnetToL1ConversionMessage
	conversionData := SubnetToL1ConversionData{
		SubnetID:            l1.SubnetID,
		ManagerBlockchainID: l1.BlockchainID,
		ManagerAddress:      validatorManagerAddress.Bytes(),
		Validators:          validatorSet,
	}

	// Sign with P-Chain validators
	unsignedMessage := warp.NewUnsignedMessage(
		network.NetworkID,
		l1.BlockchainID,
		encodeConversionData(conversionData),
	)
	signedMessage := signWarpMessage(
		ctx,
		network,
		unsignedMessage,
		l1.SubnetID,
	)

	// Initialize validator set on contract (helper wraps the contract's
	// initializeValidatorSet method and returns the transaction)
	validatorManager := getValidatorManagerContract(l1, validatorManagerAddress)
	tx := sendInitializeValidatorSetTx(
		ctx,
		l1,
		validatorManager,
		conversionData,
		signedMessage.Bytes(),
		fundedKey,
	)
	receipt := waitForSuccess(ctx, l1, tx.Hash())

	// Extract validation IDs from events
	validationIDs := extractValidationIDsFromReceipt(receipt)

	return validationIDs
}
```

## Testing Different Validator Manager Types

### PoA (Proof of Authority)

```go
ginkgo.It("should convert to PoA", ginkgo.Label("poa"), func() {
	nodes, validationIDs := convertSubnetToL1(
		context.Background(),
		network,
		l1Info,
		PoAValidatorManager,
		fundedKey,
	)

	Expect(nodes).To(HaveLen(2))
	Expect(validationIDs).To(HaveLen(2))
})
```

### Native Token Staking

```go
ginkgo.It("should convert to native staking", ginkgo.Label("native-staking"), func() {
	// Use different weights for validators
	weights := []uint64{
		1 * units.Avax,    // Validator 1: 1 AVAX
		1000 * units.Avax, // Validator 2: 1000 AVAX
	}

	nodes, validationIDs := convertSubnetToL1WithWeights(
		context.Background(),
		network,
		l1Info,
		NativeTokenStakingManager,
		weights,
		fundedKey,
	)

	Expect(nodes).To(HaveLen(2))
	Expect(validationIDs).To(HaveLen(2))

	// Verify weights
	for i, validationID := range validationIDs {
		weight := getValidatorWeight(context.Background(), l1Info, validationID)
		Expect(weight).To(Equal(weights[i]))
	}
})
```

### ERC20 Token Staking

```go
ginkgo.It("should convert to ERC20 staking", ginkgo.Label("erc20-staking"), func() {
	// Deploy ERC20 token first (the contract binding is unused here)
	tokenAddress, _ := deployERC20Token(
		context.Background(),
		l1Info,
		fundedKey,
		"Staking Token",
		"STK",
	)

	// Convert with ERC20 manager
	nodes, validationIDs := convertSubnetToL1WithERC20(
		context.Background(),
		network,
		l1Info,
		tokenAddress,
		fundedKey,
	)

	Expect(nodes).To(HaveLen(2))
	Expect(validationIDs).To(HaveLen(2))
})
```

## Verifying Conversion Success

### Check Validator Status

```go
func verifyValidatorsActive(
	ctx context.Context,
	l1 L1TestInfo,
	validationIDs []ids.ID,
) {
	validatorManager := getValidatorManagerContract(l1)

	for _, validationID := range validationIDs {
		validator, err := validatorManager.GetValidator(
			&bind.CallOpts{},
			validationID,
		)
		Expect(err).NotTo(HaveOccurred())
		Expect(validator.Status).To(Equal(ValidatorStatusActive))
		Expect(validator.Weight).To(BeNumerically(">", 0))
	}
}
```

### Check P-Chain State

```go
func verifyPChainValidators(
	ctx context.Context,
	network *tmpnet.Network,
	subnetID ids.ID,
	expectedCount int,
) {
	pClient := network.GetPChainClient()

	validators, err := pClient.GetCurrentValidators(ctx, subnetID, nil)
	Expect(err).NotTo(HaveOccurred())
	Expect(validators).To(HaveLen(expectedCount))
}
```

## Common Patterns

### Conversion with Proxy

For upgradeable validator managers:

```go
nodes, validationIDs, proxyAdmin := convertSubnetToL1WithProxy(
	ctx,
	network,
	l1Info,
	NativeTokenStakingManager,
	fundedKey,
)

// Can upgrade later
newImplementation := deployNewImplementation(ctx, l1Info, fundedKey)
upgradeProxy(ctx, l1Info, proxyAdmin, newImplementation, fundedKey)
```

### Multiple Validators with Different Weights

```go
weights := []uint64{
	100,   // Light validator
	1000,  //
Medium validator
	10000, // Heavy validator
}

nodes, validationIDs := convertSubnetToL1WithWeights(
	ctx,
	network,
	l1Info,
	NativeTokenStakingManager,
	weights,
	fundedKey,
)
```

## Best Practices

1. **Use generous timeouts**: Conversion involves P-Chain transactions, which can be slow
2. **Verify all steps**: Check validator status after conversion
3. **Test different weights**: Ensure validator weighting works correctly
4. **Handle errors**: P-Chain operations can fail; plan for retries
5. **Clean up**: Stop extra nodes in AfterEach if creating new networks per test

## Troubleshooting

### Conversion Transaction Fails

```go
// Add retry logic (the retry API shown matches github.com/avast/retry-go)
var txID ids.ID
err := retry.Do(func() error {
	var err error
	txID, err = issueConvertSubnetToL1Tx(ctx, network, subnetID, managerAddress, fundedKey)
	return err
}, retry.Attempts(3), retry.Delay(time.Second))
```

### Warp Message Not Signed

Ensure all validators have accepted the block:

```go
waitForAllValidatorsToAcceptBlock(
	ctx,
	l1.NodeURIs,
	l1.BlockchainID,
	receipt.BlockNumber.Uint64(),
)
```

## Next Steps

- Test complete validator lifecycle with native tokens
- Test staking and delegation workflows
- Test messaging between converted L1s

# Monitoring (/docs/tooling/tmpnet/guides/monitoring)

---
title: Monitoring
description: Monitor your temporary networks with metrics, logs, and dashboards
---

This guide shows you how to set up monitoring for your temporary networks using Prometheus for metrics and Promtail for logs.
## Overview tmpnet provides built-in integration with: - **Prometheus** - Collect and store metrics from all nodes - **Promtail** - Aggregate logs from all nodes - **Grafana** - Visualize metrics and logs in dashboards Monitoring helps you: - Debug network behavior - Identify performance bottlenecks - Analyze consensus patterns - Track resource usage - Troubleshoot issues ## Prerequisites ### Install Monitoring Tools The easiest way to get the required tools is using Nix: ```bash # Start a development shell with monitoring tools nix develop ``` This provides both `prometheus` and `promtail` binaries. ### Alternative: Manual Installation Install tools manually if you don't use Nix: **Prometheus:** ```bash # macOS brew install prometheus # Linux - download from prometheus.io/download ``` **Promtail:** ```bash # Download from GitHub releases # github.com/grafana/loki/releases ``` ### Configure Monitoring Backend You'll need access to a Prometheus and Loki backend. Set these environment variables: ```bash export PROMETHEUS_URL="https://your-prometheus.example.com" export PROMETHEUS_PUSH_URL="https://your-prometheus.example.com/api/v1/push" export PROMETHEUS_USERNAME="your-username" export PROMETHEUS_PASSWORD="your-password" export LOKI_URL="https://your-loki.example.com" export LOKI_PUSH_URL="https://your-loki.example.com/loki/api/v1/push" export LOKI_USERNAME="your-username" export LOKI_PASSWORD="your-password" ``` ## Quick Start ### Start Collectors Start Prometheus and Promtail collectors: ```bash # Start metrics collection tmpnetctl start-metrics-collector # Start log collection tmpnetctl start-logs-collector ``` ### Start Your Network Create a network - monitoring will be automatic: ```bash tmpnetctl start-network ``` **Output includes a Grafana link:** ``` Started network /home/user/.tmpnet/networks/20240312-143052.123456 Network metrics: https://grafana.example.com/... 
``` ### View Metrics and Logs Click the Grafana link or open the URL saved at: ```bash cat ~/.tmpnet/networks/latest/metrics.txt ``` ### Stop Collectors When done, stop the collectors: ```bash tmpnetctl stop-metrics-collector tmpnetctl stop-logs-collector ``` ## Monitoring Configuration ### Service Discovery tmpnet uses file-based service discovery to automatically configure monitoring: **Prometheus Configuration:** ``` ~/.tmpnet/prometheus/file_sd_configs/ └── [network-uuid]-[node-id].json ``` **Promtail Configuration:** ``` ~/.tmpnet/promtail/file_sd_configs/ └── [network-uuid]-[node-id].json ``` When a node starts, tmpnet automatically creates these configuration files. When a node stops, the files are removed. ### Metric Labels All metrics include these labels for filtering: - `network_uuid` - Unique identifier for the network - `node_id` - Node ID - `is_ephemeral_node` - Whether the node is ephemeral - `network_owner` - User-defined network owner identifier When running in GitHub Actions, additional labels are added: - `gh_repo` - Repository name - `gh_workflow` - Workflow name - `gh_run_id` - Run ID - `gh_job_id` - Job ID ### Custom Grafana Instance Use a custom Grafana instance: ```bash export GRAFANA_URI="https://your-grafana.example.com/d/your-dashboard-id" ``` The emitted links will use your custom Grafana instance. 
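The service discovery files described above use Prometheus's standard `file_sd` JSON format: an array of target groups, each with `targets` and `labels`. The exact target address and label set tmpnet writes may vary by version; an illustrative entry:

```json
[
  {
    "targets": ["127.0.0.1:41023"],
    "labels": {
      "network_uuid": "abc-123-def-456",
      "node_id": "NodeID-7Xhw2mDxuDS44j42TCB6U5579esbSt3Lg",
      "is_ephemeral_node": "false",
      "network_owner": "my-test"
    }
  }
]
```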
## Monitoring in Code

### Enable Monitoring Programmatically

Start collectors from Go code:

```go
import "github.com/ava-labs/avalanchego/tests/fixture/tmpnet"

// Start Prometheus
err := tmpnet.StartPrometheus(
	prometheusURL,
	prometheusUsername,
	prometheusPassword,
	prometheusPushURL,
	os.Stdout, // Progress output
)
if err != nil {
	panic(err)
}

// Start Promtail (note: pass the Loki push URL, not the Prometheus one)
err = tmpnet.StartPromtail(
	lokiURL,
	lokiUsername,
	lokiPassword,
	lokiPushURL,
	os.Stdout,
)
if err != nil {
	panic(err)
}
```

### Verify Collection

Check that metrics and logs are being collected:

```go
// Check metrics
err := tmpnet.CheckMetricsExist(context.Background(), prometheusURL, prometheusUsername, prometheusPassword, network.UUID)
if err != nil {
	fmt.Println("Metrics not found:", err)
}

// Check logs
err = tmpnet.CheckLogsExist(context.Background(), lokiURL, lokiUsername, lokiPassword, network.UUID)
if err != nil {
	fmt.Println("Logs not found:", err)
}
```

## Monitoring Patterns

### Development Workflow

For local development:

```bash
# Start collectors once
tmpnetctl start-metrics-collector
tmpnetctl start-logs-collector

# Create/destroy networks as needed
tmpnetctl start-network
# ... test ...
tmpnetctl stop-network
# Collectors keep running

# Stop when done with all testing
tmpnetctl stop-metrics-collector
tmpnetctl stop-logs-collector
```

### Test Isolation

Filter to specific networks using the network UUID:

```
# In Grafana, filter by:
network_uuid="abc-123-def-456"
```

Each network gets a unique UUID, making it easy to isolate results.
### Ephemeral Node Monitoring Track ephemeral nodes separately: ``` # Filter to only ephemeral nodes: is_ephemeral_node="true" # Filter to only permanent nodes: is_ephemeral_node="false" ``` ## Common Metrics tmpnet collects standard AvalancheGo metrics: ### Node Health - `avalanche_network_peers` - Number of connected peers - `avalanche_P_vm_blks_accepted` - Accepted blocks on P-Chain - `avalanche_health_checks_failing` - Failing health checks ### Network Activity - `avalanche_network_msgs_sent` - Messages sent - `avalanche_network_msgs_received` - Messages received - `avalanche_network_bandwidth_throttler_inbound_acquired_bytes` - Inbound bandwidth ### Consensus - `avalanche_snowman_polls_successful` - Successful consensus polls - `avalanche_snowman_polls_failed` - Failed consensus polls - `avalanche_P_blks_processing` - Blocks currently processing ### Performance - `avalanche_P_vm_blks_processing_time` - Block processing time - `go_goroutines` - Number of goroutines - `go_memstats_alloc_bytes` - Memory allocated ### Custom VM Metrics If your custom VM exports Prometheus metrics, they'll be collected automatically. 
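All of these metrics carry the tmpnet labels listed earlier, so dashboards and ad-hoc queries usually start from a label-filtered selector. A small self-contained sketch of composing one in Go (`buildSelector` is our own helper, not a tmpnet API):

```go
package main

import "fmt"

// buildSelector composes a PromQL instant-vector selector from a metric
// name and the tmpnet-provided labels.
func buildSelector(metric, networkUUID string, ephemeral bool) string {
	return fmt.Sprintf("%s{network_uuid=%q,is_ephemeral_node=\"%t\"}",
		metric, networkUUID, ephemeral)
}

func main() {
	// e.g. peer count for one network, permanent nodes only
	fmt.Println(buildSelector("avalanche_network_peers", "abc-123", false))
}
```

The same selector works in Grafana panels and in raw queries against the Prometheus HTTP API.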
## Log Collection ### Log Levels Configure log verbosity per node: ```go node.Flags = tmpnet.FlagsMap{ "log-level": "debug", // trace, debug, info, warn, error, fatal "log-display-level": "info", } ``` ### Log Queries in Grafana Example queries in Grafana Explore (Loki): ```text # All logs for a network {network_uuid="abc-123"} # Error logs only {network_uuid="abc-123"} |= "error" or "ERROR" # Logs from a specific node {network_uuid="abc-123", node_id="NodeID-7Xhw2..."} # Search for specific patterns {network_uuid="abc-123"} |= "consensus" # Regex search {network_uuid="abc-123"} |~ "block \\d+ accepted" ``` ### Structured Logging Enable JSON structured logging for easier parsing: ```go network.DefaultFlags = tmpnet.FlagsMap{ "log-format": "json", } ``` ## Troubleshooting ### Collectors Won't Start **Check if already running:** ```bash ps aux | grep prometheus ps aux | grep promtail ``` **Stop existing processes:** ```bash tmpnetctl stop-metrics-collector tmpnetctl stop-logs-collector ``` ### No Metrics Appear **Verify collectors are running:** ```bash ps aux | grep prometheus ``` **Check service discovery configs exist:** ```bash ls ~/.tmpnet/prometheus/file_sd_configs/ ``` **Verify network UUID:** ```bash cat ~/.tmpnet/networks/latest/config.json | jq -r '.uuid' ``` **Check Prometheus is scraping:** ```bash # Check Prometheus logs tail -f ~/.tmpnet/prometheus/*.log ``` ### No Logs Appear **Check Promtail is running:** ```bash ps aux | grep promtail ``` **Verify log files exist:** ```bash ls ~/.tmpnet/networks/latest/NodeID-*/logs/ ``` **Check Promtail configuration:** ```bash ls ~/.tmpnet/promtail/file_sd_configs/ ``` ### Can't Access Grafana **Check the metrics link:** ```bash cat ~/.tmpnet/networks/latest/metrics.txt ``` **Verify GRAFANA_URI is set:** ```bash echo $GRAFANA_URI ``` **Check network UUID is in the URL:** The URL should contain `var-network_uuid=YOUR_UUID` ## CI/CD Integration ### GitHub Actions Use the provided GitHub Action for automated 
monitoring: ```yaml - name: Run tests with monitoring uses: ./.github/actions/run-monitored-tmpnet-cmd with: run: ./scripts/test.sh prometheus_url: ${{ secrets.PROMETHEUS_URL }} prometheus_push_url: ${{ secrets.PROMETHEUS_PUSH_URL }} prometheus_username: ${{ secrets.PROMETHEUS_USERNAME }} prometheus_password: ${{ secrets.PROMETHEUS_PASSWORD }} loki_url: ${{ secrets.LOKI_URL }} loki_push_url: ${{ secrets.LOKI_PUSH_URL }} loki_username: ${{ secrets.LOKI_USERNAME }} loki_password: ${{ secrets.LOKI_PASSWORD }} ``` The action automatically: - Starts collectors - Runs your tests - Stops collectors - Uploads network artifacts - Emits Grafana links in logs ### Custom CI Systems For other CI systems: ```bash #!/bin/bash set -e # Start collectors tmpnetctl start-metrics-collector tmpnetctl start-logs-collector # Ensure cleanup on exit trap "tmpnetctl stop-metrics-collector; tmpnetctl stop-logs-collector" EXIT # Run your tests ./run-tests.sh # Metrics link is in the network directory cat ~/.tmpnet/networks/latest/metrics.txt ``` ## Advanced Monitoring ### Custom Metrics Dashboard Create custom Grafana dashboards using tmpnet labels: ``` # Panel: Network Message Rate rate(avalanche_network_msgs_sent{network_uuid="$network_uuid"}[1m]) # Panel: Block Processing Time (P-Chain) histogram_quantile(0.99, rate(avalanche_P_vm_blks_processing_time_bucket[5m])) # Panel: Active Validators avalanche_P_vm_validators_count{network_uuid="$network_uuid"} ``` ### Alerting Set up alerts based on network behavior: ``` # Alert when nodes disconnect avalanche_network_peers < 4 # Alert on high block processing time histogram_quantile(0.99, rate(avalanche_P_vm_blks_processing_time_bucket[5m])) > 1000 # Alert on failed health checks avalanche_health_checks_failing > 0 ``` ### Metric Retention Metrics are retained according to your Prometheus backend configuration. For long-running tests, ensure sufficient retention. 
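If you operate the Prometheus backend yourself, retention is governed by Prometheus's standard TSDB flags (the values here are illustrative, not tmpnet defaults):

```shell
# Keep samples for 30 days or until the TSDB reaches 50GB,
# whichever limit is hit first.
prometheus \
  --config.file=prometheus.yml \
  --storage.tsdb.retention.time=30d \
  --storage.tsdb.retention.size=50GB
```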
## Next Steps

- Detailed configuration options
- Complete tmpnetctl commands

## Additional Resources

- [Prometheus Documentation](https://prometheus.io/docs/)
- [Grafana Loki Documentation](https://grafana.com/docs/loki/)
- [AvalancheGo Metrics](/docs/nodes/metrics)
- [Full tmpnet README - Monitoring Section](https://github.com/ava-labs/avalanchego/blob/master/tests/fixture/tmpnet/README.md#monitoring)

# Runtime Environments (/docs/tooling/tmpnet/guides/runtimes)

---
title: Runtime Environments
description: Choose between local process and Kubernetes runtimes for tmpnet test networks
---

Runtimes in tmpnet provide the execution environment for your test network nodes. The `NodeRuntime` interface abstracts the complexity of managing node processes, allowing you to focus on testing your blockchain application rather than infrastructure details.

At its core, tmpnet's runtime system defines how and where nodes run. The `NodeRuntime` interface provides a consistent API for starting nodes, managing their lifecycle, and interacting with their endpoints, regardless of whether nodes run as local processes or Kubernetes pods. This abstraction means you write your test setup once and can switch runtimes based on your testing needs.

```go
// The NodeRuntime interface abstracts the execution environment
type NodeRuntime interface {
	Start(ctx context.Context) error
	InitiateStop(ctx context.Context) error
	WaitForStopped(ctx context.Context) error
	Restart(ctx context.Context) error
	IsHealthy(ctx context.Context) (bool, error)
	// ...
}
```

### When to Use Each Runtime

| Scenario | Recommended Runtime |
|----------|-------------------|
| Local development and quick iteration | Local Process |
| CI/CD pipelines | Kubernetes |
| Networks with 1-10 nodes | Local Process |
| Networks with 10+ nodes | Kubernetes |
| Production environment testing | Kubernetes |
| Laptop/desktop testing | Local Process |

The Local Process Runtime is ideal for development and small-scale testing.
For production-like environments, multi-machine deployments, or CI/CD pipelines, use the Kubernetes Runtime.

---

## Local Process Runtime

The Local Process Runtime runs Avalanche nodes as operating system subprocesses on your local machine. Each node executes as an independent process with its own configuration, dynamically allocated ports, and isolated filesystem state.

### How It Works

When you create a network using the Local Process Runtime, tmpnet performs the following workflow:

1. **Binary Validation** - Verifies the AvalancheGo binary exists at the configured path
2. **Network Directory Creation** - Creates `~/.tmpnet/networks/[timestamp]/` with subdirectories for each node
3. **Node Configuration** - Generates node-specific config files with dynamic port allocation
4. **Process Spawning** - Launches each node via `exec.Command(avalancheGoPath, "--config-file", flagsPath)`
5. **Health Monitoring** - Polls `GET /ext/health/liveness` until all nodes are ready

Each node maintains its state in a dedicated directory structure:

```
~/.tmpnet/networks/
├── [timestamp]/
│   ├── config.json          # Network configuration
│   ├── genesis.json         # Genesis file
│   ├── network.env          # Shell environment
│   ├── metrics.txt          # Grafana dashboard link
│   └── NodeID-7Xhw2.../
│       ├── config.json      # Node runtime config
│       ├── flags.json       # Node flags
│       ├── process.json     # PID, URI, staking address
│       ├── db/              # Node database
│       ├── logs/main.log    # Node logs
│       └── plugins/         # VM plugins
└── latest -> [timestamp]    # Symlink to most recent network
```

Dynamic port allocation is critical for running multiple networks simultaneously. When you set API ports to `"0"`, the operating system assigns available ports automatically, preventing conflicts.
### Configuration The `ProcessRuntimeConfig` struct controls how the Local Process Runtime operates: ```go type ProcessRuntimeConfig struct { // Path to avalanchego binary (required) AvalancheGoPath string // Directory containing VM plugin binaries PluginDir string // Reuse the same API port when restarting nodes ReuseDynamicPorts bool } ``` | Field | Required | Description | |-------|----------|-------------| | `AvalancheGoPath` | Yes | Absolute path to the `avalanchego` binary executable | | `PluginDir` | No | Directory containing VM plugin binaries (defaults to `~/.avalanchego/plugins`) | | `ReuseDynamicPorts` | No | Reuse the same API port when restarting nodes (default: `false`) | The `AvalancheGoPath` must point to a compiled AvalancheGo binary. If you're building from source, run `./scripts/build.sh` before using tmpnet. ### Quick Start First, ensure you have AvalancheGo built: ```bash # Clone and build AvalancheGo git clone https://github.com/ava-labs/avalanchego.git cd avalanchego ./scripts/build.sh # The binary is now at ./build/avalanchego ``` Then create your network: ```go package main import ( "context" "fmt" "log" "os" "github.com/ava-labs/avalanchego/tests/fixture/tmpnet" ) func main() { ctx := context.Background() // Create network with local process runtime network := &tmpnet.Network{ DefaultRuntimeConfig: tmpnet.NodeRuntimeConfig{ Process: &tmpnet.ProcessRuntimeConfig{ // Use absolute path to your avalanchego binary AvalancheGoPath: os.Getenv("HOME") + "/avalanchego/build/avalanchego", PluginDir: os.Getenv("HOME") + "/.avalanchego/plugins", ReuseDynamicPorts: true, }, }, Nodes: tmpnet.NewNodesOrPanic(5), } // Bootstrap the network if err := tmpnet.BootstrapNewNetwork( ctx, os.Stdout, network, "", // Use default network directory "", // Use AvalancheGoPath from config ); err != nil { log.Fatal(err) } defer network.Stop(ctx) // Get node URIs for interaction for _, node := range network.Nodes { fmt.Printf("Node %s: %s\n", node.NodeID, node.URI) } } 
```

### Advantages

| Advantage | Description |
|-----------|-------------|
| **Fast Startup** | ~30 seconds for a 5-node network |
| **No Container Overhead** | Nodes run as native processes without virtualization |
| **Easy Debugging** | Direct access to logs at `~/.tmpnet/networks/*/NodeID-*/logs/` |
| **Prometheus Integration** | Automatic file-based service discovery |
| **Process Control** | Standard OS signals (SIGTERM, SIGSTOP) for node control |

### Limitations

| Limitation | Details |
|------------|---------|
| **Platform Support** | macOS and Linux only (Windows users should use WSL2) |
| **Single-Machine Scaling** | All nodes share CPU, memory, and disk resources |
| **Port Exhaustion** | Large networks (20+ nodes) may exhaust available ports |
| **Ephemeral State** | Network state is lost when the directory is deleted |

---

## Kubernetes Runtime

The Kubernetes runtime deploys test networks on Kubernetes clusters, providing a production-like environment for testing at scale.

### How It Works

The Kubernetes runtime implements tmpnet's network abstraction using native Kubernetes resources. (A diagram here shows avalanchego pods backed by PersistentVolumeClaims, routed through a Service out to external access, e.g. localhost:30791 for KIND.)

**Key Components:**

- **StatefulSet**: Provides stable network identity and ordered deployment
- **PersistentVolumeClaims**: Store blockchain data, surviving pod restarts
- **Services**: Enable pod-to-pod DNS resolution
- **Ingress**: Routes external traffic to node API endpoints

### Prerequisites

Before using the Kubernetes runtime:

1. **Kubernetes Cluster**: KIND (recommended for local), Minikube, or a cloud provider (GKE, EKS, AKS)
2. **kubectl CLI**: Configured with cluster access
3. **Container Registry Access**: For pulling `avaplatform/avalanchego` images
4. **RBAC Permissions**: Create/manage StatefulSets, Services, Ingress, PVCs

```bash
# Verify kubectl is configured
kubectl cluster-info
kubectl auth can-i create pods --namespace=default
```

### Configuration

```go
type KubeRuntimeConfig struct {
    ConfigPath             string // kubeconfig path (default: ~/.kube/config)
    ConfigContext          string // kubeconfig context to use
    Namespace              string // target namespace
    Image                  string // avalanchego container image
    VolumeSizeGB           int    // PVC size in GB (minimum 2)
    UseExclusiveScheduling bool   // one pod per k8s node
    SchedulingLabelKey     string // anti-affinity label key
    SchedulingLabelValue   string // anti-affinity label value
    IngressHost            string // e.g., "localhost:30791"
    IngressSecret          string // TLS secret for HTTPS
}
```

| Field | Description | Example |
|-------|-------------|---------|
| `ConfigContext` | Kubeconfig context | `"kind-tmpnet"` |
| `Namespace` | Kubernetes namespace | `"tmpnet-test"` |
| `Image` | Container image with tag | `"avaplatform/avalanchego:v1.11.0"` |
| `VolumeSizeGB` | PVC size per node | `10` |
| `UseExclusiveScheduling` | One pod per k8s node | `true` |
| `IngressHost` | External access hostname | `"localhost:30791"` |

Exclusive scheduling requires at least as many Kubernetes nodes as tmpnet nodes and doubles the startup timeout.

### Quick Start with KIND

**1. Start KIND Cluster**

```bash
# Use the provided script
./scripts/start_kind_cluster.sh

# Creates:
# - KIND cluster named "tmpnet"
# - Ingress controller with NodePort
# - Port forwarding on localhost:30791
```

**2.
Create Network**

```go
package main

import (
    "context"
    "fmt"
    "log"
    "os"

    "github.com/ava-labs/avalanchego/tests/fixture/tmpnet"
)

func main() {
    ctx := context.Background()

    // Configure Kubernetes runtime
    network := &tmpnet.Network{
        DefaultRuntimeConfig: tmpnet.NodeRuntimeConfig{
            Kube: &tmpnet.KubeRuntimeConfig{
                ConfigContext: "kind-tmpnet",
                Namespace:     "tmpnet-demo",
                Image:         "avaplatform/avalanchego:latest",
                VolumeSizeGB:  5,
                IngressHost:   "localhost:30791",
            },
        },
        Nodes: tmpnet.NewNodesOrPanic(5),
    }

    if err := tmpnet.BootstrapNewNetwork(ctx, os.Stdout, network, "", ""); err != nil {
        log.Fatal(err)
    }
    defer network.Stop(ctx)

    fmt.Println("Network created successfully!")
}
```

**3. Verify Deployment**

```bash
# Check pods
kubectl get pods -n tmpnet-demo

# Access node API
curl http://localhost:30791/ext/health
```

### Advantages

| Advantage | Description |
|-----------|-------------|
| **Production-Like** | Mirrors real deployment patterns |
| **Scalability** | Support 50+ node networks across cluster |
| **Network Isolation** | Namespace boundaries and NetworkPolicy |
| **CI/CD Ready** | Easy integration with GitHub Actions, Jenkins |
| **Persistent Storage** | Data survives pod restarts |

### Limitations

| Limitation | Details |
|------------|---------|
| **Slower Startup** | 3-5 minutes (image pull + scheduling) |
| **Complex Debugging** | Requires `kubectl logs` and Kubernetes knowledge |
| **Resource Overhead** | Kubernetes control plane adds ~2GB RAM |
| **Expertise Required** | Understanding of Pods, Services, PVCs, Ingress |

**Startup Timeout Calculation:**

```go
timeout := time.Duration(nodeCount) * time.Minute
if config.UseExclusiveScheduling {
    timeout *= 2 // Double for anti-affinity scheduling
}
```

---

## Runtime Comparison

| Feature | Local Runtime | Kubernetes Runtime |
|---------|--------------|-------------------|
| **Startup Time** | ~30 seconds | 1-5 minutes |
| **Max Nodes** | ~20 (resource-limited) | 100+ (cluster-limited) |
| **Debugging** | Direct log files | `kubectl logs` |
| **Persistence** | `~/.tmpnet/networks/` | PersistentVolumeClaims |
| **Port Access** | localhost:dynamic | Ingress or port-forward |
| **Best For** | Development, quick tests | CI/CD, scale testing |
| **Prerequisites** | AvalancheGo binary | Kubernetes cluster |
| **OS Support** | macOS, Linux | Any with kubectl |

**Quick Decision Guide:**

- Use **Local** for development and testing with fewer than 20 nodes
- Use **Kubernetes** for CI/CD pipelines, large networks (20+ nodes), or production-like testing

---

## Advanced Topics

### Writing Runtime-Agnostic Tests

The e2e framework provides a `TestEnvironment` abstraction that makes tests portable across runtimes:

```go
import (
    "context"

    "github.com/ava-labs/avalanchego/tests/fixture/e2e"
    "github.com/onsi/ginkgo/v2"
    . "github.com/onsi/gomega"
)

var _ = ginkgo.Describe("[Cross-Runtime Tests]", func() {
    ginkgo.It("should work on any runtime", func() {
        // Get the test environment (local or Kubernetes);
        // tc is the test context created by the suite setup
        env := e2e.GetEnv(tc)

        // Get network - abstracted across runtimes
        network := env.GetNetwork()

        // Get node URIs - automatically handles port-forward vs direct
        nodeURI := env.GetRandomNodeURI()

        // All operations work identically regardless of runtime
        // (client construction here is illustrative)
        client := jsonrpc.NewClient(nodeURI)
        _, err := client.Health(context.Background())
        Expect(err).NotTo(HaveOccurred())
    })
})
```

Runtime selection is controlled by:

- CLI flags: `--use-kubernetes=true`
- Environment variables: `E2E_USE_KUBERNETES=true`
- Test configuration defaults

### Bootstrap Monitor for Continuous Testing

The **bootstrap monitor** is a Kubernetes-based tool for continuous bootstrap testing on persistent networks (mainnet, fuji). It validates that new AvalancheGo versions can successfully sync from genesis.
**Architecture:**

```
StatefulSet: bootstrap-monitor
├── Init Container: bootstrap-monitor init
│   └── Prepares configuration and data directory
├── Containers:
│   ├── avalanchego (primary)
│   │   └── Runs node with sync monitoring
│   └── bootstrap-monitor wait-for-completion (sidecar)
│       └── Polls health and emits completion status
└── PersistentVolumeClaim: data
    └── Persistent storage for node database
```

**Three Sync Modes:**

| Mode | Chains Synced | Duration | Use Case |
|------|--------------|----------|----------|
| `full-sync` | P, X, C (full) | Hours-days | Complete validation |
| `c-chain-state-sync` | P, X (full), C (state) | 1-3 hours | Fast comprehensive test |
| `p-chain-full-sync-only` | P (full) | 30-60 min | P-Chain validation only |

### Monitoring Integration

Both runtimes integrate with Prometheus and Promtail using file-based service discovery:

```
~/.tmpnet/prometheus/file_sd_configs/
└── [network-uuid]-[node-id].json
```

**Environment Variables:**

```bash
# Prometheus
export PROMETHEUS_URL="https://prometheus.example.com"
export PROMETHEUS_USERNAME="user"
export PROMETHEUS_PASSWORD="pass"

# Loki (logs)
export LOKI_URL="https://loki.example.com"

# Grafana
export GRAFANA_URI="https://grafana.example.com/d/tmpnet"
```

After starting a network, tmpnet emits a Grafana dashboard link:

```bash
tmpnetctl start-network
# Output includes:
# Grafana: https://grafana.example.com/d/tmpnet?var-network_uuid=abc-123
```

---

## Troubleshooting

For detailed troubleshooting of runtime-specific issues, see the [Troubleshooting Runtime Issues](/docs/tooling/tmpnet/troubleshooting-runtime) guide.
### Quick Fixes

**Local Runtime - Port Conflicts:**

```bash
pkill -f avalanchego
lsof -i :9650-9660
```

**Kubernetes - Pod Stuck Pending:**

```bash
kubectl describe pod -n tmpnet
kubectl get events -n tmpnet --sort-by='.lastTimestamp'
```

**Both - Health Check Failures:**

```bash
# Check if node is still bootstrapping (requires jq: brew install jq)
curl -s -X POST -H 'content-type:application/json' \
  --data '{"jsonrpc":"2.0","id":1,"method":"info.isBootstrapped","params":{"chain":"P"}}' \
  http://localhost:9650/ext/info | jq '.result.isBootstrapped'

# Alternative without jq:
curl -s http://localhost:9650/ext/health
```

---

## Next Steps

- Create your first network
- Complete configuration options
- Set up metrics and logging
- Diagnose and fix issues

# Testing P-Chain Staking and Delegation (/docs/tooling/tmpnet/guides/staking-and-delegation)

---
title: Testing P-Chain Staking and Delegation
description: Test Primary Network validator staking, delegation, and reward distribution
---

This guide covers testing P-Chain staking and delegation on the Primary Network, including validator registration, delegator management, reward distribution, and uptime tracking.

> This is a pattern guide. For a runnable example, see `tests/e2e/p/staking_rewards.go` in the avalanchego repo. Adapt the code there for your suite rather than copying these snippets verbatim.

**Where to put your tests:** keep tmpnet-based staking tests alongside your application code or in an existing test suite (e.g., `tests/e2e` in your repo). Avoid creating a new repo just for these; reuse your project's test harness and helpers so fixtures, CI, and dependencies stay in one place.
## Overview

P-Chain staking differs from L1 validator management:

- **P-Chain**: AddValidatorTx / AddDelegatorTx on Primary Network
- **L1s**: Contract-based validator managers (see [L1 Validator Management](/docs/tooling/tmpnet/guides/validator-management-native-staking))

This guide covers P-Chain patterns from avalanchego tests including:

- Validator registration with stake
- Delegator addition and rewards
- Uptime-based reward distribution
- Cortina fork behavior (deferred delegatee rewards)
- Time-based state transitions

## Prerequisites

- Complete [Getting Started](/docs/tooling/tmpnet/guides/getting-started)
- Understand Ginkgo test structure
- Have a tmpnet network running

## Complete Test Example

```go title="p_chain_staking_test.go"
package staking_test

import (
    "context"
    "flag"
    "os"
    "testing"
    "time"

    "github.com/ava-labs/avalanchego/ids"
    "github.com/ava-labs/avalanchego/tests/fixture/e2e"
    "github.com/ava-labs/avalanchego/tests/fixture/tmpnet"
    "github.com/ava-labs/avalanchego/units"
    "github.com/ava-labs/avalanchego/vms/platformvm/txs"
    "github.com/ava-labs/avalanchego/wallet/subnet/primary"
    "github.com/onsi/ginkgo/v2"
    . "github.com/onsi/gomega"
)

var (
    network  *tmpnet.Network
    e2eFlags *e2e.FlagVars
)

func TestMain(m *testing.M) {
    e2eFlags = e2e.RegisterFlags()
    flag.Parse()
    os.Exit(m.Run())
}

func TestPChainStaking(t *testing.T) {
    if os.Getenv("RUN_E2E") == "" {
        t.Skip("RUN_E2E not set")
    }
    RegisterFailHandler(ginkgo.Fail)
    ginkgo.RunSpecs(t, "P-Chain Staking Test Suite")
}

var _ = ginkgo.BeforeSuite(func() {
    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
    defer cancel()
    network = createNetwork(ctx)
})

var _ = ginkgo.AfterSuite(func() {
    if network != nil {
        network.Stop(context.Background())
    }
})

var _ = ginkgo.Describe("[P-Chain Staking]", func() {
    ginkgo.It("should add validator and delegate",
        ginkgo.Label("staking", "p-chain"),
        func() {
            ctx := context.Background()

            // Add validator to Primary Network
            nodeID, txID := addPrimaryNetworkValidator(
                ctx,
                network,
                2*units.Avax, // stake
                24*time.Hour, // duration
            )

            // Wait for validator to become active
            waitForValidatorActive(ctx, network, nodeID)

            // Add delegator
            delegationID := addDelegator(
                ctx,
                network,
                nodeID,
                1*units.Avax,
                12*time.Hour,
            )

            // Verify delegation
            verifyDelegation(ctx, network, delegationID)
        })
})
```

## Adding Primary Network Validators

### AddValidatorTx Pattern

```go
func addPrimaryNetworkValidator(
    ctx context.Context,
    network *tmpnet.Network,
    stakeAmount uint64,
    stakeDuration time.Duration,
) (ids.NodeID, ids.ID) {
    // Get pre-funded key
    fundedKey := network.PreFundedKeys[0]

    // Create P-Chain wallet
    pWallet := createPChainWallet(ctx, network, fundedKey)

    // Create new ephemeral node to validate
    node := tmpnet.NewEphemeralNode(tmpnet.FlagsMap{})
    err := network.StartNode(ctx, node)
    Expect(err).NotTo(HaveOccurred())
    err = node.WaitForHealthy(ctx)
    Expect(err).NotTo(HaveOccurred())

    // Calculate start and end times
    startTime := time.Now().Add(1 * time.Minute)
    endTime := startTime.Add(stakeDuration)

    // Issue AddValidatorTx
    txID, err := pWallet.IssueAddPermissionlessValidatorTx(
        &txs.SubnetValidator{
            Validator: txs.Validator{
                NodeID: node.NodeID,
                Start:  uint64(startTime.Unix()),
                End:    uint64(endTime.Unix()),
                Wght:   stakeAmount,
            },
            Subnet: ids.Empty, // Primary Network
        },
        &secp256k1fx.OutputOwners{
            Threshold: 1,
            Addrs:     []ids.ShortID{fundedKey.Address()},
        },
        &secp256k1fx.OutputOwners{
            Threshold: 1,
            Addrs:     []ids.ShortID{fundedKey.Address()},
        },
        10, // Delegation fee: 10%
    )
    Expect(err).NotTo(HaveOccurred())

    return node.NodeID, txID
}
```

### Waiting for Validator Activation

```go
func waitForValidatorActive(
    ctx context.Context,
    network *tmpnet.Network,
    nodeID ids.NodeID,
) {
    pClient := platform.NewClient(network.Nodes[0].URI)

    Eventually(func() bool {
        validators, err := pClient.GetCurrentValidators(
            ctx,
            ids.Empty, // Primary Network
            []ids.NodeID{nodeID},
        )
        if err != nil || len(validators) == 0 {
            return false
        }
        return validators[0].NodeID == nodeID
    }, 2*time.Minute, 1*time.Second).Should(BeTrue())
}
```

## Delegation

### AddDelegatorTx Pattern

```go
func addDelegator(
    ctx context.Context,
    network *tmpnet.Network,
    validatorNodeID ids.NodeID,
    delegationAmount uint64,
    delegationDuration time.Duration,
) ids.ID {
    delegatorKey := network.PreFundedKeys[1]
    pWallet := createPChainWallet(ctx, network, delegatorKey)

    startTime := time.Now().Add(1 * time.Minute)
    endTime := startTime.Add(delegationDuration)

    txID, err := pWallet.IssueAddPermissionlessDelegatorTx(
        &txs.SubnetValidator{
            Validator: txs.Validator{
                NodeID: validatorNodeID,
                Start:  uint64(startTime.Unix()),
                End:    uint64(endTime.Unix()),
                Wght:   delegationAmount,
            },
            Subnet: ids.Empty,
        },
        &secp256k1fx.OutputOwners{
            Threshold: 1,
            Addrs:     []ids.ShortID{delegatorKey.Address()},
        },
    )
    Expect(err).NotTo(HaveOccurred())

    return txID
}
```

## Reward Distribution

### Testing Validator Rewards

Based on avalanchego `reward_validator_test.go`:

```go
ginkgo.It("should distribute validator rewards on completion",
    ginkgo.Label("rewards"),
    func() {
        ctx := context.Background()

        initialBalance := getBalance(ctx, network, validatorKey)

        // Add validator with min stake
        stakeAmount := 2000 * units.Avax
        nodeID, _ := addPrimaryNetworkValidator(
            ctx,
            network,
            stakeAmount,
            15*24*time.Hour, // 15 days
        )

        // Wait for validator period to end
        waitForValidatorRemoval(ctx, network, nodeID)

        // Check rewards received
        finalBalance := getBalance(ctx, network, validatorKey)

        // Expected: original stake + rewards
        // Reward calculation based on duration and stake
        expectedReward := calculateExpectedReward(stakeAmount, 15*24*time.Hour)
        Expect(finalBalance).To(BeNumerically(">=", initialBalance+stakeAmount+expectedReward))
    })
```

### Delegation Rewards with Cortina Fork

- **Pre-Cortina**: Delegatee receives 25% immediately
- **Post-Cortina**: Delegatee rewards deferred until validator exits

```go
ginkgo.It("should handle delegator rewards post-Cortina",
    ginkgo.Label("rewards", "delegation"),
    func() {
        ctx := context.Background()

        // Add validator
        nodeID, _ := addPrimaryNetworkValidator(ctx, network, 2*units.Avax, 24*time.Hour)

        // Add delegator
        delegatorInitial := getBalance(ctx, network, delegatorKey)
        delegationID := addDelegator(ctx, network, nodeID, 1*units.Avax, 12*time.Hour)

        // Wait for delegation period to end
        waitForDelegationEnd(ctx, network, delegationID)

        // Delegator receives 75% of rewards immediately (post-Cortina)
        delegatorFinal := getBalance(ctx, network, delegatorKey)
        delegatorReward := delegatorFinal - delegatorInitial - (1 * units.Avax)
        Expect(delegatorReward).To(BeNumerically(">", 0))

        // Validator (delegatee) receives 25% when their validation period ends
        // This is deferred until validator exits (post-Cortina behavior)
    })
```

## Uptime-Based Rewards

### E2E Uptime Test Pattern

From avalanchego `staking_rewards.go`:

```go
ginkgo.It("should only reward validators with sufficient uptime",
    ginkgo.Label("uptime", "e2e"),
    func() {
        ctx := context.Background()

        // Add two validators
        alphaID, _ := addPrimaryNetworkValidator(ctx, network, 2*units.Avax, 48*time.Hour)
        betaID, _ := addPrimaryNetworkValidator(ctx, network,
            2*units.Avax, 48*time.Hour)

        // Keep alpha online, stop beta
        betaNode := network.GetNode(betaID)
        err := betaNode.Stop(ctx)
        Expect(err).NotTo(HaveOccurred())

        // Wait for validation periods
        time.Sleep(48 * time.Hour) // In real tests, advance time

        // Alpha gets rewards (good uptime)
        alphaBalance := getBalance(ctx, network, alphaKey)
        Expect(alphaBalance).To(BeNumerically(">", 2*units.Avax))

        // Beta gets no rewards (insufficient uptime)
        betaBalance := getBalance(ctx, network, betaKey)
        Expect(betaBalance).To(Equal(2 * units.Avax)) // Only stake returned
    })
```

## Testing Edge Cases

### Insufficient Stake

```go
ginkgo.It("should reject validator with insufficient stake",
    ginkgo.Label("validation"),
    func() {
        ctx := context.Background()
        pWallet := createPChainWallet(ctx, network, fundedKey)

        // Try to add validator with less than minimum stake
        _, err := pWallet.IssueAddPermissionlessValidatorTx(
            &txs.SubnetValidator{
                Validator: txs.Validator{
                    NodeID: nodeID,
                    Start:  uint64(time.Now().Add(1 * time.Minute).Unix()),
                    End:    uint64(time.Now().Add(25 * time.Hour).Unix()),
                    Wght:   100, // Way below minimum
                },
                Subnet: ids.Empty,
            },
            /*...*/
        )
        Expect(err).To(HaveOccurred())
        Expect(err.Error()).To(ContainSubstring("insufficient stake"))
    })
```

### Over-Delegation

From `vm_regression_test.go`:

```go
ginkgo.It("should handle maximum delegation correctly",
    ginkgo.Label("validation", "delegation"),
    func() {
        ctx := context.Background()

        validatorStake := 2 * units.Avax
        nodeID, _ := addPrimaryNetworkValidator(ctx, network, validatorStake, 48*time.Hour)

        // First delegator: 5x validator stake (maximum)
        delegationID1 := addDelegator(ctx, network, nodeID, 5*validatorStake, 24*time.Hour)
        Expect(delegationID1).NotTo(BeEmpty())

        // Second delegator: Should fail (would exceed 5x limit)
        _, err := addDelegatorTx(ctx, network, nodeID, 1*units.Avax, 24*time.Hour)
        Expect(err).To(HaveOccurred())
    })
```

## Helper Functions

### Create P-Chain Wallet

```go
func createPChainWallet(
    ctx context.Context,
    network *tmpnet.Network,
    key *secp256k1.PrivateKey,
) primary.Wallet {
    nodeURI := network.Nodes[0].URI

    wallet, err := primary.MakeWallet(
        ctx,
        &primary.WalletConfig{
            URI:          nodeURI,
            AVAXKeychain: secp256k1fx.NewKeychain(key),
            EthKeychain:  secp256k1fx.NewKeychain(),
        },
    )
    Expect(err).NotTo(HaveOccurred())
    return wallet
}
```

### Get P-Chain Balance

```go
func getBalance(
    ctx context.Context,
    network *tmpnet.Network,
    key *secp256k1.PrivateKey,
) uint64 {
    pClient := platform.NewClient(network.Nodes[0].URI)

    utxos, err := pClient.GetUTXOs(
        ctx,
        []ids.ShortID{key.Address()},
        ids.Empty,
        0,
        100,
    )
    Expect(err).NotTo(HaveOccurred())

    var balance uint64
    for _, utxo := range utxos {
        balance += utxo.Out.Amount()
    }
    return balance
}
```

## Best Practices

1. **Use generous timeouts**: P-Chain operations can be slow
2. **Test uptime requirements**: Validators need sufficient uptime for rewards
3. **Handle fork behavior**: Cortina fork changed delegation reward distribution
4. **Test edge cases**: Minimum stake, maximum delegation, invalid times
5. **Verify balances**: Check stake return and reward distribution
6.
**Clean up validators**: Stop test nodes after validation periods

## Key Differences: P-Chain vs L1 Validators

| Aspect | P-Chain | L1 Validators |
|--------|---------|---------------|
| Transaction Type | AddValidatorTx | Contract calls |
| Staking | AVAX on P-Chain | Native/ERC20 tokens on L1 |
| Rewards | Protocol-level | Contract-managed |
| Uptime | Tracked by P-Chain | Can use uptime proofs |
| Registration | Direct P-Chain tx | Three-phase with Warp |

## Next Steps

- Test L1 contract-based validators
- Back to testing fundamentals

## Additional Resources

- [avalanchego P-Chain Tests](https://github.com/ava-labs/avalanchego/tree/master/vms/platformvm)
- [Reward Calculator Tests](https://github.com/ava-labs/avalanchego/blob/master/vms/platformvm/reward/calculator_test.go)
- [Staking Rewards E2E](https://github.com/ava-labs/avalanchego/blob/master/tests/e2e/p/staking_rewards.go)

# Subnet Testing (/docs/tooling/tmpnet/guides/subnet-testing)

---
title: Subnet Testing
description: Test subnet creation, validators, and cross-subnet interactions with tmpnet
---

This guide covers advanced subnet testing scenarios using tmpnet, including subnet creation, validator management, and testing cross-subnet functionality.
## Overview

tmpnet supports comprehensive subnet testing:

- Create subnets with specific validators
- Test validator operations (add/remove)
- Configure subnet parameters
- Test cross-subnet messaging with Warp
- Validate L1 conversions

## Creating a Subnet with Specific Validators

### Basic Example

Create a subnet validated by specific nodes:

```go
import (
    "context"
    "os"
    "time"

    "github.com/ava-labs/avalanchego/ids"
    "github.com/ava-labs/avalanchego/tests/fixture/tmpnet"
    "github.com/ava-labs/avalanchego/utils/constants"
    "github.com/ava-labs/avalanchego/utils/logging"
)

// Create 5-node network
network := &tmpnet.Network{
    Nodes: tmpnet.NewNodesOrPanic(5),
    DefaultRuntimeConfig: tmpnet.NodeRuntimeConfig{
        Process: &tmpnet.ProcessRuntimeConfig{
            AvalancheGoPath: os.Getenv("AVALANCHEGO_PATH"),
            PluginDir:       os.Getenv("AVAGO_PLUGIN_DIR"),
        },
    },
}

// Subnet validated by first 3 nodes only
subnet := &tmpnet.Subnet{
    Name: "my-subnet",
    ValidatorIDs: []ids.NodeID{
        network.Nodes[0].NodeID,
        network.Nodes[1].NodeID,
        network.Nodes[2].NodeID,
    },
    Chains: []*tmpnet.Chain{{
        VMID:    constants.XSVMID,
        Genesis: genesisBytes,
    }},
}
network.Subnets = []*tmpnet.Subnet{subnet}

// Bootstrap
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
defer cancel()
err := tmpnet.BootstrapNewNetwork(ctx, logging.NoLog{}, network, "")
if err != nil {
    panic(err)
}

println("Subnet ID:", subnet.SubnetID.String())
```

### Subnet Configuration

Customize subnet parameters:

```go
subnet.Config = tmpnet.ConfigMap{
    "proposerMinBlockDelay":       0,     // Minimum block delay
    "proposerNumHistoricalBlocks": 50000, // Historical blocks
}
```

## Testing Multiple Subnets

Create overlapping and isolated subnets:

```go
nodes := tmpnet.NewNodesOrPanic(7)

// Subnet A: nodes 0-2
subnetA := &tmpnet.Subnet{
    Name:         "subnet-a",
    ValidatorIDs: []ids.NodeID{nodes[0].NodeID, nodes[1].NodeID, nodes[2].NodeID},
    Chains:       []*tmpnet.Chain{chainA},
}

// Subnet B: nodes 2-4 (node 2 validates both A and B)
subnetB := &tmpnet.Subnet{
    Name:         "subnet-b",
    ValidatorIDs: []ids.NodeID{nodes[2].NodeID, nodes[3].NodeID, nodes[4].NodeID},
    Chains:       []*tmpnet.Chain{chainB},
}

// Subnet C: nodes 5-6 (isolated)
subnetC := &tmpnet.Subnet{
    Name:         "subnet-c",
    ValidatorIDs: []ids.NodeID{nodes[5].NodeID, nodes[6].NodeID},
    Chains:       []*tmpnet.Chain{chainC},
}

network := &tmpnet.Network{
    Nodes:   nodes,
    Subnets: []*tmpnet.Subnet{subnetA, subnetB, subnetC},
}
```

This lets you test:

- Shared validators (node 2 validates both A and B)
- Isolated subnets (subnet C)
- Cross-subnet messaging via shared validators

## Adding Validators to a Running Subnet

Test adding validators dynamically to an existing subnet:

```go
func addValidatorToSubnet(network *tmpnet.Network, subnet *tmpnet.Subnet) error {
    // Create a new ephemeral node
    newNode := tmpnet.NewEphemeralNode(tmpnet.FlagsMap{
        config.TrackSubnetsKey: subnet.SubnetID.String(),
    })

    // Start the node
    err := network.StartNode(context.Background(), newNode)
    if err != nil {
        return err
    }

    // Add as subnet validator using the subnet wallet
    // (Implementation details depend on your wallet setup)
    err = addSubnetValidator(subnet, newNode.NodeID)
    if err != nil {
        return err
    }

    // Wait for validator to become active
    return subnet.WaitForActiveValidators(context.Background(), newNode.NodeID)
}
```

## Testing Subnet-to-L1 Conversion

Test converting a subnet to an L1 blockchain:

```go
func testL1Conversion(t *testing.T) {
    // Create initial subnet
    network := createNetworkWithSubnet()
    defer network.Stop(context.Background())

    subnet := network.Subnets[0]

    // Perform L1 conversion operations
    // 1. Register L1 validators
    for _, node := range network.Nodes {
        err := registerL1Validator(subnet, node)
        require.NoError(t, err)
    }

    // 2. Convert subnet to L1
    err := convertSubnetToL1(subnet)
    require.NoError(t, err)

    // 3. Wait for validators to activate
    err = waitForL1Validators(subnet)
    require.NoError(t, err)

    // 4. Verify L1 functionality
    verifyL1Behavior(t, subnet)
}
```

## Cross-Subnet Messaging

Test Avalanche Warp Messaging between subnets:

```go
func testWarpMessaging(t *testing.T) {
    // Create network with two subnets
    network := createMultiSubnetNetwork()
    defer network.Stop(context.Background())

    sourceSubnet := network.Subnets[0]
    destSubnet := network.Subnets[1]

    // Send a Warp message from source to destination
    message := createWarpMessage(sourceSubnet)

    // Get signatures from source subnet validators
    signatures := collectWarpSignatures(sourceSubnet, message)

    // Submit message to destination subnet
    err := submitWarpMessage(destSubnet, message, signatures)
    require.NoError(t, err)

    // Verify message was received and processed
    verifyWarpMessage(t, destSubnet, message)
}
```

## Subnet Validator Lifecycle Testing

Test the complete validator lifecycle on a subnet:

```go
func TestSubnetValidatorLifecycle(t *testing.T) {
    network := setupNetwork(t)
    defer network.Stop(context.Background())

    subnet := network.Subnets[0]

    // Create a new node to add as validator
    node := tmpnet.NewEphemeralNode(tmpnet.FlagsMap{
        config.TrackSubnetsKey: subnet.SubnetID.String(),
    })

    // Start the node
    err := network.StartNode(context.Background(), node)
    require.NoError(t, err)

    // Add as pending validator
    t.Run("AddValidator", func(t *testing.T) {
        err := addSubnetValidator(subnet, node.NodeID, startTime, endTime, weight)
        require.NoError(t, err)
    })

    // Wait for validation period to start
    t.Run("WaitForActive", func(t *testing.T) {
        err := subnet.WaitForActiveValidators(context.Background(), node.NodeID)
        require.NoError(t, err)
    })

    // Verify validator is active
    t.Run("VerifyActive", func(t *testing.T) {
        active := isValidatorActive(subnet, node.NodeID)
        require.True(t, active)
    })

    // Remove validator
    t.Run("RemoveValidator", func(t *testing.T) {
        err := removeSubnetValidator(subnet, node.NodeID)
        require.NoError(t, err)
    })

    // Verify validator is removed
    t.Run("VerifyRemoved", func(t *testing.T) {
        active :=
            isValidatorActive(subnet, node.NodeID)
        require.False(t, active)
    })
}
```

## Testing Subnet Configuration Changes

Test how subnet configuration changes affect behavior:

```go
func testSubnetConfigUpdate(t *testing.T) {
    // Initial configuration
    subnet := &tmpnet.Subnet{
        Name: "configurable-subnet",
        Config: tmpnet.ConfigMap{
            "proposerMinBlockDelay": 1000, // 1 second
        },
        Chains:       []*tmpnet.Chain{chain},
        ValidatorIDs: validatorIDs,
    }

    network := &tmpnet.Network{
        Nodes:   nodes,
        Subnets: []*tmpnet.Subnet{subnet},
        DefaultRuntimeConfig: tmpnet.NodeRuntimeConfig{
            Process: &tmpnet.ProcessRuntimeConfig{
                AvalancheGoPath: avalanchegoPath,
                PluginDir:       pluginDir,
            },
        },
    }

    // Bootstrap network
    require.NoError(t, tmpnet.BootstrapNewNetwork(ctx, os.Stdout, network, ""))

    // Test behavior with initial config
    measureBlockTime(t, subnet, 1000)

    // Update configuration
    subnet.Config["proposerMinBlockDelay"] = 0

    // Restart nodes to apply new configuration
    network.Restart(context.Background())

    // Test behavior with updated config
    measureBlockTime(t, subnet, 0)
}
```

## Tracking Specific Subnets

Configure nodes to track specific subnets for testing:

```go
// Configure network to track subnet
network.DefaultFlags = tmpnet.FlagsMap{
    config.TrackSubnetsKey: subnetID.String(),
}

// Or configure individual nodes
node.Flags = tmpnet.FlagsMap{
    config.TrackSubnetsKey: fmt.Sprintf("%s,%s", subnet1.String(), subnet2.String()),
}
```

## Testing Subnet Validator Weights

Test different validator weight distributions:

```go
func testValidatorWeights(t *testing.T) {
    network := setupNetwork(t)

    // Add validators with different weights
    validators := []struct {
        nodeID ids.NodeID
        weight uint64
    }{
        {network.Nodes[0].NodeID, 100}, // 50% of total weight
        {network.Nodes[1].NodeID, 50},  // 25% of total weight
        {network.Nodes[2].NodeID, 30},  // 15% of total weight
        {network.Nodes[3].NodeID, 20},  // 10% of total weight
    }

    for _, v := range validators {
        err := addSubnetValidator(subnet, v.nodeID, startTime, endTime, v.weight)
        require.NoError(t, err)
    }

    // Test consensus with weighted validators
    testConsensusWithWeights(t, subnet, validators)
}
```

## Ephemeral Subnet Validators

Add temporary validators for specific test scenarios:

```go
func addEphemeralValidator(network *tmpnet.Network, subnet *tmpnet.Subnet) (*tmpnet.Node, error) {
    // Create ephemeral node that tracks the subnet
    ephemeralNode := tmpnet.NewEphemeralNode(tmpnet.FlagsMap{
        config.TrackSubnetsKey:           subnet.SubnetID.String(),
        config.SybilProtectionEnabledKey: "false", // For testing only
    })

    // Add to network
    err := network.AddEphemeralNode(context.Background(), ephemeralNode)
    if err != nil {
        return nil, err
    }

    // Add as subnet validator with short duration
    shortDuration := 5 * time.Minute
    err = addSubnetValidator(
        subnet,
        ephemeralNode.NodeID,
        time.Now(),
        time.Now().Add(shortDuration),
        20, // weight
    )
    return ephemeralNode, err
}
```

## Common Testing Patterns

### Testing Subnet Bootstrap

Verify that nodes can bootstrap from a subnet:

```go
func testSubnetBootstrap(t *testing.T) {
    // Create and bootstrap network with subnet
    network := createNetworkWithSubnet()
    defer network.Stop(context.Background())

    // Create a new node
    newNode := tmpnet.NewNode()
    newNode.Flags = tmpnet.FlagsMap{
        config.TrackSubnetsKey: subnet.SubnetID.String(),
    }

    // Start the node
    err := network.StartNode(context.Background(), newNode)
    require.NoError(t, err)

    // Verify the node bootstrapped the subnet
    verifySubnetBootstrap(t, newNode, subnet)
}
```

### Testing Subnet Chain Upgrades

Test deploying chain upgrades on a subnet:

```go
func testChainUpgrade(t *testing.T) {
    network := setupNetworkWithSubnet(t)

    // Deploy initial chain version
    // ... operate chain ...
    // Stop network
    network.Stop(context.Background())

    // Update chain configuration or VM binary
    updateChainConfig(network.Subnets[0])

    // Restart network
    err := network.Restart(context.Background())
    require.NoError(t, err)

    // Verify upgrade succeeded
    verifyChainUpgrade(t, network.Subnets[0])
}
```

## Troubleshooting

### Subnet Creation Fails

**Check:**

- Sufficient nodes are specified as validators (minimum 1)
- Nodes have generated staking keys
- Bootstrap node has sufficient funds for transactions

**Debug:**

```bash
# Check subnet creation logs
grep -i "subnet" ~/.tmpnet/networks/latest/NodeID-*/logs/main.log
```

### Validators Not Becoming Active

**Check:**

- Validation period has started
- Nodes are tracking the subnet
- Subnet validators were added correctly

**Debug:**

```bash
# Check if node is tracking subnet
cat ~/.tmpnet/networks/latest/NodeID-*/flags.json | jq '.["track-subnets"]'
```

### Cross-Subnet Messaging Issues

**Check:**

- Both subnets have active validators
- Nodes validating both subnets have proper connectivity
- Warp messaging is enabled

## Next Steps

- Choose between local and Kubernetes runtimes
- Monitor subnet behavior with metrics
- Detailed configuration options

## Additional Resources

- [Subnet Examples in E2E Tests](https://github.com/ava-labs/avalanchego/tree/master/tests/e2e)
- [Subnet Documentation](/docs/subnets)
- [Warp Messaging](/docs/cross-chain)

# Testing Custom VMs (/docs/tooling/tmpnet/guides/testing-custom-vms)

---
title: Testing Custom VMs
description: Deploy and test your custom Virtual Machine on a local temporary network
---

This guide shows you how to use tmpnet to test your custom Virtual Machine (VM) or L1 blockchain before deploying to testnet or mainnet.

## Overview

Testing custom VMs with tmpnet allows you to deploy your VM to a multi-node network, test consensus behavior, and iterate quickly before public deployment.

## Prerequisites

1. **tmpnet installed** - See [Installation](/docs/tooling/tmpnet/installation)
2.
**VM binary compiled** - Your VM binary in `~/.avalanchego/plugins/` 3. **Genesis file** - Genesis configuration for your chain ## Quick Example: Testing a Custom VM Here's a complete workflow for testing a custom VM: ### 1. Prepare Your VM Binary Build your VM and place it in the plugins directory: ```bash # Build your VM (example) cd /path/to/your-vm go build -o ~/.avalanchego/plugins/your-vm ./cmd/your-vm # Verify the binary exists ls -lh ~/.avalanchego/plugins/your-vm ``` The binary name should match your VM name. ### 2. Create Genesis Configuration Create a genesis file for your chain. The exact format depends on your VM. Example for a simple VM: ```json { "config": { "chainId": 12345, "feeConfig": { "gasLimit": 8000000 } }, "alloc": { "0x8db97C7cEcE249c2b98bDC0226Cc4C2A57BF52FC": { "balance": "0x295BE96E64066972000000" } } } ``` Save this as `genesis.json`. ### 3. Test Your VM Using Go Code Create a test script to deploy your VM: ```go package main import ( "context" "os" "time" "github.com/ava-labs/avalanchego/ids" "github.com/ava-labs/avalanchego/tests/fixture/tmpnet" "github.com/ava-labs/avalanchego/utils/logging" ) func main() { // Read genesis genesisBytes, _ := os.ReadFile("genesis.json") // Define your VM chain chain := &tmpnet.Chain{ VMID: ids.ID{'y', 'o', 'u', 'r', 'v', 'm', 'i', 'd'}, Genesis: genesisBytes, Config: `{"blockGasLimit": 8000000}`, } // Create subnet with your chain subnet := &tmpnet.Subnet{ Name: "my-vm-subnet", Chains: []*tmpnet.Chain{chain}, } // Create 5-node network network := &tmpnet.Network{ Nodes: tmpnet.NewNodesOrPanic(5), Subnets: []*tmpnet.Subnet{subnet}, DefaultRuntimeConfig: tmpnet.NodeRuntimeConfig{ Process: &tmpnet.ProcessRuntimeConfig{ AvalancheGoPath: "./bin/avalanchego", PluginDir: "./build/plugins", }, }, } // All nodes validate the subnet for _, node := range network.Nodes { subnet.ValidatorIDs = append(subnet.ValidatorIDs, node.NodeID) } // Bootstrap network ctx, cancel := context.WithTimeout(context.Background(), 
5*time.Minute) defer cancel() err := tmpnet.BootstrapNewNetwork( ctx, logging.NoLog{}, // Or use a real logger network, "", // Empty string uses default path (~/.tmpnet/networks) ) if err != nil { panic(err) } println("Chain ID:", subnet.Chains[0].ChainID.String()) // Test your chain... network.Stop(context.Background()) } ``` **Key steps:** 1. Read your genesis configuration 2. Define a `Chain` with your VM ID and genesis 3. Create a `Subnet` containing your chain 4. Set up `DefaultRuntimeConfig` with paths to avalanchego and plugins 5. Assign validator nodes to the subnet 6. Bootstrap and test ### 4. Interact with Your Chain Once your chain is deployed, interact with it using the chain's API: ```bash # Get the chain ID from the network configuration CHAIN_ID=$(cat ~/.tmpnet/networks/latest/subnets/your-vm-subnet.json | jq -r '.chains[0].chainId') # Get a node URI NODE_URI=$(cat ~/.tmpnet/networks/latest/NodeID-*/process.json | jq -r '.uri' | head -1) # Call your chain's API curl -X POST --data '{ "jsonrpc": "2.0", "method": "your.method", "params": {}, "id": 1 }' -H 'content-type:application/json;' ${NODE_URI}/ext/bc/${CHAIN_ID}/rpc ``` ## Development Workflow ### Iterative Testing **1. Make code changes** **2. Rebuild your VM:** ```bash go build -o ~/.avalanchego/plugins/your-vm ./cmd/your-vm ``` **3. 
Restart the network:** ```bash tmpnetctl stop-network go run test-vm.go ``` ### Automated Testing Integrate tmpnet into your test suite: ```go func TestYourVM(t *testing.T) { network := setupNetwork(t) defer network.Stop(context.Background()) chain := deployChain(t, network) t.Run("BasicTransaction", func(t *testing.T) { // Test transaction functionality }) t.Run("Consensus", func(t *testing.T) { // Test consensus behavior }) } ``` ### Monitor Logs ```bash # Watch logs from all nodes tail -f ~/.tmpnet/networks/latest/NodeID-*/logs/main.log # Search for errors grep -i error ~/.tmpnet/networks/latest/NodeID-*/logs/main.log ``` ## Common Patterns ### Testing VM Upgrades ```go // Start with v1 network := createNetworkWithVM("your-vm", "v1.0.0") // ... test v1 behavior ... network.Stop(ctx) // Replace plugin binary with v2 // cp your-vm-v2 ~/.avalanchego/plugins/your-vm network.Restart(ctx) // ... verify v2 upgrade ... ``` ### Custom Genesis Allocations Pre-fund specific addresses: ```go genesis := map[string]interface{}{ "alloc": map[string]interface{}{ "0xYourAddress": map[string]interface{}{ "balance": "0x1000000000000000000000", }, }, } genesisBytes, _ := json.Marshal(genesis) chain.Genesis = genesisBytes ``` ### Testing with Debug Logging Enable verbose logging for debugging: ```go network.DefaultFlags = tmpnet.FlagsMap{ "log-level": "debug", "log-display-level": "debug", } ``` ## Troubleshooting ### VM Binary Not Found **Error:** `plugin binary not found` **Solution:** Ensure your VM binary is in the correct location: ```bash ls -lh ~/.avalanchego/plugins/your-vm ``` The plugin directory can be customized with `--plugin-dir` flag. ### Genesis Validation Fails **Error:** `invalid genesis` **Solution:** Verify your genesis format matches your VM's expectations. Check logs: ```bash grep -i genesis ~/.tmpnet/networks/latest/NodeID-*/logs/main.log ``` ### Chain Not Starting **Error:** Chain doesn't appear in network **Solution:** 1. 
Verify VM binary is executable: `chmod +x ~/.avalanchego/plugins/your-vm` 2. Check node logs for VM loading errors 3. Ensure VM ID is correct and unique ### Subnet Validators Not Active **Error:** Validators don't become active **Solution:** - Ensure sufficient validators are assigned to the subnet - Check that nodes are tracking the subnet - Verify subnet creation succeeded in logs ## Next Steps Advanced subnet testing scenarios Choose between local and Kubernetes runtimes Set up monitoring for your test network ## Additional Resources - [Full tmpnet README](https://github.com/ava-labs/avalanchego/blob/master/tests/fixture/tmpnet/README.md) - [VM Testing Examples](https://github.com/ava-labs/avalanchego/tree/master/tests/e2e/vms) - [Custom VM Documentation](/docs/virtual-machines) # Transaction Utilities (/docs/tooling/tmpnet/guides/transaction-utilities) --- title: Transaction Utilities description: Helper functions for transactions, events, and common testing operations --- This guide covers utility patterns for working with transactions, events, and common operations in tmpnet tests. > Pattern guide only. The snippets mirror helpers used in icm-services (for example `tests/contracts/lib/icm-contracts/lib/subnet-evm/tests/utils`) and avalanchego e2e utilities. Copy from those source files for runnable code; adjust imports/types to your project. ## Transaction Management ### Calculating Transaction Parameters Every transaction needs gas parameters and nonce. Use this helper: ```go title="transaction_utils.go" package testutils import ( "context" "math/big" "github.com/ethereum/go-ethereum/common" "github.com/ethereum/go-ethereum/ethclient" . 
"github.com/onsi/gomega" ) // CalculateTxParams calculates gas parameters and nonce for a transaction func CalculateTxParams( ctx context.Context, l1 L1TestInfo, fromAddress common.Address, ) (*big.Int, *big.Int, uint64) { // Get base fee from latest block baseFee, err := l1.RPCClient.EstimateBaseFee(ctx) Expect(err).NotTo(HaveOccurred()) // Get suggested tip gasTipCap, err := l1.RPCClient.SuggestGasTipCap(ctx) Expect(err).NotTo(HaveOccurred()) // Get current nonce nonce, err := l1.RPCClient.NonceAt(ctx, fromAddress, nil) Expect(err).NotTo(HaveOccurred()) // Calculate gas fee cap: baseFee * 2.5 + maxPriorityFee gasFeeCap := new(big.Int).Mul(baseFee, big.NewInt(25)) gasFeeCap.Div(gasFeeCap, big.NewInt(10)) maxPriorityFee := big.NewInt(2_500_000_000) // 2.5 gwei gasFeeCap.Add(gasFeeCap, maxPriorityFee) // Cap the tip at maxPriorityFee if gasTipCap.Cmp(maxPriorityFee) > 0 { gasTipCap = maxPriorityFee } return gasFeeCap, gasTipCap, nonce } ``` ### Creating Transactions #### Native Transfer ```go const NativeTransferGas = uint64(21000) func CreateNativeTransferTransaction( ctx context.Context, l1 L1TestInfo, fromKey *ecdsa.PrivateKey, to common.Address, amount *big.Int, ) *types.Transaction { fromAddress := crypto.PubkeyToAddress(fromKey.PublicKey) gasFeeCap, gasTipCap, nonce := CalculateTxParams(ctx, l1, fromAddress) tx := types.NewTx(&types.DynamicFeeTx{ ChainID: l1.EVMChainID, Nonce: nonce, To: &to, Gas: NativeTransferGas, GasFeeCap: gasFeeCap, GasTipCap: gasTipCap, Value: amount, }) return SignTransaction(tx, fromKey, l1.EVMChainID) } func SendNativeTransfer( ctx context.Context, l1 L1TestInfo, fromKey *ecdsa.PrivateKey, to common.Address, amount *big.Int, ) *types.Receipt { tx := CreateNativeTransferTransaction(ctx, l1, fromKey, to, amount) return SendTransactionAndWaitForSuccess(ctx, l1, tx) } ``` #### Contract Call Transaction ```go func CreateContractCallTransaction( ctx context.Context, l1 L1TestInfo, fromKey *ecdsa.PrivateKey, contract common.Address, callData 
[]byte, gasLimit uint64, ) *types.Transaction { fromAddress := crypto.PubkeyToAddress(fromKey.PublicKey) gasFeeCap, gasTipCap, nonce := CalculateTxParams(ctx, l1, fromAddress) tx := types.NewTx(&types.DynamicFeeTx{ ChainID: l1.EVMChainID, Nonce: nonce, To: &contract, Gas: gasLimit, GasFeeCap: gasFeeCap, GasTipCap: gasTipCap, Data: callData, }) return SignTransaction(tx, fromKey, l1.EVMChainID) } ``` ### Signing Transactions ```go func SignTransaction( tx *types.Transaction, key *ecdsa.PrivateKey, chainID *big.Int, ) *types.Transaction { signer := types.NewLondonSigner(chainID) signedTx, err := types.SignTx(tx, signer, key) Expect(err).NotTo(HaveOccurred()) return signedTx } ``` ### Sending and Waiting ```go func SendTransactionAndWaitForSuccess( ctx context.Context, l1 L1TestInfo, tx *types.Transaction, ) *types.Receipt { err := l1.RPCClient.SendTransaction(ctx, tx) Expect(err).NotTo(HaveOccurred()) return WaitForTransactionSuccess(ctx, l1, tx.Hash()) } func SendTransactionAndWaitForFailure( ctx context.Context, l1 L1TestInfo, tx *types.Transaction, ) *types.Receipt { err := l1.RPCClient.SendTransaction(ctx, tx) Expect(err).NotTo(HaveOccurred()) return WaitForTransactionFailure(ctx, l1, tx.Hash()) } ``` ### Waiting for Receipts ```go func WaitForTransactionSuccess( ctx context.Context, l1 L1TestInfo, txHash common.Hash, ) *types.Receipt { var receipt *types.Receipt Eventually(func() bool { var err error receipt, err = l1.RPCClient.TransactionReceipt(ctx, txHash) return err == nil }, 30*time.Second, 500*time.Millisecond).Should(BeTrue(), "Transaction receipt not found: %s", txHash.Hex()) Expect(receipt.Status).To(Equal(uint64(1)), "Transaction failed: %s", txHash.Hex()) return receipt } func WaitForTransactionFailure( ctx context.Context, l1 L1TestInfo, txHash common.Hash, ) *types.Receipt { var receipt *types.Receipt Eventually(func() bool { var err error receipt, err = l1.RPCClient.TransactionReceipt(ctx, txHash) return err == nil }, 30*time.Second, 
500*time.Millisecond).Should(BeTrue()) Expect(receipt.Status).To(Equal(uint64(0)), "Transaction succeeded unexpectedly: %s", txHash.Hex()) return receipt } ``` ## Predicate Transactions (Warp) Predicate transactions include Warp messages in the access list: ```go func CreatePredicateTx( ctx context.Context, l1 L1TestInfo, contractAddress common.Address, signedWarpMessage *avalancheWarp.Message, senderKey *ecdsa.PrivateKey, gasLimit uint64, callData []byte, ) *types.Transaction { fromAddress := crypto.PubkeyToAddress(senderKey.PublicKey) gasFeeCap, gasTipCap, nonce := CalculateTxParams(ctx, l1, fromAddress) // Create predicate access list with Warp message tx := predicateutils.NewPredicateTx( l1.EVMChainID, nonce, &contractAddress, gasLimit, gasFeeCap, gasTipCap, big.NewInt(0), callData, types.AccessList{}, warp.ContractAddress, signedWarpMessage.Bytes(), ) return SignTransaction(tx, senderKey, l1.EVMChainID) } ``` ## Event Parsing ### Extract Events from Logs ```go func GetEventFromLogs[T any]( logs []*types.Log, parser func(*types.Log) (T, error), ) (T, error) { for _, log := range logs { event, err := parser(log) if err == nil { return event, nil } } var zero T return zero, errors.New("event not found in logs") } // Usage example event, err := GetEventFromLogs( receipt.Logs, teleporter.ParseSendCrossChainMessage, ) Expect(err).NotTo(HaveOccurred()) messageID := event.MessageID ``` ### With Transaction Trace Fallback For better debugging when events aren't found: ```go func GetEventFromLogsOrTrace[T any]( ctx context.Context, l1 L1TestInfo, receipt *types.Receipt, parser func(*types.Log) (T, error), ) T { event, err := GetEventFromLogs(receipt.Logs, parser) if err == nil { return event } // Event not found - trace transaction for debugging trace := TraceTransaction(ctx, l1.RPCClient, receipt.TxHash) ginkgo.GinkgoWriter.Printf("Transaction trace:\n%s\n", trace) Fail("Event not found in logs. 
See trace above.") var zero T return zero } ``` ## Transaction Tracing ### Get Transaction Trace ```go func TraceTransaction( ctx context.Context, client *ethclient.Client, txHash common.Hash, ) string { var result interface{} err := client.Client().CallContext( ctx, &result, "debug_traceTransaction", txHash, map[string]interface{}{ "tracer": "callTracer", }, ) if err != nil { return fmt.Sprintf("Failed to trace: %v", err) } jsonBytes, _ := json.MarshalIndent(result, "", " ") return string(jsonBytes) } func TraceTransactionAndExit( ctx context.Context, client *ethclient.Client, txHash common.Hash, ) { trace := TraceTransaction(ctx, client, txHash) ginkgo.GinkgoWriter.Printf("Transaction trace:\n%s\n", trace) Fail("Transaction trace requested") } ``` ## Contract Deployment ### Deploy Contract ```go func DeployContract( ctx context.Context, l1 L1TestInfo, fromKey *ecdsa.PrivateKey, contractBytecode []byte, ) (common.Address, *types.Receipt) { fromAddress := crypto.PubkeyToAddress(fromKey.PublicKey) gasFeeCap, gasTipCap, nonce := CalculateTxParams(ctx, l1, fromAddress) tx := types.NewTx(&types.DynamicFeeTx{ ChainID: l1.EVMChainID, Nonce: nonce, Gas: 5_000_000, GasFeeCap: gasFeeCap, GasTipCap: gasTipCap, Data: contractBytecode, }) signedTx := SignTransaction(tx, fromKey, l1.EVMChainID) receipt := SendTransactionAndWaitForSuccess(ctx, l1, signedTx) return receipt.ContractAddress, receipt } ``` ### Deploy with Constructor Args ```go func DeployContractWithArgs( ctx context.Context, l1 L1TestInfo, fromKey *ecdsa.PrivateKey, contractBytecode []byte, constructorArgs []byte, ) (common.Address, *types.Receipt) { // Combine bytecode and constructor args data := append(contractBytecode, constructorArgs...) 
return DeployContract(ctx, l1, fromKey, data) } ``` ## Block and Network Utilities ### Wait for Block Acceptance ```go func WaitForAllValidatorsToAcceptBlock( ctx context.Context, nodeURIs []string, blockchainID ids.ID, blockHeight uint64, ) { for _, nodeURI := range nodeURIs { Eventually(func() bool { client, err := ethclient.Dial(nodeURI + "/ext/bc/" + blockchainID.String() + "/rpc") if err != nil { return false } block, err := client.BlockByNumber(ctx, big.NewInt(int64(blockHeight))) if err != nil { return false } return block != nil }, 30*time.Second, 500*time.Millisecond).Should(BeTrue(), "Node %s did not accept block %d", nodeURI, blockHeight) } } ``` ### Advance Proposer VM For networks using Proposer VM: ```go func AdvanceProposerVM( ctx context.Context, l1 L1TestInfo, fundedKey *ecdsa.PrivateKey, numBlocks int, ) { recipient := common.HexToAddress("0x0123456789012345678901234567890123456789") for i := 0; i < numBlocks; i++ { // Send dummy transaction to produce block SendNativeTransfer( ctx, l1, fundedKey, recipient, big.NewInt(1), ) } } ``` ## Balance Checking ### Check Balance ```go func CheckBalance( ctx context.Context, address common.Address, expectedBalance *big.Int, client *ethclient.Client, ) { balance, err := client.BalanceAt(ctx, address, nil) Expect(err).NotTo(HaveOccurred()) Expect(balance).To(Equal(expectedBalance), "Address %s has balance %s, expected %s", address.Hex(), balance.String(), expectedBalance.String()) } ``` ### BigInt Helpers ```go func ExpectBigEqual(a, b *big.Int) { Expect(a.Cmp(b)).To(Equal(0), "Expected %s to equal %s", a.String(), b.String()) } func BigIntSub(a, b *big.Int) *big.Int { return new(big.Int).Sub(a, b) } func BigIntMul(a, b *big.Int) *big.Int { return new(big.Int).Mul(a, b) } func BigIntAdd(a, b *big.Int) *big.Int { return new(big.Int).Add(a, b) } ``` ## URI Conversion ### Convert HTTP to WebSocket/RPC ```go func HttpToWebsocketURI(uri string, blockchainID string) string { return strings.Replace(uri, "http://", "ws://", 1) + "/ext/bc/" + 
blockchainID + "/ws" } func HttpToRPCURI(uri string, blockchainID string) string { return uri + "/ext/bc/" + blockchainID + "/rpc" } ``` ## Complete Helper Package Example ```go title="testutils/helpers.go" package testutils import ( "context" "crypto/ecdsa" "math/big" "time" "github.com/ethereum/go-ethereum/common" "github.com/ethereum/go-ethereum/core/types" "github.com/ethereum/go-ethereum/crypto" . "github.com/onsi/gomega" ) type TxHelper struct { L1 L1TestInfo Key *ecdsa.PrivateKey From common.Address } func NewTxHelper(l1 L1TestInfo, key *ecdsa.PrivateKey) *TxHelper { return &TxHelper{ L1: l1, Key: key, From: crypto.PubkeyToAddress(key.PublicKey), } } func (h *TxHelper) SendNative( ctx context.Context, to common.Address, amount *big.Int, ) *types.Receipt { return SendNativeTransfer(ctx, h.L1, h.Key, to, amount) } func (h *TxHelper) CallContract( ctx context.Context, contract common.Address, callData []byte, gasLimit uint64, ) *types.Receipt { tx := CreateContractCallTransaction( ctx, h.L1, h.Key, contract, callData, gasLimit, ) return SendTransactionAndWaitForSuccess(ctx, h.L1, tx) } func (h *TxHelper) GetBalance(ctx context.Context) *big.Int { balance, err := h.L1.RPCClient.BalanceAt(ctx, h.From, nil) Expect(err).NotTo(HaveOccurred()) return balance } ``` ## Usage in Tests ```go var _ = ginkgo.Describe("[Transaction Tests]", func() { var helper *TxHelper ginkgo.BeforeEach(func() { helper = NewTxHelper(l1A, fundedKey) }) ginkgo.It("should transfer native tokens", func() { ctx := context.Background() recipient := common.HexToAddress("0x1234...") amount := big.NewInt(1e18) initialBalance := helper.GetBalance(ctx) receipt := helper.SendNative(ctx, recipient, amount) Expect(receipt.Status).To(Equal(uint64(1))) finalBalance := helper.GetBalance(ctx) // Account for gas cost gasUsed := new(big.Int).Mul( receipt.EffectiveGasPrice, big.NewInt(int64(receipt.GasUsed)), ) expected := BigIntSub( BigIntSub(initialBalance, amount), gasUsed, ) ExpectBigEqual(finalBalance, 
expected) }) }) ``` ## Best Practices 1. **Always use CalculateTxParams**: Don't hardcode gas values 2. **Use Eventually for receipts**: Network delays are common 3. **Trace failed transactions**: Use TraceTransaction for debugging 4. **Extract events safely**: Use GetEventFromLogsOrTrace 5. **Check transaction status**: Always verify receipt.Status 6. **Handle BigInt carefully**: Use helper functions to avoid mutations ## Next Steps Construct and sign Warp messages Use these utilities for Teleporter testing Back to testing fundamentals # Testing Native Token Staking (/docs/tooling/tmpnet/guides/validator-management-native-staking) --- title: Testing Native Token Staking description: Test complete validator lifecycle with native token staking on Avalanche L1s --- This guide covers testing native token staking validators on L1s, including the complete lifecycle: registration, delegation, rewards, and removal. > Pattern guide only. These examples follow the staking helpers in `icm-services` (see `tests/contracts/lib/icm-contracts/tests/network` for runnable code). They rely on shared utilities for contract bindings, Warp signatures, and tmpnet setup. ## Overview Native token staking allows validators to stake the L1's native currency (like AVAX) to secure the network. 
This guide covers: - Deploying a native staking manager - Registering validators with stake - Adding and removing delegators - Removing validators with uptime proofs - Testing edge cases and failures ## Prerequisites - Complete [Getting Started](/docs/tooling/tmpnet/guides/getting-started) - Understand [L1 Conversion](/docs/tooling/tmpnet/guides/l1-conversion) - Have an L1 converted to use native staking ## Complete Lifecycle Test ```go title="native_staking_test.go" package staking_test import ( "context" "crypto/ecdsa" "flag" "math/big" "os" "testing" "time" "github.com/ava-labs/avalanchego/ids" "github.com/ava-labs/avalanchego/tests/fixture/e2e" "github.com/ava-labs/avalanchego/tests/fixture/tmpnet" "github.com/ava-labs/avalanchego/utils/units" nativestaking "github.com/ava-labs/icm-contracts/abi-bindings/go/validator-manager/NativeTokenStakingManager" "github.com/ethereum/go-ethereum/common" "github.com/ethereum/go-ethereum/crypto" "github.com/onsi/ginkgo/v2" . "github.com/onsi/gomega" ) var ( network *tmpnet.Network e2eFlags *e2e.FlagVars l1Info L1TestInfo stakingManagerAddress common.Address fundedKey *ecdsa.PrivateKey ) func TestMain(m *testing.M) { e2eFlags = e2e.RegisterFlags() flag.Parse() os.Exit(m.Run()) } func TestNativeStaking(t *testing.T) { if os.Getenv("RUN_E2E") == "" { t.Skip("RUN_E2E not set") } RegisterFailHandler(ginkgo.Fail) ginkgo.RunSpecs(t, "Native Staking Test Suite") } var _ = ginkgo.BeforeSuite(func() { ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute) defer cancel() // Create network and convert L1 to native staking network, l1Info, stakingManagerAddress = setupNativeStakingL1(ctx) fundedKey = network.PreFundedKeys[0] }) var _ = ginkgo.AfterSuite(func() { if network != nil { network.Stop(context.Background()) } }) var _ = ginkgo.Describe("[Native Token Staking]", func() { ginkgo.It("should complete full validator lifecycle", ginkgo.Label("staking", "validator"), func() { ctx := context.Background() // Register new validator 
validationID, node := registerValidator( ctx, network, l1Info, stakingManagerAddress, 100*units.Avax, fundedKey, ) // Add delegator delegationID := addDelegator( ctx, l1Info, stakingManagerAddress, validationID, 50*units.Avax, fundedKey, ) // Wait for active period time.Sleep(2 * time.Second) // Remove delegator removeDelegator( ctx, network, l1Info, stakingManagerAddress, delegationID, fundedKey, ) // Remove validator removeValidator( ctx, network, l1Info, stakingManagerAddress, validationID, 100, // 100% uptime fundedKey, ) }) }) ``` ## Validator Registration ### Step-by-Step Registration Flow The registration process has three phases: 1. **Initialize** - Submit registration on L1 2. **P-Chain** - Register on P-Chain with Warp message 3. **Complete** - Finalize with P-Chain acknowledgment ```go func registerValidator( ctx context.Context, network *tmpnet.Network, l1 L1TestInfo, stakingManagerAddress common.Address, stakeAmount uint64, senderKey *ecdsa.PrivateKey, ) (ids.ID, *tmpnet.Node) { // Create new node to validate node := tmpnet.NewEphemeralNode(tmpnet.FlagsMap{}) err := network.StartNode(ctx, node) Expect(err).NotTo(HaveOccurred()) err = node.WaitForHealthy(ctx) Expect(err).NotTo(HaveOccurred()) // Step 1: Initialize registration on L1 registrationReceipt := initializeValidatorRegistration( ctx, l1, stakingManagerAddress, node, stakeAmount, senderKey, ) // Extract registration details from event validationID, warpMessage := extractRegistrationInfo(registrationReceipt) // Step 2: Register on P-Chain registerOnPChain( ctx, network, l1.SubnetID, node, stakeAmount, warpMessage, senderKey, ) // Step 3: Complete registration with P-Chain proof completeRegistration( ctx, network, l1, stakingManagerAddress, validationID, senderKey, ) // Verify validator is active verifyValidatorActive(ctx, l1, stakingManagerAddress, validationID) return validationID, node } ``` ### Initialize Registration ```go func initializeValidatorRegistration( ctx context.Context, l1 
L1TestInfo, stakingManagerAddress common.Address, node *tmpnet.Node, stakeAmount uint64, senderKey *ecdsa.PrivateKey, ) *types.Receipt { stakingManager, err := nativestaking.NewNativeTokenStakingManager( stakingManagerAddress, l1.RPCClient, ) Expect(err).NotTo(HaveOccurred()) // Prepare node PoP (Proof of Possession) nodeID := node.NodeID blsPublicKey := node.BlsPublicKey expiry := uint64(time.Now().Add(24 * time.Hour).Unix()) // Create transaction with staked native tokens opts, err := bind.NewKeyedTransactorWithChainID(senderKey, l1.EVMChainID) Expect(err).NotTo(HaveOccurred()) // Send native tokens as stake opts.Value = new(big.Int).SetUint64(stakeAmount) // Call initializeValidatorRegistration tx, err := stakingManager.InitializeValidatorRegistration( opts, nativestaking.ValidatorRegistrationInput{ NodeID: nodeID.Bytes(), BlsPublicKey: blsPublicKey, RegistrationExpiry: expiry, RemainingBalanceOwner: nativestaking.PChainOwner{ Threshold: 1, Addresses: []common.Address{ crypto.PubkeyToAddress(senderKey.PublicKey), }, }, DisableOwner: nativestaking.PChainOwner{ Threshold: 1, Addresses: []common.Address{ crypto.PubkeyToAddress(senderKey.PublicKey), }, }, }, uint16(20), // Delegation fee in basis points uint64(1), // Min stake duration: 1 second ) Expect(err).NotTo(HaveOccurred()) receipt := waitForSuccess(ctx, l1, tx.Hash()) return receipt } ``` ### Register on P-Chain ```go func registerOnPChain( ctx context.Context, network *tmpnet.Network, subnetID ids.ID, node *tmpnet.Node, stakeAmount uint64, warpMessage []byte, senderKey *ecdsa.PrivateKey, ) { // Get P-Chain wallet pWallet := network.GetPChainWallet(senderKey) // Issue RegisterL1ValidatorTx txID, err := pWallet.IssueRegisterL1ValidatorTx( stakeAmount, node.NodePoP.ProofOfPossession, warpMessage, ) Expect(err).NotTo(HaveOccurred()) // Wait for P-Chain acceptance Eventually(func() bool { txStatus, err := pWallet.GetTxStatus(ctx, txID) if err != nil { return false } return txStatus == status.Committed }, 30*time.Second, 
500*time.Millisecond).Should(BeTrue()) } ``` ### Complete Registration ```go func completeRegistration( ctx context.Context, network *tmpnet.Network, l1 L1TestInfo, stakingManagerAddress common.Address, validationID ids.ID, senderKey *ecdsa.PrivateKey, ) { // Get P-Chain info pChainInfo := getPChainInfo(network) // Query P-Chain for registration message unsignedMessage := createL1ValidatorRegistrationMessage( validationID, true, // valid ) // Sign with P-Chain validators aggregator := NewSignatureAggregator( pChainInfo.NodeURIs[0], []ids.ID{constants.PrimaryNetworkID}, ) defer aggregator.Shutdown() signedMessage, err := aggregator.CreateSignedMessage( unsignedMessage, nil, constants.PrimaryNetworkID, 67, ) Expect(err).NotTo(HaveOccurred()) // Complete on L1 with signed message stakingManager, _ := nativestaking.NewNativeTokenStakingManager( stakingManagerAddress, l1.RPCClient, ) tx := createPredicateTx( ctx, l1, stakingManagerAddress, signedMessage, senderKey, func(opts *bind.TransactOpts) (*types.Transaction, error) { return stakingManager.CompleteValidatorRegistration(opts, 0) }, ) err = l1.RPCClient.SendTransaction(ctx, tx) Expect(err).NotTo(HaveOccurred()) receipt := waitForSuccess(ctx, l1, tx.Hash()) // Verify completion event event, err := getEventFromLogs( receipt.Logs, stakingManager.ParseValidatorRegistrationCompleted, ) Expect(err).NotTo(HaveOccurred()) Expect(event.ValidationID).To(Equal(validationID)) } ``` ## Delegation ### Adding a Delegator ```go func addDelegator( ctx context.Context, l1 L1TestInfo, stakingManagerAddress common.Address, validationID ids.ID, delegationAmount uint64, delegatorKey *ecdsa.PrivateKey, ) ids.ID { // Step 1: Initialize delegation delegationID := initializeDelegation( ctx, l1, stakingManagerAddress, validationID, delegationAmount, delegatorKey, ) // Step 2: Complete delegation (similar to validator registration) completeDelegation( ctx, l1, stakingManagerAddress, delegationID, delegatorKey, ) return delegationID } func 
initializeDelegation( ctx context.Context, l1 L1TestInfo, stakingManagerAddress common.Address, validationID ids.ID, delegationAmount uint64, delegatorKey *ecdsa.PrivateKey, ) ids.ID { stakingManager, _ := nativestaking.NewNativeTokenStakingManager( stakingManagerAddress, l1.RPCClient, ) opts, _ := bind.NewKeyedTransactorWithChainID(delegatorKey, l1.EVMChainID) opts.Value = new(big.Int).SetUint64(delegationAmount) tx, err := stakingManager.InitializeDelegatorRegistration( opts, validationID, ) Expect(err).NotTo(HaveOccurred()) receipt := waitForSuccess(ctx, l1, tx.Hash()) // Extract delegation ID from event event, err := getEventFromLogs( receipt.Logs, stakingManager.ParseDelegatorAdded, ) Expect(err).NotTo(HaveOccurred()) return event.DelegationID } ``` ### Removing a Delegator ```go func removeDelegator( ctx context.Context, network *tmpnet.Network, l1 L1TestInfo, stakingManagerAddress common.Address, delegationID ids.ID, delegatorKey *ecdsa.PrivateKey, ) { // Step 1: Initialize end delegation initializeEndDelegation( ctx, l1, stakingManagerAddress, delegationID, delegatorKey, ) // Step 2: Complete end delegation with uptime proof completeEndDelegation( ctx, network, l1, stakingManagerAddress, delegationID, delegatorKey, ) // Verify delegation removed stakingManager, _ := nativestaking.NewNativeTokenStakingManager( stakingManagerAddress, l1.RPCClient, ) delegation, err := stakingManager.GetDelegation( &bind.CallOpts{}, delegationID, ) Expect(err).NotTo(HaveOccurred()) Expect(delegation.Status).To(Equal(DelegationStatusCompleted)) } ``` ## Validator Removal ### Removing with Uptime Proof ```go func removeValidator( ctx context.Context, network *tmpnet.Network, l1 L1TestInfo, stakingManagerAddress common.Address, validationID ids.ID, uptimePercentage uint64, senderKey *ecdsa.PrivateKey, ) { // Step 1: Initialize validator removal initializeEndValidation( ctx, l1, stakingManagerAddress, validationID, senderKey, ) // Step 2: Complete with uptime proof from P-Chain 
completeEndValidationWithUptime( ctx, network, l1, stakingManagerAddress, validationID, uptimePercentage, senderKey, ) // Verify validator removed verifyValidatorRemoved(ctx, l1, stakingManagerAddress, validationID) } func completeEndValidationWithUptime( ctx context.Context, network *tmpnet.Network, l1 L1TestInfo, stakingManagerAddress common.Address, validationID ids.ID, uptimePercentage uint64, senderKey *ecdsa.PrivateKey, ) { // Create L1ValidatorWeightMessage with uptime unsignedMessage := createL1ValidatorWeightMessage( validationID, 0, // nonce 0, // weight (removal) uptimePercentage, ) // Sign with P-Chain validators pChainInfo := getPChainInfo(network) aggregator := NewSignatureAggregator( pChainInfo.NodeURIs[0], []ids.ID{constants.PrimaryNetworkID}, ) defer aggregator.Shutdown() signedMessage, err := aggregator.CreateSignedMessage( unsignedMessage, nil, constants.PrimaryNetworkID, 67, ) Expect(err).NotTo(HaveOccurred()) // Complete on L1 stakingManager, _ := nativestaking.NewNativeTokenStakingManager( stakingManagerAddress, l1.RPCClient, ) tx := createPredicateTx( ctx, l1, stakingManagerAddress, signedMessage, senderKey, func(opts *bind.TransactOpts) (*types.Transaction, error) { return stakingManager.CompleteEndValidation(opts, 0) }, ) err = l1.RPCClient.SendTransaction(ctx, tx) Expect(err).NotTo(HaveOccurred()) receipt := waitForSuccess(ctx, l1, tx.Hash()) // Verify completion event event, err := getEventFromLogs( receipt.Logs, stakingManager.ParseValidationPeriodEnded, ) Expect(err).NotTo(HaveOccurred()) Expect(event.ValidationID).To(Equal(validationID)) } ``` ## Testing Edge Cases ### Minimum Stake Requirements ```go ginkgo.It("should enforce minimum stake", ginkgo.Label("staking", "validation"), func() { ctx := context.Background() // Try to register with less than minimum stake stakingManager, _ := nativestaking.NewNativeTokenStakingManager( stakingManagerAddress, l1Info.RPCClient, ) // Get minimum stake minStake, err := 
stakingManager.MinimumStakeAmount(&bind.CallOpts{}) Expect(err).NotTo(HaveOccurred()) // Try with less than minimum belowMinimum := new(big.Int).Sub(minStake, big.NewInt(1)) opts, _ := bind.NewKeyedTransactorWithChainID(fundedKey, l1Info.EVMChainID) opts.Value = belowMinimum _, err = stakingManager.InitializeValidatorRegistration( opts, validatorInput, 20, 1, ) // Should fail Expect(err).To(HaveOccurred()) }) ``` ### Expired Registration ```go ginkgo.It("should reject expired registration", ginkgo.Label("staking", "validation"), func() { ctx := context.Background() node := createTestNode(ctx, network) // Create registration with past expiry expiry := uint64(time.Now().Add(-1 * time.Hour).Unix()) validatorInput := nativestaking.ValidatorRegistrationInput{ NodeID: node.NodeID.Bytes(), BlsPublicKey: node.BlsPublicKey, RegistrationExpiry: expiry, // ... other fields } // Initialize registration receipt := initializeValidatorRegistration( ctx, l1Info, stakingManagerAddress, validatorInput, 100*units.Avax, fundedKey, ) validationID, warpMessage := extractRegistrationInfo(receipt) // Try to register on P-Chain - should fail due to expiry pWallet := network.GetPChainWallet(fundedKey) _, err := pWallet.IssueRegisterL1ValidatorTx( 100*units.Avax, node.NodePoP.ProofOfPossession, warpMessage, ) Expect(err).To(HaveOccurred()) Expect(err.Error()).To(ContainSubstring("expired")) }) ``` ## Helper Functions ### Verify Validator Status ```go func verifyValidatorActive( ctx context.Context, l1 L1TestInfo, stakingManagerAddress common.Address, validationID ids.ID, ) { stakingManager, _ := nativestaking.NewNativeTokenStakingManager( stakingManagerAddress, l1.RPCClient, ) validator, err := stakingManager.GetValidator(&bind.CallOpts{}, validationID) Expect(err).NotTo(HaveOccurred()) Expect(validator.Status).To(Equal(ValidatorStatusActive)) Expect(validator.Weight).To(BeNumerically(">", 0)) Expect(validator.StartedAt).To(BeNumerically(">", 0)) } ``` ### Check Delegation Rewards ```go func 
checkDelegationRewards( ctx context.Context, l1 L1TestInfo, stakingManagerAddress common.Address, delegationID ids.ID, expectedMinimum uint64, ) { stakingManager, _ := nativestaking.NewNativeTokenStakingManager( stakingManagerAddress, l1.RPCClient, ) delegation, err := stakingManager.GetDelegation(&bind.CallOpts{}, delegationID) Expect(err).NotTo(HaveOccurred()) // Check rewards accrued Expect(delegation.Rewards).To(BeNumerically(">=", expectedMinimum)) } ``` ## Best Practices 1. **Always complete registration**: Don't leave validators in pending state 2. **Test minimum/maximum stakes**: Verify contract validation works 3. **Handle P-Chain delays**: P-Chain operations can be slow, use appropriate timeouts 4. **Verify uptime calculations**: Test different uptime percentages 5. **Clean up validators**: Remove test validators in AfterEach/AfterSuite 6. **Test delegation limits**: Verify maximum delegators per validator ## Next Steps Convert subnets to L1s with validator managers Test staking and delegation workflows Complete configuration options for tmpnet # Warp Message Construction (/docs/tooling/tmpnet/guides/warp-messages) --- title: Warp Message Construction description: Learn how to construct, sign, and verify Warp messages for cross-chain communication --- Warp messages enable secure cross-chain communication on Avalanche. This guide covers constructing unsigned messages, aggregating signatures, and verifying signed messages. ## Overview Warp message flow: 1. **Extract** unsigned message from transaction logs 2. **Wait** for validator acceptance 3. **Aggregate** signatures from validators 4. **Create** signed message 5. 
**Include** in predicate transaction on destination ## Extracting Unsigned Messages ### From Transaction Logs ```go func ExtractWarpMessageFromLogs( ctx context.Context, receipt *types.Receipt, source L1TestInfo, ) *avalancheWarp.UnsignedMessage { // Find SendWarpMessage log var warpMessageBytes []byte for _, log := range receipt.Logs { if log.Topics[0] == warpMessageEventTopic { warpMessageBytes = log.Data break } } Expect(warpMessageBytes).NotTo(BeEmpty()) // Parse unsigned message unsignedMessage, err := avalancheWarp.ParseUnsignedMessage(warpMessageBytes) Expect(err).NotTo(HaveOccurred()) return unsignedMessage } ``` ## Signature Aggregation ### Setting Up Aggregator ```go type SignatureAggregator struct { client *aggregator.SignatureAggregatorClient subnetIDs []ids.ID } func NewSignatureAggregator( nodeURI string, subnetIDs []ids.ID, ) *SignatureAggregator { apiURI := fmt.Sprintf("%s/ext/bc/P", nodeURI) client, err := aggregator.NewSignatureAggregatorClient(apiURI) Expect(err).NotTo(HaveOccurred()) return &SignatureAggregator{ client: client, subnetIDs: subnetIDs, } } func (a *SignatureAggregator) Shutdown() { // Clean up resources } ``` ### Creating Signed Messages ```go func (a *SignatureAggregator) CreateSignedMessage( unsignedMessage *avalancheWarp.UnsignedMessage, justification []byte, subnetID ids.ID, quorumNum uint64, ) (*avalancheWarp.Message, error) { signedMessage, err := a.client.AggregateSignatures( context.Background(), unsignedMessage.ID(), justification, subnetID, quorumNum, ) return signedMessage, err } ``` ## Complete Construction Flow ```go func ConstructSignedWarpMessage( ctx context.Context, sourceReceipt *types.Receipt, source L1TestInfo, destination L1TestInfo, justification []byte, aggregator *SignatureAggregator, ) *avalancheWarp.Message { // Step 1: Extract unsigned message unsignedMessage := ExtractWarpMessageFromLogs(ctx, sourceReceipt, source) // Step 2: Wait for block acceptance WaitForAllValidatorsToAcceptBlock( ctx,
source.NodeURIs, source.BlockchainID, sourceReceipt.BlockNumber.Uint64(), ) // Step 3: Aggregate signatures (67% quorum) signedMessage, err := aggregator.CreateSignedMessage( unsignedMessage, justification, source.SubnetID, 67, // warp.WarpDefaultQuorumNumerator ) Expect(err).NotTo(HaveOccurred()) return signedMessage } ``` ## Using Signed Messages ### In Predicate Transactions ```go // Create transaction with Warp message tx := predicateutils.NewPredicateTx( l1.EVMChainID, nonce, &contractAddress, gasLimit, gasFeeCap, gasTipCap, big.NewInt(0), callData, types.AccessList{}, warp.ContractAddress, // Predicate address signedMessage.Bytes(), // Warp message ) ``` ## Best Practices 1. **Always wait for acceptance**: Don't aggregate before validators see the block 2. **Use 67% quorum**: Standard for Warp messages 3. **Clean up aggregator**: Always defer `aggregator.Shutdown()` 4. **Handle errors**: Signature aggregation can fail if nodes are down 5. **Cache aggregators**: Reuse for multiple messages in same test ## Next Steps Use Warp messages for Teleporter Helper functions for transactions # CLI Commands Reference (/docs/tooling/tmpnet/reference/cli-commands) --- title: CLI Commands Reference description: Complete reference for all tmpnetctl commands and options --- This reference covers all commands available in the `tmpnetctl` CLI tool. ## Global Flags These flags are available for all commands: | Flag | Description | Default | |------|-------------|---------| | `--network-dir` | Path to an existing network (needed for stop/restart/check) | `$TMPNET_NETWORK_DIR` if set | | `--log-format` | Logging format (auto, json) | `auto` | | `--help, -h` | Show help | | ## Network Commands ### start-network Start a new temporary network. 
```bash tmpnetctl start-network [flags] ``` **Flags:** | Flag | Type | Description | Default | |------|------|-------------|---------| | `--avalanchego-path` | string | Path to avalanchego binary | `$AVALANCHEGO_PATH` (required) | | `--plugin-dir` | string | Directory containing VM plugins | `$AVAGO_PLUGIN_DIR` or `~/.avalanchego/plugins` | | `--node-count` | int | Number of validator nodes | `5` | | `--network-owner` | string | Owner identifier for the network | `tmpnet-owner` | | `--root-dir` | string | Root directory for networks | `~/.tmpnet/networks` | **Example:** ```bash # Start default 5-node network tmpnetctl start-network --avalanchego-path=./bin/avalanchego # Start 3-node network tmpnetctl start-network --avalanchego-path=./bin/avalanchego --node-count=3 # Custom plugin directory tmpnetctl start-network \ --avalanchego-path=./bin/avalanchego \ --plugin-dir=/custom/plugins ``` **Output:** ``` Starting network with 5 nodes ... Started network /home/user/.tmpnet/networks/20240312-143052.123456 (UUID: abc-123...) Configure tmpnetctl to target this network by default: - source /home/user/.tmpnet/networks/20240312-143052.123456/network.env - export TMPNET_NETWORK_DIR=/home/user/.tmpnet/networks/20240312-143052.123456 - export TMPNET_NETWORK_DIR=/home/user/.tmpnet/networks/latest ``` ### stop-network Stop a running network. ```bash tmpnetctl stop-network [flags] ``` **Flags:** | Flag | Type | Description | Default | |------|------|-------------|---------| | `--network-dir` | string | Network directory to stop | `$TMPNET_NETWORK_DIR` (required) | **Example:** ```bash # Stop using TMPNET_NETWORK_DIR export TMPNET_NETWORK_DIR=~/.tmpnet/networks/latest tmpnetctl stop-network # Stop with explicit path tmpnetctl stop-network --network-dir=~/.tmpnet/networks/20240312-143052.123456 ``` ### restart-network Restart a stopped network. 
```bash tmpnetctl restart-network [flags] ``` **Flags:** | Flag | Type | Description | Default | |------|------|-------------|---------| | `--network-dir` | string | Network directory to restart | `$TMPNET_NETWORK_DIR` (required) | **Example:** ```bash # Restart using TMPNET_NETWORK_DIR tmpnetctl restart-network # Restart with explicit path tmpnetctl restart-network --network-dir=~/.tmpnet/networks/latest ``` ## Monitoring Commands ### start-metrics-collector Start Prometheus to collect metrics from networks. ```bash tmpnetctl start-metrics-collector [flags] ``` **Required Environment Variables:** - `PROMETHEUS_URL` - Prometheus backend URL - `PROMETHEUS_PUSH_URL` - Prometheus push URL - `PROMETHEUS_USERNAME` - Username for authentication - `PROMETHEUS_PASSWORD` - Password for authentication **Example:** ```bash # Set environment variables export PROMETHEUS_URL="https://prometheus.example.com" export PROMETHEUS_PUSH_URL="https://prometheus.example.com/api/v1/push" export PROMETHEUS_USERNAME="user" export PROMETHEUS_PASSWORD="pass" # Start collector tmpnetctl start-metrics-collector ``` **Output:** ``` Starting Prometheus metrics collector... Metrics collector started successfully ``` ### stop-metrics-collector Stop the running Prometheus metrics collector. ```bash tmpnetctl stop-metrics-collector ``` **Example:** ```bash tmpnetctl stop-metrics-collector ``` ### start-logs-collector Start Promtail to collect logs from networks. 
```bash tmpnetctl start-logs-collector [flags] ``` **Required Environment Variables:** - `LOKI_URL` - Loki backend URL - `LOKI_PUSH_URL` - Loki push URL - `LOKI_USERNAME` - Username for authentication - `LOKI_PASSWORD` - Password for authentication **Example:** ```bash # Set environment variables export LOKI_URL="https://loki.example.com" export LOKI_PUSH_URL="https://loki.example.com/loki/api/v1/push" export LOKI_USERNAME="user" export LOKI_PASSWORD="pass" # Start collector tmpnetctl start-logs-collector ``` ### stop-logs-collector Stop the running Promtail logs collector. ```bash tmpnetctl stop-logs-collector ``` **Example:** ```bash tmpnetctl stop-logs-collector ``` ### check-metrics Verify that metrics are being collected for a network. ```bash tmpnetctl check-metrics [flags] ``` **Flags:** | Flag | Type | Description | Default | |------|------|-------------|---------| | `--network-dir` | string | Network directory | `$TMPNET_NETWORK_DIR` (required) | **Required Environment Variables:** - `PROMETHEUS_URL` - `PROMETHEUS_USERNAME` - `PROMETHEUS_PASSWORD` **Example:** ```bash tmpnetctl check-metrics --network-dir=~/.tmpnet/networks/latest ``` **Output:** ``` Checking metrics for network abc-123... ✓ Metrics found for network ``` ### check-logs Verify that logs are being collected for a network. ```bash tmpnetctl check-logs [flags] ``` **Flags:** | Flag | Type | Description | Default | |------|------|-------------|---------| | `--network-dir` | string | Network directory | `$TMPNET_NETWORK_DIR` (required) | **Required Environment Variables:** - `LOKI_URL` - `LOKI_USERNAME` - `LOKI_PASSWORD` **Example:** ```bash tmpnetctl check-logs --network-dir=~/.tmpnet/networks/latest ``` ## Kubernetes Commands ### start-kind-cluster Start a local kind (Kubernetes in Docker) cluster for tmpnet. 
```bash tmpnetctl start-kind-cluster [flags] ``` **Flags:** | Flag | Type | Description | Default | |------|------|-------------|---------| | `--kubeconfig` | string | Path to kubeconfig file | `~/.kube/config` | | `--start-metrics-collector` | bool | Start metrics collector | `false` | | `--start-logs-collector` | bool | Start logs collector | `false` | | `--install-chaos-mesh` | bool | Install Chaos Mesh | `false` | **Example:** ```bash # Start basic kind cluster tmpnetctl start-kind-cluster # Start with monitoring tmpnetctl start-kind-cluster \ --start-metrics-collector \ --start-logs-collector # Start with Chaos Mesh for chaos engineering tmpnetctl start-kind-cluster --install-chaos-mesh ``` ## Utility Commands ### version Print tmpnetctl version information. ```bash tmpnetctl version ``` **Example:** ```bash tmpnetctl version ``` **Output:** ``` tmpnetctl version 1.11.0 ``` ### help Show help for tmpnetctl or a specific command. ```bash tmpnetctl help [command] ``` **Example:** ```bash # General help tmpnetctl help # Help for specific command tmpnetctl help start-network ``` ## Common Usage Patterns ### Setting Up a Network Complete workflow for creating and using a network: ```bash # Build binaries ./scripts/build.sh ./scripts/build_tmpnetctl.sh # Start network tmpnetctl start-network --avalanchego-path=./bin/avalanchego # Configure shell export TMPNET_NETWORK_DIR=~/.tmpnet/networks/latest # Use the network... # Stop when done tmpnetctl stop-network ``` ### Using with direnv Simplify commands with direnv: ```bash # Enable direnv cd avalanchego direnv allow # Now you can use simplified commands tmpnetctl start-network # No --avalanchego-path needed tmpnetctl stop-network tmpnetctl restart-network ``` ### Monitoring Workflow Set up monitoring for development: ```bash # Configure monitoring environment export PROMETHEUS_URL="..." export PROMETHEUS_PUSH_URL="..." export PROMETHEUS_USERNAME="..." export PROMETHEUS_PASSWORD="..." export LOKI_URL="..." 
export LOKI_PUSH_URL="..." export LOKI_USERNAME="..." export LOKI_PASSWORD="..." # Start collectors once tmpnetctl start-metrics-collector tmpnetctl start-logs-collector # Create/destroy networks as needed tmpnetctl start-network --avalanchego-path=./bin/avalanchego # ... test ... tmpnetctl stop-network # Collectors continue running # Stop when done tmpnetctl stop-metrics-collector tmpnetctl stop-logs-collector ``` ### Multiple Networks Manage multiple networks: ```bash # Start first network; resolve the latest symlink, since it moves when a new network starts tmpnetctl start-network --avalanchego-path=./bin/avalanchego NETWORK1=$(readlink -f ~/.tmpnet/networks/latest) # Start second network tmpnetctl start-network --avalanchego-path=./bin/avalanchego NETWORK2=$(readlink -f ~/.tmpnet/networks/latest) # Target specific networks tmpnetctl stop-network --network-dir=$NETWORK1 tmpnetctl restart-network --network-dir=$NETWORK2 ``` ## Environment Configuration ### Recommended Shell Setup Add to your `.bashrc` or `.zshrc`: ```bash # Set default network to latest export TMPNET_NETWORK_DIR=~/.tmpnet/networks/latest # Set avalanchego path export AVALANCHEGO_PATH=~/avalanchego/bin/avalanchego # Add tmpnetctl to PATH (if not using direnv) export PATH=$PATH:~/avalanchego/bin ``` ### Monitoring Environment Create a monitoring env file: ```bash # monitoring.env export PROMETHEUS_URL="https://prometheus.example.com" export PROMETHEUS_PUSH_URL="https://prometheus.example.com/api/v1/push" export PROMETHEUS_USERNAME="user" export PROMETHEUS_PASSWORD="pass" export LOKI_URL="https://loki.example.com" export LOKI_PUSH_URL="https://loki.example.com/loki/api/v1/push" export LOKI_USERNAME="user" export LOKI_PASSWORD="pass" ``` Source when needed: ```bash source monitoring.env tmpnetctl start-metrics-collector tmpnetctl start-logs-collector ``` ## Exit Codes tmpnetctl uses standard exit codes: - `0` - Success - `1` - General error - `2` - Invalid arguments or usage ## Error Handling ### Common Errors **Network directory not found:** ``` Error: network directory not found: /path/to/network
``` **Solution:** Check that `TMPNET_NETWORK_DIR` is set correctly or provide `--network-dir` **Binary not found:** ``` Error: avalanchego binary not found: /path/to/avalanchego ``` **Solution:** Verify `--avalanchego-path` points to a valid binary **Port already in use:** ``` Error: failed to start node: address already in use ``` **Solution:** Stop conflicting processes or use dynamic ports (default) **Missing monitoring credentials:** ``` Error: PROMETHEUS_URL environment variable not set ``` **Solution:** Set required environment variables for monitoring ## See Also - [Configuration Reference](/docs/tooling/tmpnet/reference/configuration) - [Quick Start Guide](/docs/tooling/tmpnet/quick-start) - [Monitoring Guide](/docs/tooling/tmpnet/guides/monitoring) - [Full README](https://github.com/ava-labs/avalanchego/blob/master/tests/fixture/tmpnet/README.md) # Configuration Reference (/docs/tooling/tmpnet/reference/configuration) --- title: Configuration Reference description: Complete reference for tmpnet configuration options --- This reference covers all configuration options available in tmpnet for networks, nodes, subnets, and chains. ## Network Configuration The `Network` type represents a complete temporary network. 
### Basic Properties ```go type Network struct { // Owner identifier for the network Owner string // UUID for unique identification across hosts UUID string // Genesis configuration for the network Genesis *genesis.Genesis // Collection of nodes in the network Nodes []*Node // Subnets to create on the network Subnets []*Subnet // Keys pre-funded in genesis PreFundedKeys []*secp256k1.PrivateKey // Default flags applied to all nodes DefaultFlags FlagsMap // Default runtime configuration for nodes DefaultRuntimeConfig NodeRuntimeConfig // Network directory path Dir string } ``` ### Default Flags Common default flags for networks: ```go network.DefaultFlags = tmpnet.FlagsMap{ // Logging "log-level": "info", // trace, debug, info, warn, error, fatal "log-display-level": "info", "log-format": "auto", // auto, plain, colors, json // Network "network-id": "local", "network-max-reconnect-delay": "1s", "public-ip": "127.0.0.1", "public-ip-resolution-service": "", // Staking "staking-enabled": "true", "staking-tls-cert-file": "", // Auto-generated "staking-tls-key-file": "", // Auto-generated // API "http-host": "", "http-port": "0", // Dynamic port "staking-port": "0", // Dynamic port // Performance "snow-sample-size": "2", "snow-quorum-size": "2", "proposervm-use-current-height": "true", } ``` ### Configuration on Disk Network configuration is stored at `[network-dir]/config.json`: ```json { "owner": "test-owner", "uuid": "abc-123-def-456", "defaultFlags": { "log-level": "info", "network-id": "local" }, "defaultRuntimeConfig": { "process": { "avalancheGoPath": "/path/to/avalanchego", "pluginDir": "/path/to/plugins" } }, "preFundedKeys": ["PrivateKey-..."], "subnets": [] } ``` ## Node Configuration The `Node` type represents a single AvalancheGo node. 
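Because tmpnet nodes bind dynamic ports by default, external tooling usually discovers a node's endpoints by reading the `process.json` file described later in this reference. A minimal, self-contained reader (field names are taken from the example file shown below; `ProcessContext` is an illustrative type for this sketch, not one exported by tmpnet):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ProcessContext mirrors the process.json fields shown in this
// reference (pid, uri, stakingAddress). Illustrative only.
type ProcessContext struct {
	PID            int    `json:"pid"`
	URI            string `json:"uri"`
	StakingAddress string `json:"stakingAddress"`
}

// parseProcessContext decodes the runtime info avalanchego writes,
// letting external tooling discover dynamically allocated ports.
func parseProcessContext(data []byte) (ProcessContext, error) {
	var ctx ProcessContext
	err := json.Unmarshal(data, &ctx)
	return ctx, err
}

func main() {
	// Sample matching the process.json example in this reference.
	sample := []byte(`{"pid": 12345, "uri": "http://127.0.0.1:56395", "stakingAddress": "127.0.0.1:56396"}`)
	ctx, err := parseProcessContext(sample)
	if err != nil {
		panic(err)
	}
	fmt.Println("API endpoint:", ctx.URI) // API endpoint: http://127.0.0.1:56395
}
```

In practice the bytes would come from `os.ReadFile` on `[node-dir]/process.json`.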
### Basic Properties ```go type Node struct { // Node identifier derived from staking certificate NodeID ids.NodeID // Node-specific flags (override network defaults) Flags FlagsMap // Optional node-specific runtime configuration RuntimeConfig *NodeRuntimeConfig // URI of the node's API endpoint (set at runtime) URI string // Staking address for P2P (set at runtime) StakingAddress string // Whether this is a temporary/ephemeral node IsEphemeral bool } ``` ### Node-Specific Flags Override network defaults for individual nodes: ```go node.Flags = tmpnet.FlagsMap{ "log-level": "debug", // More verbose than network default "http-port": "9650", // Fixed port instead of dynamic "track-subnets": "subnet-id", // Track specific subnet } ``` ### Node Configuration Files **Runtime Config** (`[node-dir]/config.json`): ```json { "process": { "avalancheGoPath": "/path/to/avalanchego", "pluginDir": "/path/to/plugins" } } ``` **Flags** (`[node-dir]/flags.json`): ```json { "log-level": "info", "http-host": "", "http-port": "0", "staking-port": "0", "network-id": "local", "genesis-file-content": "...", "track-subnets": "" } ``` The `"http-port": "0"` and `"staking-port": "0"` values tell the OS to allocate available ports dynamically. The actual allocated ports are written to `process.json` by avalanchego at startup. **Process Info** (`[node-dir]/process.json`): This file is created by **avalanchego itself** (not tmpnetctl) when the node starts. Tmpnet passes the `--process-context-file` flag to avalanchego, which writes the runtime information to this file. 
```json { "pid": 12345, "uri": "http://127.0.0.1:56395", "stakingAddress": "127.0.0.1:56396" } ``` | Field | Description | |-------|-------------| | `pid` | Process ID of the running avalanchego instance | | `uri` | HTTP API endpoint with the dynamically allocated port | | `stakingAddress` | Staking/P2P address with the dynamically allocated port | If you're running avalanchego manually (not via tmpnetctl), you must pass `--process-context-file=/path/to/process.json` for this file to be created. Without this flag, there's no way to discover which ports were allocated. ## Runtime Configuration Runtime configuration controls how nodes are executed. ### Process Runtime For local process-based nodes: ```go type ProcessRuntimeConfig struct { // Path to avalanchego binary AvalancheGoPath string // Plugin directory for custom VMs PluginDir string // Whether to reuse dynamic ports across restarts ReuseDynamicPorts bool } ``` **Usage:** ```go network.DefaultRuntimeConfig = tmpnet.NodeRuntimeConfig{ Process: &tmpnet.ProcessRuntimeConfig{ AvalancheGoPath: "/path/to/avalanchego", PluginDir: "~/.avalanchego/plugins", ReuseDynamicPorts: true, }, } ``` ### Kubernetes Runtime For Kubernetes-based deployment: ```go type KubeRuntimeConfig struct { // Kubeconfig file path ConfigPath string // Kubeconfig context to use ConfigContext string // Target namespace Namespace string // Docker image for nodes Image string // Persistent volume size in GB VolumeSizeGB int // Use exclusive scheduling (one pod per k8s node) UseExclusiveScheduling bool // Custom scheduling labels SchedulingLabelKey string SchedulingLabelValue string // Ingress configuration IngressHost string IngressSecret string } ``` ## FlagsMap `FlagsMap` is a string map for avalanchego flags with special handling for defaults. 
### Operations ```go flags := tmpnet.FlagsMap{ "log-level": "info", } // Set a value flags["log-level"] = "debug" // Set only if not already set flags.SetDefault("log-level", "warn") // No effect, already set // Set multiple defaults flags.SetDefaults(tmpnet.FlagsMap{ "log-display-level": "info", "http-port": "0", }) // Get a value logLevel := flags["log-level"] ``` ### Common Flags **Logging:** - `log-level` - Minimum log level (trace, debug, info, warn, error, fatal) - `log-display-level` - Level to display - `log-format` - Format (auto, plain, colors, json) **Network:** - `network-id` - Network identifier - `bootstrap-ips` - Bootstrap node IPs (auto-configured) - `bootstrap-ids` - Bootstrap node IDs (auto-configured) - `public-ip` - Node's public IP **API:** - `http-host` - HTTP host for API - `http-port` - HTTP port (0 for dynamic) - `http-allowed-hosts` - Allowed hosts for API **Staking:** - `staking-enabled` - Enable staking - `staking-port` - Staking port (0 for dynamic) - `staking-tls-cert-file` - TLS certificate path - `staking-tls-key-file` - TLS key path **Consensus:** - `snow-sample-size` - Sample size for consensus - `snow-quorum-size` - Quorum size - `snow-concurrent-repolls` - Concurrent repolls **Subnets:** - `track-subnets` - Comma-separated list of subnet IDs to track **Advanced:** - `proposervm-use-current-height` - Use current height in proposervm - `throttler-inbound-validator-alloc-size` - Validator bandwidth allocation - `consensus-on-accept-gossip-validator-size` - Gossip size ## Subnet Configuration The `Subnet` type represents a custom subnet. 
### Basic Properties ```go type Subnet struct { // User-defined name Name string // Subnet ID (set after creation) SubnetID ids.ID // Nodes that validate this subnet ValidatorIDs []ids.NodeID // Chains on this subnet Chains []*Chain // Subnet-specific configuration Config ConfigMap } ``` ### Subnet Config Options ```go subnet.Config = tmpnet.ConfigMap{ // Block proposal delay "proposerMinBlockDelay": 0, // Historical blocks to keep "proposerNumHistoricalBlocks": 50000, // Consensus parameters "consensusParameters": map[string]interface{}{ "k": 20, "alpha": 15, "betaVirtuous": 15, "betaRogue": 20, "concurrentRepolls": 4, "optimalProcessing": 10, "maxOutstandingItems": 256, "maxItemProcessingTime": 120000, }, } ``` ### Subnet Configuration File Stored at `[network-dir]/subnets/[subnet-name].json`: ```json { "name": "my-subnet", "subnetId": "2bRCr6B4MiEfSjidDwxDpdCyviwnfUVqB2HGwhm947w9YYqb7r", "validatorIds": ["NodeID-..."], "config": { "proposerMinBlockDelay": 0 }, "chains": [...] } ``` ## Chain Configuration The `Chain` type represents a blockchain on a subnet. 
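Since a chain's `Config` field (shown below) is a raw JSON string rather than a structured type, a quick well-formedness check before network creation catches quoting and escaping mistakes early. This is a convenience sketch, not a step tmpnet requires:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// validateChainConfig reports whether a chain's Config string is
// well-formed JSON, so malformed configs fail fast in the test
// harness instead of at chain creation.
func validateChainConfig(config string) error {
	if !json.Valid([]byte(config)) {
		return fmt.Errorf("chain config is not valid JSON: %q", config)
	}
	return nil
}

func main() {
	good := `{"blockGasLimit": 8000000}`
	bad := `{"blockGasLimit": }` // missing value
	fmt.Println(validateChainConfig(good))        // <nil>
	fmt.Println(validateChainConfig(bad) != nil)  // true
}
```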
### Basic Properties ```go type Chain struct { // Chain ID (set after creation) ChainID ids.ID // Virtual Machine ID VMID ids.ID // Genesis bytes for the chain Genesis []byte // Chain-specific configuration (JSON string) Config string // Pre-funded key for the chain PreFundedKey *secp256k1.PrivateKey // Arguments to get VM version VersionArgs []string } ``` ### Chain Configuration Example ```go chain := &tmpnet.Chain{ VMID: myVMID, Genesis: genesisBytes, Config: `{ "blockGasLimit": 8000000, "minGasPrice": 25000000000, "priorityRegossipFrequency": 1000000000 }`, PreFundedKey: testKey, VersionArgs: []string{"--version"}, } ``` ## Directory Structure Complete directory layout for a tmpnet network: ``` ~/.tmpnet/ ├── prometheus/ # Prometheus working directory │ └── file_sd_configs/ │ └── [network-uuid]-[node-id].json ├── promtail/ # Promtail working directory │ └── file_sd_configs/ │ └── [network-uuid]-[node-id].json └── networks/ └── [timestamp]/ # Network directory ├── config.json # Network configuration ├── genesis.json # Genesis file ├── network.env # Shell environment ├── metrics.txt # Grafana link ├── subnets/ │ └── [subnet-name].json # Subnet configuration └── [node-id]/ # Node directory ├── config.json # Runtime configuration ├── flags.json # Node flags ├── process.json # Process info ├── staking.crt # Staking certificate ├── staking.key # Staking key ├── signer.key # BLS signing key ├── logs/ │ └── main.log ├── db/ # Node database └── plugins/ # VM plugins ``` ## Environment Variables tmpnet uses these environment variables: | Variable | Purpose | Default | |----------|---------|---------| | `TMPNET_NETWORK_DIR` | Target network directory | None (must specify) | | `TMPNET_ROOT_NETWORK_DIR` | Root for new networks | `~/.tmpnet/networks` | | `AVALANCHEGO_PATH` | Path to avalanchego binary | None (must specify) | | `AVAGO_PLUGIN_DIR` | Plugin directory | `~/.avalanchego/plugins` | | `STACK_TRACE_ERRORS` | Enable stack traces | Not set | | `PROMETHEUS_URL` | 
Prometheus backend URL | None | | `PROMETHEUS_PUSH_URL` | Prometheus push URL | None | | `PROMETHEUS_USERNAME` | Prometheus username | None | | `PROMETHEUS_PASSWORD` | Prometheus password | None | | `LOKI_URL` | Loki backend URL | None | | `LOKI_PUSH_URL` | Loki push URL | None | | `LOKI_USERNAME` | Loki username | None | | `LOKI_PASSWORD` | Loki password | None | | `GRAFANA_URI` | Custom Grafana dashboard URI | Default Grafana instance | ## Configuration Precedence Configuration is applied in this order (later overrides earlier): 1. **tmpnet defaults** - Built-in defaults 2. **Network defaults** - `network.DefaultFlags` 3. **Node-specific** - `node.Flags` 4. **Runtime-generated** - Bootstrap IPs/IDs, genesis content, etc. Example: ```go // 1. tmpnet default: log-level = "info" // 2. Network default: network.DefaultFlags["log-level"] = "debug" // 3. Node-specific override: node.Flags["log-level"] = "trace" // Final result: node runs with log-level = "trace" ``` ## Best Practices 1. **Use defaults for common settings** - Set `network.DefaultFlags` for settings shared by all nodes 2. **Override per-node when needed** - Use `node.Flags` only for node-specific configuration 3. **Use dynamic ports** - Set ports to "0" for automatic allocation 4. **Enable verbose logging during development** - Use "debug" or "trace" log levels 5. **Configure subnet tracking** - Set `track-subnets` for nodes that need to sync subnets 6. 
**Use environment variables** - Store credentials in environment variables, not in code ## See Also - [CLI Commands Reference](/docs/tooling/tmpnet/reference/cli-commands) - [Runtime Environments Guide](/docs/tooling/tmpnet/guides/runtimes) - [Full README](https://github.com/ava-labs/avalanchego/blob/master/tests/fixture/tmpnet/README.md) # Account Management (/docs/tooling/avalanche-sdk/client/accounts) --- title: Account Management icon: users description: Learn how to create and manage accounts in the Avalanche Client SDK with support for EVM, X-Chain, and P-Chain operations. --- ## Overview Avalanche accounts work across all three chains—P-Chain, X-Chain, and C-Chain—with a single account. Each account provides both EVM addresses (for C-Chain) and XP addresses (for X/P-Chain), so you can interact with the entire Avalanche network without managing separate accounts. ## Account Structure Every Avalanche account has an EVM account for C-Chain and an optional XP account for X/P-Chain: ```typescript type AvalancheAccount = { evmAccount: Account; // C-Chain xpAccount?: XPAccount; // X/P-Chain getEVMAddress: () => Address; getXPAddress: (chain?: "X" | "P" | "C", hrp?: string) => XPAddress; }; ``` ### Quick Start ```typescript import { privateKeyToAvalancheAccount } from "@avalanche-sdk/client/accounts"; import { createAvalancheWalletClient } from "@avalanche-sdk/client"; import { avalanche } from "@avalanche-sdk/client/chains"; const account = privateKeyToAvalancheAccount("0x..."); const walletClient = createAvalancheWalletClient({ account, chain: avalanche, transport: { type: "http" }, }); // Get addresses for all chains const evmAddress = account.getEVMAddress(); // 0x742d35Cc... const xChainAddress = account.getXPAddress("X"); // X-avax1... const pChainAddress = account.getXPAddress("P"); // P-avax1... ``` ## Account Types ### Local Accounts Local accounts store keys on your machine and sign transactions before broadcasting. 
Use these for server-side apps, bots, or when you need full control. - [Private Key Accounts](accounts/local/private-key) - Simple and direct - [Mnemonic Accounts](accounts/local/mnemonic) - Easy recovery with seed phrases - [HD Key Accounts](accounts/local/hd-key) - Advanced key derivation ### JSON-RPC Accounts JSON-RPC accounts use external wallets (MetaMask, Core, etc.) for signing. Perfect for browser-based dApps where users control their own keys. [Learn more about JSON-RPC accounts →](accounts/json-rpc) ## Working with Accounts ### EVM Account The `evmAccount` handles all C-Chain operations—smart contracts, ERC-20 transfers, and standard EVM interactions. ```typescript const evmAccount = account.evmAccount; console.log(evmAccount.address); // 0x742d35Cc... ``` ### XP Account The `xpAccount` handles X-Chain and P-Chain operations—UTXO transactions, asset transfers, and staking. ```typescript if (account.xpAccount) { const xpAccount = account.xpAccount; console.log(xpAccount.publicKey); } ``` ### Getting Addresses ```typescript // C-Chain address const evmAddress = account.getEVMAddress(); // 0x742d35Cc... // X/P-Chain addresses const xChainAddress = account.getXPAddress("X"); // X-avax1... const pChainAddress = account.getXPAddress("P"); // P-avax1... 
// Network-specific (mainnet vs testnet) const mainnet = account.getXPAddress("X", "avax"); const testnet = account.getXPAddress("X", "fuji"); ``` ## Creating Accounts ### Private Key ```typescript import { privateKeyToAvalancheAccount } from "@avalanche-sdk/client/accounts"; const account = privateKeyToAvalancheAccount("0x..."); ``` [Private Key Accounts →](accounts/local/private-key) ### Mnemonic ```typescript import { mnemonicsToAvalancheAccount } from "@avalanche-sdk/client/accounts"; const account = mnemonicsToAvalancheAccount("abandon abandon abandon..."); ``` [Mnemonic Accounts →](accounts/local/mnemonic) ### HD Key ```typescript import { hdKeyToAvalancheAccount, HDKey } from "@avalanche-sdk/client/accounts"; const hdKey = HDKey.fromMasterSeed(seed); const account = hdKeyToAvalancheAccount(hdKey, { accountIndex: 0 }); ``` [HD Key Accounts →](accounts/local/hd-key) ## Address Formats - **C-Chain:** `0x...` (Ethereum-compatible) - **X/P-Chain:** `avax1...` or `X-avax1...` / `P-avax1...` (Bech32-encoded) [Network-Specific Addresses →](accounts/local/addresses) ## Security **Never expose private keys or mnemonics in client-side code or commit them to version control. 
Use environment variables.** ```typescript // ✅ Good const account = privateKeyToAvalancheAccount(process.env.PRIVATE_KEY!); // ❌ Bad const account = privateKeyToAvalancheAccount("0x1234..."); ``` ## Comparison Table | Feature | Private Key | Mnemonic | HD Key | JSON-RPC | | ----------------- | ----------- | --------- | -------- | ------------ | | **Ease of Use** | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐ | ⭐⭐⭐ | ⭐⭐⭐⭐⭐ | | **Recovery** | ❌ | ✅ | ✅ | ❌ | | **Multi-Account** | ❌ | ✅ | ✅ | ❌ | | **Security** | ⚠️ High | ✅ High | ✅ High | ✅ Very High | | **Use Case** | Server/Bots | User Apps | Advanced | User Apps | ## Next Steps - [Private Key Accounts](accounts/local/private-key) - Simple and direct - [Mnemonic Accounts](accounts/local/mnemonic) - Easy recovery - [HD Key Accounts](accounts/local/hd-key) - Advanced key derivation - [JSON-RPC Accounts](accounts/json-rpc) - Browser wallet integration - [Account Utilities](accounts/local/utilities) - Helper functions - [Wallet Operations](methods/wallet-methods/wallet) - Send transactions # API Clients (/docs/tooling/avalanche-sdk/client/clients/api-clients) --- title: API Clients --- ## Overview API clients provide access to node-level operations. They're included with the main Avalanche Client and handle administrative tasks, node information, health monitoring, and indexed blockchain queries. 
## Accessing from Avalanche Client All API clients are available on the main Avalanche Client: ```typescript import { createAvalancheClient } from "@avalanche-sdk/client"; import { avalanche } from "@avalanche-sdk/client/chains"; const client = createAvalancheClient({ chain: avalanche, transport: { type: "http" }, }); // Admin API - Node configuration and profiling const admin = client.admin; // Info API - Node and network information const info = client.info; // Health API - Node health monitoring const health = client.health; // ProposerVM API - ProposerVM operations per chain const proposervmPChain = client.proposerVM.pChain; const proposervmXChain = client.proposerVM.xChain; const proposervmCChain = client.proposerVM.cChain; // Index API - Indexed blockchain queries const indexPChainBlock = client.indexBlock.pChain; const indexCChainBlock = client.indexBlock.cChain; const indexXChainBlock = client.indexBlock.xChain; const indexXChainTx = client.indexTx.xChain; ``` ## Admin API Client Node configuration, aliases, logging, and profiling. ### From Avalanche Client ```typescript const admin = client.admin; // Example: Set logger level await admin.setLoggerLevel({ loggerName: "C", logLevel: "DEBUG", }); ``` ### Create Standalone Client ```typescript import { createAdminApiClient } from "@avalanche-sdk/client"; import { avalanche } from "@avalanche-sdk/client/chains"; const adminClient = createAdminApiClient({ chain: avalanche, transport: { type: "http", url: "https://api.avax.network/ext/admin", }, }); await adminClient.alias({ endpoint: "bc/X", alias: "myAlias", }); ``` [View all Admin API methods →](methods/public-methods/api#admin-api-client) ## Info API Client Node and network information, statistics, and status. 
### From Avalanche Client ```typescript const info = client.info; // Example: Get network info const networkID = await info.getNetworkID(); const version = await info.getNodeVersion(); ``` ### Create Standalone Client ```typescript import { createInfoApiClient } from "@avalanche-sdk/client"; import { avalanche } from "@avalanche-sdk/client/chains"; const infoClient = createInfoApiClient({ chain: avalanche, transport: { type: "http" }, }); const networkID = await infoClient.getNetworkID(); const version = await infoClient.getNodeVersion(); ``` [View all Info API methods →](methods/public-methods/api#info-api-client) ## Health API Client Node health monitoring and status checks. ### From Avalanche Client ```typescript const health = client.health; // Example: Check node health const status = await health.health({}); const isAlive = await health.liveness(); ``` ### Create Standalone Client ```typescript import { createHealthApiClient } from "@avalanche-sdk/client"; import { avalanche } from "@avalanche-sdk/client/chains"; const healthClient = createHealthApiClient({ chain: avalanche, transport: { type: "http" }, }); const health = await healthClient.health({}); const liveness = await healthClient.liveness(); ``` [View all Health API methods →](methods/public-methods/api#health-api-client) ## ProposerVM API Client ProposerVM operations for each chain. 
### From Avalanche Client ```typescript // Access ProposerVM for each chain const proposervmPChain = client.proposerVM.pChain; const proposervmXChain = client.proposerVM.xChain; const proposervmCChain = client.proposerVM.cChain; ``` ### Create Standalone Client ```typescript import { createProposervmApiClient } from "@avalanche-sdk/client"; import { avalanche } from "@avalanche-sdk/client/chains"; // P-Chain ProposerVM const proposervmPChain = createProposervmApiClient({ chain: avalanche, transport: { type: "http" }, clientType: "proposervmPChain", }); // X-Chain ProposerVM const proposervmXChain = createProposervmApiClient({ chain: avalanche, transport: { type: "http" }, clientType: "proposervmXChain", }); // C-Chain ProposerVM const proposervmCChain = createProposervmApiClient({ chain: avalanche, transport: { type: "http" }, clientType: "proposervmCChain", }); // Example: Get proposed height const pChainHeight = await proposervmPChain.getProposedHeight(); ``` ## Index API Clients Fast indexed queries for blockchain data. 
### From Avalanche Client ```typescript // Block indexes const indexPChainBlock = client.indexBlock.pChain; const indexCChainBlock = client.indexBlock.cChain; const indexXChainBlock = client.indexBlock.xChain; // Transaction index const indexXChainTx = client.indexTx.xChain; // Example: Get last accepted block const lastBlock = await indexPChainBlock.getLastAccepted({ encoding: "hex", }); ``` ### Create Standalone Client ```typescript import { createIndexApiClient } from "@avalanche-sdk/client"; import { avalanche } from "@avalanche-sdk/client/chains"; // P-Chain block index const indexPChainBlock = createIndexApiClient({ chain: avalanche, transport: { type: "http" }, clientType: "indexPChainBlock", }); // C-Chain block index const indexCChainBlock = createIndexApiClient({ chain: avalanche, transport: { type: "http" }, clientType: "indexCChainBlock", }); // X-Chain block index const indexXChainBlock = createIndexApiClient({ chain: avalanche, transport: { type: "http" }, clientType: "indexXChainBlock", }); // X-Chain transaction index const indexXChainTx = createIndexApiClient({ chain: avalanche, transport: { type: "http" }, clientType: "indexXChainTx", }); // Example: Get container by index const block = await indexPChainBlock.getContainerByIndex({ index: 12345, encoding: "hex", }); ``` [View all Index API methods →](methods/public-methods/api#index-api-clients) ## Quick Examples ### Node Health Check ```typescript const client = createAvalancheClient({ chain: avalanche, transport: { type: "http" }, }); const health = await client.health.health({}); const liveness = await client.health.liveness(); console.log("Node healthy:", health.healthy); ``` ### Get Node Information ```typescript const version = await client.info.getNodeVersion(); const networkID = await client.info.getNetworkID(); const nodeID = await client.info.getNodeID(); console.log(`Node ${nodeID.nodeID} v${version} on network ${networkID}`); ``` ### Query Indexed Blocks ```typescript const lastBlock = 
await client.indexBlock.pChain.getLastAccepted({ encoding: "hex", }); const block = await client.indexBlock.cChain.getContainerByIndex({ index: 12345, encoding: "hex", }); ``` ## When to Use - **Admin API**: Node configuration, profiling, logging (requires admin access) - **Info API**: Node and network information - **Health API**: Health monitoring and status checks - **Index API**: Fast indexed queries for blocks and transactions - **ProposerVM API**: ProposerVM operations per chain **Note:** Admin API operations require administrative access and may not be available on public endpoints. ## Next Steps - **[API Methods Reference](methods/public-methods/api)** - Complete method documentation - **[Avalanche Client](clients/avalanche-client)** - Main client operations - **[Wallet Client](clients/wallet-client)** - Transaction operations # Avalanche Client (/docs/tooling/avalanche-sdk/client/clients/avalanche-client) --- title: Avalanche Client --- ## Overview The Avalanche Client (also known as the Public Client) is the main client for read-only operations across all Avalanche chains. It provides a unified interface for querying data from P-Chain, X-Chain, C-Chain, and various API endpoints. **When to use:** Use the Avalanche Client when you need to query blockchain data but don't need to send transactions. ## Installation & Setup For setup instructions, see the [Getting Started](/avalanche-sdk/client-sdk/getting-started) guide. 
```typescript import { createAvalancheClient } from "@avalanche-sdk/client"; import { avalanche } from "@avalanche-sdk/client/chains"; const client = createAvalancheClient({ chain: avalanche, transport: { type: "http" }, }); ``` ## Available Clients The Avalanche Client automatically provides access to all chain-specific and API clients: ```typescript // Chain clients client.pChain; // P-Chain operations (validators, staking, subnets) client.xChain; // X-Chain operations (assets, UTXOs) client.cChain; // C-Chain operations (EVM, atomic transactions) // API clients client.admin; // Admin API operations client.info; // Info API operations client.health; // Health API operations client.proposerVM.pChain; // ProposerVM API for P Chain client.proposerVM.xChain; // ProposerVM API for X Chain client.proposerVM.cChain; // ProposerVM API for C Chain client.indexBlock.pChain; // P-Chain block index client.indexBlock.cChain; // C-Chain block index client.indexBlock.xChain; // X-Chain block index client.indexTx.xChain; // X-Chain transaction index ``` ## Available Methods The Avalanche Client extends viem's Public Client and provides additional Avalanche-specific methods: ### Avalanche-Specific Methods - **Public Methods**: `baseFee`, `getChainConfig`, `maxPriorityFeePerGas`, `feeConfig`, `getActiveRulesAt` For complete documentation, see [Public Methods Reference](/avalanche-sdk/client-sdk/methods/public-methods/public). 
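A typical use of the fee methods listed above is combining `baseFee` and `maxPriorityFeePerGas` into an EIP-1559 fee cap before sending a transaction. A minimal sketch — doubling the base fee for headroom is a common convention rather than an SDK rule, and the `bigint` wei return types are assumptions:

```typescript
// Sketch: derive an EIP-1559 maxFeePerGas from the base fee and tip.
// Doubling the base fee gives headroom for base-fee movement in the
// next few blocks; this is a convention, not an SDK requirement.
function suggestMaxFeePerGas(baseFeeWei: bigint, priorityFeeWei: bigint): bigint {
  return baseFeeWei * 2n + priorityFeeWei;
}

// Assumed usage against an Avalanche Client (return types assumed bigint wei):
//   const baseFee = await client.baseFee();
//   const tip = await client.maxPriorityFeePerGas();
//   const maxFee = suggestMaxFeePerGas(baseFee, tip);
console.log(suggestMaxFeePerGas(25_000_000_000n, 1_000_000_000n)); // → 51000000000n
```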
### Chain-Specific Methods Access methods through chain clients: - **P-Chain Methods**: See [P-Chain Client](/avalanche-sdk/client-sdk/clients/p-chain-client) and [P-Chain Methods Reference](/avalanche-sdk/client-sdk/methods/public-methods/p-chain) - **X-Chain Methods**: See [X-Chain Client](/avalanche-sdk/client-sdk/clients/x-chain-client) and [X-Chain Methods Reference](/avalanche-sdk/client-sdk/methods/public-methods/x-chain) - **C-Chain Methods**: See [C-Chain Client](/avalanche-sdk/client-sdk/clients/c-chain-client) and [C-Chain Methods Reference](/avalanche-sdk/client-sdk/methods/public-methods/c-chain) ### viem Public Client Methods The client extends viem's Public Client, providing access to all standard EVM actions: - `getBalance`, `getBlock`, `getBlockNumber`, `getTransaction`, `getTransactionReceipt` - `readContract`, `call`, `estimateGas`, `getCode`, `getStorageAt` - And many more... See the [viem documentation](https://viem.sh/docs/getting-started) for all available EVM actions. 
## Common Operations

### Query P-Chain Data

```typescript
// Get current block height
const height = await client.pChain.getHeight();

// Get current validators
const validators = await client.pChain.getCurrentValidators({
  subnetID: "11111111111111111111111111111111LpoYY",
});

// Get subnet information
const subnet = await client.pChain.getSubnet({
  subnetID: "11111111111111111111111111111111LpoYY",
});

// Get balance
const balance = await client.pChain.getBalance({
  addresses: ["P-custom18jma8ppw3nhx5r4ap8clazz0dps7rv5u9xde7p"],
});
```

### Query X-Chain Data

```typescript
// Get balance for specific asset
const balance = await client.xChain.getBalance({
  addresses: ["X-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5"],
  assetID: "AVAX",
});

// Get all balances
const allBalances = await client.xChain.getAllBalances({
  addresses: ["X-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5"],
});

// Get asset information
const asset = await client.xChain.getAssetDescription({
  assetID: "FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z",
});
```

### Query C-Chain Data

```typescript
// Get EVM balance (viem action)
const balance = await client.getBalance({
  address: "0x742d35Cc6634C0532925a3b8D4C9db96C4b4d8b6",
});

// Get transaction receipt (viem action)
const receipt = await client.getTransactionReceipt({
  hash: "0x...",
});

// Get base fee (Avalanche-specific)
const baseFee = await client.baseFee();

// Get chain config
const chainConfig = await client.getChainConfig();

// Get atomic transaction
const atomicTx = await client.cChain.getAtomicTx({
  txID: "2QouvMUbQ6oy7yQ9tLvL3L8tGQG2QK1wJ1q1wJ1q1wJ1q1wJ1q1wJ1q1wJ1",
});
```

### Query API Data

```typescript
// Info API - Get connected peers
const peers = await client.info.peers();

// Info API - Get node version
const version = await client.info.getNodeVersion();

// Health API - Get health status
const health = await client.health.health({});

// Index API - Get block by index
const block = await client.indexBlock.pChain.getContainerByIndex({ index:
12345, }); ``` ## Error Handling Always handle errors appropriately: ```typescript import { BaseError } from "viem"; try { const balance = await client.getBalance({ address: "0x...", }); } catch (error) { if (error instanceof BaseError) { console.error("RPC Error:", error.message); } else { console.error("Unknown error:", error); } } ``` ## When to Use This Client - ✅ Querying blockchain data - ✅ Reading balances and transaction history - ✅ Checking validator information - ✅ Monitoring network status - ✅ Inspecting smart contract state **Don't use this client for:** - ❌ Sending transactions (use [Wallet Client](/avalanche-sdk/client-sdk/clients/wallet-client)) - ❌ Signing messages (use [Wallet Client](/avalanche-sdk/client-sdk/clients/wallet-client)) - ❌ Cross-chain transfers (use [Wallet Client](/avalanche-sdk/client-sdk/clients/wallet-client)) ## Best Practices ### Use Specific Clients ```typescript // Good: Use P-Chain client for platform operations const validators = await client.pChain.getCurrentValidators({}); // Good: Use X-Chain client for asset operations const balance = await client.xChain.getBalance({ addresses: ["X-avax..."], assetID: "AVAX", }); // Good: Use C-Chain client for EVM operations const atomicTx = await client.cChain.getAtomicTx({ txID: "0x...", }); ``` ### Using viem Actions Since the Avalanche Client extends viem's Public Client, you have access to all viem actions: ```typescript // Use viem's readContract action const result = await client.readContract({ address: "0x...", abi: contractABI, functionName: "balanceOf", args: ["0x..."], }); // Use viem's getTransaction action const tx = await client.getTransaction({ hash: "0x...", }); // Use viem's estimateGas action const gas = await client.estimateGas({ to: "0x...", value: parseEther("0.001"), }); ``` See the [viem documentation](https://viem.sh/docs/getting-started) for all available actions. 
## Next Steps - **[Wallet Client](/avalanche-sdk/client-sdk/clients/wallet-client)** - Transaction signing and sending - **[P-Chain Client](/avalanche-sdk/client-sdk/clients/p-chain-client)** - Detailed P-Chain operations - **[X-Chain Client](/avalanche-sdk/client-sdk/clients/x-chain-client)** - Asset and UTXO operations - **[C-Chain Client](/avalanche-sdk/client-sdk/clients/c-chain-client)** - EVM and atomic operations - **[Public Methods Reference](/avalanche-sdk/client-sdk/methods/public-methods/public)** - Complete public method documentation # C-Chain Client (/docs/tooling/avalanche-sdk/client/clients/c-chain-client) --- title: C-Chain Client --- ## Overview The C-Chain (Contract Chain) Client provides an interface for interacting with Avalanche's Contract Chain, which is an instance of the Ethereum Virtual Machine (EVM) with additional Avalanche-specific features like cross-chain atomic transactions. **When to use:** Use the C-Chain Client for EVM operations and atomic transactions (cross-chain transfers). ## Installation & Setup For setup instructions, see the [Getting Started](/avalanche-sdk/client-sdk/getting-started) guide. 
```typescript import { createAvalancheClient } from "@avalanche-sdk/client"; import { avalanche } from "@avalanche-sdk/client/chains"; const client = createAvalancheClient({ chain: avalanche, transport: { type: "http" }, }); const cChainClient = client.cChain; ``` Or create a standalone C-Chain client: ```typescript import { createCChainClient } from "@avalanche-sdk/client"; const cChainClient = createCChainClient({ chain: avalanche, transport: { type: "http" }, }); ``` ## Available Methods The C-Chain Client provides methods for: - **Atomic Transaction Operations**: `getAtomicTx`, `getAtomicTxStatus` - **UTXO Operations**: `getUTXOs` - **Transaction Operations**: `issueTx` Additionally, the C-Chain Client extends viem's Public Client, providing access to all standard EVM actions such as `getBalance`, `getBlock`, `readContract`, `call`, and more. For complete method documentation with signatures, parameters, and examples, see the [C-Chain Methods Reference](/avalanche-sdk/client-sdk/methods/public-methods/c-chain). 
## Common Use Cases

### Query Atomic Transactions

```typescript
// Get atomic transaction details
const atomicTx = await client.cChain.getAtomicTx({
  txID: "2QouvMUbQ6oy7yQ9tLvL3L8tGQG2QK1wJ1q1wJ1q1wJ1q1wJ1q1wJ1q1wJ1",
});

console.log("Source chain:", atomicTx.sourceChain);
console.log("Destination chain:", atomicTx.destinationChain);
console.log("Transfers:", atomicTx.transfers);

// Get atomic transaction status
const status = await client.cChain.getAtomicTxStatus({
  txID: "2QouvMUbQ6oy7yQ9tLvL3L8tGQG2QK1wJ1q1wJ1q1wJ1q1wJ1q1wJ1q1wJ1",
});

console.log("Status:", status.status);
```

### Query UTXOs

```typescript
// Get UTXOs for C-Chain addresses (atomic-memory UTXOs are owned by
// Bech32 C-Chain addresses, not 0x addresses)
const utxos = await client.cChain.getUTXOs({
  addresses: ["C-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5"],
  limit: 100,
});

console.log("Number of UTXOs:", utxos.utxos.length);
```

## Using viem Actions

The C-Chain Client extends viem's Public Client, so you have access to all standard EVM actions:

```typescript
// Get EVM balance
const balance = await client.getBalance({
  address: "0x742d35Cc6634C0532925a3b8D4C9db96C4b4d8b6",
});

// Get transaction receipt
const receipt = await client.getTransactionReceipt({
  hash: "0x...",
});

// Get block number
const blockNumber = await client.getBlockNumber();

// Read smart contract
const result = await client.readContract({
  address: "0x...",
  abi: contractABI,
  functionName: "balanceOf",
  args: ["0x..."],
});

// Get block information
const block = await client.getBlock({
  blockNumber: blockNumber,
});
```

See the [viem documentation](https://viem.sh/docs/getting-started) for all available EVM actions.
## Wallet Operations For transaction operations (sending transactions, writing contracts), use the wallet client: ```typescript import { createAvalancheWalletClient } from "@avalanche-sdk/client"; import { privateKeyToAvalancheAccount } from "@avalanche-sdk/client/accounts"; import { parseEther } from "@avalanche-sdk/client/utils"; const account = privateKeyToAvalancheAccount("0x..."); const walletClient = createAvalancheWalletClient({ account, chain: avalanche, transport: { type: "http" }, }); // Send AVAX const txHash = await walletClient.send({ to: "0x742d35Cc6634C0532925a3b8D4C9db96C4b4d8b6", value: parseEther("0.001"), }); console.log("Transaction hash:", txHash); ``` ### Cross-Chain Operations ```typescript // Export from C-Chain to P-Chain const exportTx = await walletClient.cChain.prepareExportTxn({ destinationChain: "P", to: account.getXPAddress("P"), amount: "0.001", }); const exportTxHash = await walletClient.sendXPTransaction(exportTx); console.log("Export transaction:", exportTxHash); // Import to C-Chain from P-Chain const importTx = await walletClient.cChain.prepareImportTxn({ to: account.getEVMAddress(), amount: "0.001", sourceChain: "P", }); const importTxHash = await walletClient.sendXPTransaction(importTx); console.log("Import transaction:", importTxHash); ``` For complete wallet operations documentation, see [C-Chain Wallet Methods](/avalanche-sdk/client-sdk/methods/wallet-methods/c-chain-wallet). 
## Next Steps

- **[C-Chain Methods Reference](/avalanche-sdk/client-sdk/methods/public-methods/c-chain)** - Complete method documentation
- **[C-Chain Wallet Methods](/avalanche-sdk/client-sdk/methods/wallet-methods/c-chain-wallet)** - Transaction preparation and sending
- **[Wallet Client](/avalanche-sdk/client-sdk/clients/wallet-client)** - Complete wallet operations
- **[P-Chain Client](/avalanche-sdk/client-sdk/clients/p-chain-client)** - Validator operations
- **[X-Chain Client](/avalanche-sdk/client-sdk/clients/x-chain-client)** - Asset operations

# Clients (/docs/tooling/avalanche-sdk/client/clients)

---
title: Clients
---

## Overview

The SDK provides different client types for interacting with Avalanche. Each client is optimized for specific use cases.

## Client Architecture

```text
Avalanche Client (Public)
├── P-Chain Client
├── X-Chain Client
├── C-Chain Client
├── Admin API Client
├── Info API Client
├── Health API Client
├── ProposerVM Client
└── Index API Clients

Avalanche Wallet Client
├── All Public Client Methods
├── P-Chain Wallet Operations
├── X-Chain Wallet Operations
├── C-Chain Wallet Operations
└── ERC20 Token Operations
```

## Client Types

### Main Clients

- **[Avalanche Client](clients/avalanche-client)** - Read-only operations for all chains
- **[Avalanche Wallet Client](clients/wallet-client)** - Transaction signing and sending

### Chain-Specific Clients

- **[P-Chain Client](clients/p-chain-client)** - Validator and staking operations
- **[X-Chain Client](clients/x-chain-client)** - Asset transfers and UTXO operations
- **[C-Chain Client](clients/c-chain-client)** - EVM and atomic transaction operations

### API Clients

- **[Admin API Client](clients/api-clients#admin-api-client)** - Administrative node operations
- **[Info API Client](clients/api-clients#info-api-client)** - Node information and network statistics
- **[Health API Client](clients/api-clients#health-api-client)** - Node health monitoring
- **[ProposerVM
Client](clients/api-clients#proposervm-api-client)** - ProposerVM operations
- **[Index API Clients](clients/api-clients#index-api-clients)** - Indexed blockchain data queries

## Configuration

All clients accept a common configuration:

```typescript
interface AvalancheClientConfig {
  transport: Transport; // Required: HTTP, WebSocket, or Custom
  chain?: Chain; // Optional: Network configuration
  account?: Account | Address; // Optional: For wallet operations
  apiKey?: string; // Optional: For authenticated endpoints
  rlToken?: string; // Optional: Rate limit token
  key?: string; // Optional: Client key identifier
  name?: string; // Optional: Client name
  pollingInterval?: number; // Optional: Polling interval in ms (default: chain.blockTime / 3)
  cacheTime?: number; // Optional: Cache time in ms (default: chain.blockTime / 3)
  batch?: { multicall?: boolean | MulticallBatchOptions }; // Optional: Batch settings
  ccipRead?:
    | {
        request?: (
          params: CcipRequestParameters
        ) => Promise<CcipRequestReturnType>;
      }
    | false; // Optional: CCIP Read config
  experimental_blockTag?: BlockTag; // Optional: Default block tag (default: 'latest')
  rpcSchema?: RpcSchema; // Optional: Typed JSON-RPC schema
  type?: string; // Optional: Client type
}
```

### Configuration Options

| Option | Type | Required | Default | Description |
| ----------------- | -------------------- | -------- | --------------------- | ------------------------------------------------- |
| `transport` | `Transport` | ✅ Yes | - | Transport configuration (HTTP, WebSocket, Custom) |
| `chain` | `Chain` | No | - | Network configuration (mainnet/testnet) |
| `account` | `Account \| Address` | No | - | Account for signing operations |
| `apiKey` | `string` | No | - | API key for authenticated endpoints |
| `rlToken` | `string` | No | - | Rate limit token |
| `key` | `string` | No | - | Client key identifier |
| `name` | `string` | No | - | Client name |
| `pollingInterval` | `number` | No | `chain.blockTime / 3` | Polling interval in milliseconds |
| `cacheTime` | `number` | No | `chain.blockTime / 3` | Cache time in milliseconds |
| `batch` | `object` | No | - | Batch settings (multicall configuration) |
| `ccipRead` | `object \| false` | No | - | CCIP Read configuration |
| `rpcSchema` | `RpcSchema` | No | - | Typed JSON-RPC schema |
| `type` | `string` | No | - | Client type identifier |

## Usage Examples

### Public Client

```typescript
import { createAvalancheClient } from "@avalanche-sdk/client";
import { avalanche } from "@avalanche-sdk/client/chains";

const client = createAvalancheClient({
  chain: avalanche,
  transport: { type: "http" },
});

// Read data from all chains
const pHeight = await client.pChain.getHeight();
const balance = await client.getBalance({ address: "0x..." });
```

### Wallet Client

```typescript
import { createAvalancheWalletClient } from "@avalanche-sdk/client";
import { privateKeyToAvalancheAccount } from "@avalanche-sdk/client/accounts";
import { avalanche } from "@avalanche-sdk/client/chains";
import { avaxToWei } from "@avalanche-sdk/client/utils";

const account = privateKeyToAvalancheAccount("0x...");

const walletClient = createAvalancheWalletClient({
  account,
  chain: avalanche,
  transport: { type: "http" },
});

// Send transaction
const txHash = await walletClient.send({
  to: "0x742d35Cc6634C0532925a3b8D4C9db96C4b4d8b6",
  amount: avaxToWei(0.001),
});
```

### Accessing Sub-Clients

```typescript
const client = createAvalancheClient({
  chain: avalanche,
  transport: { type: "http" },
});

// Chain clients
client.pChain; // P-Chain operations
client.xChain; // X-Chain operations
client.cChain; // C-Chain operations

// API clients
client.admin; // Admin API
client.info; // Info API
client.health; // Health API
client.proposerVM.pChain; // ProposerVM API for P Chain
client.proposerVM.xChain; // ProposerVM API for X Chain
client.proposerVM.cChain; // ProposerVM API for C Chain
client.indexBlock.pChain; // P-Chain block index
client.indexBlock.cChain; // C-Chain block index
client.indexBlock.xChain; // X-Chain block index client.indexTx.xChain; // X-Chain transaction index ``` ## Next Steps - **[Avalanche Client](clients/avalanche-client)** - Read-only operations - **[Avalanche Wallet Client](clients/wallet-client)** - Transaction operations - **[Chain-Specific Clients](clients/p-chain-client)** - P, X, and C-Chain clients - **[API Clients](clients/api-clients)** - Admin, Info, Health, ProposerVM, and Index APIs # P-Chain Client (/docs/tooling/avalanche-sdk/client/clients/p-chain-client) --- title: P-Chain Client --- ## Overview The P-Chain (Platform Chain) Client provides an interface for interacting with Avalanche's Platform Chain, which is responsible for coordinating validators, managing subnets, creating blockchains, and handling staking operations. **When to use:** Use the P-Chain Client for validator operations, staking, subnet management, and blockchain creation. ## Installation & Setup For setup instructions, see the [Getting Started](/avalanche-sdk/client-sdk/getting-started) guide. 
```typescript import { createAvalancheClient } from "@avalanche-sdk/client"; import { avalanche } from "@avalanche-sdk/client/chains"; const client = createAvalancheClient({ chain: avalanche, transport: { type: "http" }, }); const pChainClient = client.pChain; ``` Or create a standalone P-Chain client: ```typescript import { createPChainClient } from "@avalanche-sdk/client"; const pChainClient = createPChainClient({ chain: avalanche, transport: { type: "http" }, }); ``` ## Available Methods The P-Chain Client provides methods for: - **Balance Operations**: `getBalance`, `getUTXOs` - **Validator Operations**: `getCurrentValidators`, `getValidatorsAt`, `sampleValidators`, `getL1Validator` - **Staking Operations**: `getStake`, `getTotalStake`, `getMinStake` - **Subnet Operations**: `getSubnet`, `getSubnets`, `getStakingAssetID` - **Blockchain Operations**: `getBlockchains`, `getBlockchainStatus`, `validatedBy`, `validates` - **Block Operations**: `getHeight`, `getBlock`, `getBlockByHeight`, `getProposedHeight`, `getTimestamp` - **Transaction Operations**: `getTx`, `getTxStatus`, `issueTx` - **Fee Operations**: `getFeeConfig`, `getFeeState` - **Supply Operations**: `getCurrentSupply` - **Reward Operations**: `getRewardUTXOs` For complete method documentation with signatures, parameters, and examples, see the [P-Chain Methods Reference](/avalanche-sdk/client-sdk/methods/public-methods/p-chain). 
## Common Use Cases ### Query Validators ```typescript // Get current validators const validators = await client.pChain.getCurrentValidators({}); console.log("Total validators:", validators.validators.length); // Get validators at specific height const validatorsAt = await client.pChain.getValidatorsAt({ height: 1000001, subnetID: "11111111111111111111111111111111LpoYY", }); ``` ### Query Staking Information ```typescript // Get minimum stake requirements const minStake = await client.pChain.getMinStake({ subnetID: "11111111111111111111111111111111LpoYY", }); console.log("Min validator stake:", minStake.minValidatorStake); console.log("Min delegator stake:", minStake.minDelegatorStake); // Get total stake for a subnet const totalStake = await client.pChain.getTotalStake({ subnetID: "11111111111111111111111111111111LpoYY", }); // Get stake for specific addresses const stake = await client.pChain.getStake({ addresses: ["P-custom18jma8ppw3nhx5r4ap8clazz0dps7rv5u9xde7p"], subnetID: "11111111111111111111111111111111LpoYY", }); ``` ### Query Subnet Information ```typescript // Get subnet information const subnet = await client.pChain.getSubnet({ subnetID: "11111111111111111111111111111111LpoYY", }); console.log("Is permissioned:", subnet.isPermissioned); console.log("Control keys:", subnet.controlKeys); // Get all blockchains in the network const blockchains = await client.pChain.getBlockchains(); // Get blockchains validated by a subnet const validatedBlockchains = await client.pChain.validates({ subnetID: "11111111111111111111111111111111LpoYY", }); ``` ### Query Balance and UTXOs ```typescript // Get balance for addresses const balance = await client.pChain.getBalance({ addresses: ["P-custom18jma8ppw3nhx5r4ap8clazz0dps7rv5u9xde7p"], }); console.log("Total balance:", balance.balance); console.log("Unlocked:", balance.unlocked); // Get UTXOs const utxos = await client.pChain.getUTXOs({ addresses: ["P-custom18jma8ppw3nhx5r4ap8clazz0dps7rv5u9xde7p"], limit: 100, }); ``` 
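Note that P-Chain amounts (balances, stakes, UTXO values) are denominated in nanoAVAX, where 1 AVAX = 10^9 nanoAVAX. The conversion factor is standard, but the display helper below is illustrative and not an SDK export:

```typescript
// Converts a nanoAVAX amount (as returned by P-Chain queries such as
// getBalance or getStake) to a human-readable AVAX string.
// 1 AVAX = 1_000_000_000 nanoAVAX. Illustrative helper; not an SDK export.
function nanoAvaxToAvax(nanoAvax: bigint): string {
  const whole = nanoAvax / 1_000_000_000n;
  const frac = nanoAvax % 1_000_000_000n;
  return `${whole}.${frac.toString().padStart(9, "0")}`;
}

// e.g. formatting a balance returned by client.pChain.getBalance
console.log(nanoAvaxToAvax(1_500_000_000n)); // → 1.500000000
```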
### Query Fee Information ```typescript // Get fee configuration const feeConfig = await client.pChain.getFeeConfig(); console.log("Fee weights:", feeConfig.weights); console.log("Min price:", feeConfig.minPrice); // Get current fee state const feeState = await client.pChain.getFeeState(); console.log("Current fee price:", feeState.price); console.log("Fee capacity:", feeState.capacity); ``` ## Wallet Operations For transaction operations (preparing and sending transactions), use the wallet client: ```typescript import { createAvalancheWalletClient } from "@avalanche-sdk/client"; import { privateKeyToAvalancheAccount } from "@avalanche-sdk/client/accounts"; const account = privateKeyToAvalancheAccount("0x..."); const walletClient = createAvalancheWalletClient({ account, chain: avalanche, transport: { type: "http" }, }); // Prepare and send base transaction const baseTxn = await walletClient.pChain.prepareBaseTxn({ outputs: [ { addresses: [account.getXPAddress("P")], amount: 0.00001, }, ], }); const txID = await walletClient.sendXPTransaction(baseTxn); console.log("Transaction sent:", txID); ``` For complete wallet operations documentation, see [P-Chain Wallet Methods](/avalanche-sdk/client-sdk/methods/wallet-methods/p-chain-wallet). 
## Next Steps - **[P-Chain Methods Reference](/avalanche-sdk/client-sdk/methods/public-methods/p-chain)** - Complete method documentation - **[P-Chain Wallet Methods](/avalanche-sdk/client-sdk/methods/wallet-methods/p-chain-wallet)** - Transaction preparation and signing - **[Wallet Client](/avalanche-sdk/client-sdk/clients/wallet-client)** - Complete wallet operations - **[X-Chain Client](/avalanche-sdk/client-sdk/clients/x-chain-client)** - Asset transfers - **[C-Chain Client](/avalanche-sdk/client-sdk/clients/c-chain-client)** - EVM operations # Avalanche Wallet Client (/docs/tooling/avalanche-sdk/client/clients/wallet-client) --- title: Avalanche Wallet Client --- ## Overview The Avalanche Wallet Client extends the Public Client with full transaction signing and sending capabilities. It enables cross-chain operations, atomic transactions, and comprehensive wallet management across all Avalanche chains. **When to use:** Use the Wallet Client when you need to sign and send transactions, sign messages, or manage accounts. ## Installation & Setup For setup instructions, see the [Getting Started](/avalanche-sdk/client-sdk/getting-started) guide. 
```typescript import { createAvalancheWalletClient } from "@avalanche-sdk/client"; import { avalanche } from "@avalanche-sdk/client/chains"; import { privateKeyToAvalancheAccount } from "@avalanche-sdk/client/accounts"; const account = privateKeyToAvalancheAccount("0x..."); const walletClient = createAvalancheWalletClient({ account, // Hoist the account here; alternatively, pass a custom or injected provider, or pass an account to each method call chain: avalanche, transport: { type: "http" }, }); ``` ## Available Wallet Operations The Wallet Client provides access to: ```typescript // Chain wallet operations walletClient.pChain; // P-Chain wallet operations walletClient.xChain; // X-Chain wallet operations walletClient.cChain; // C-Chain wallet operations // Core wallet methods walletClient.send(); // Send transactions walletClient.sendXPTransaction(); // Send XP transactions walletClient.signXPMessage(); // Sign XP messages walletClient.signXPTransaction(); // Sign XP transactions walletClient.waitForTxn(); // Wait for transaction confirmation walletClient.getAccountPubKey(); // Get account public key ``` For complete method documentation, see: - **[Wallet Methods](/avalanche-sdk/client-sdk/methods/wallet-methods/wallet)** - Core wallet operations - **[P-Chain Wallet Methods](/avalanche-sdk/client-sdk/methods/wallet-methods/p-chain-wallet)** - P-Chain transactions - **[X-Chain Wallet Methods](/avalanche-sdk/client-sdk/methods/wallet-methods/x-chain-wallet)** - X-Chain transactions - **[C-Chain Wallet Methods](/avalanche-sdk/client-sdk/methods/wallet-methods/c-chain-wallet)** - C-Chain transactions ## Common Operations ### Send AVAX on C-Chain ```typescript import { avaxToWei } from "@avalanche-sdk/client/utils"; const hash = await walletClient.send({ to: "0x742d35Cc6634C0532925a3b8D4C9db96C4b4d8b6", value: avaxToWei(0.001), // 0.001 AVAX in wei }); console.log("Transaction hash:", hash); ``` ### P-Chain Wallet Operations ```typescript // Prepare and send base
transaction const baseTxn = await walletClient.pChain.prepareBaseTxn({ outputs: [ { addresses: [account.getXPAddress("P")], amount: avaxToNanoAvax(0.00001), }, ], }); const txID = await walletClient.sendXPTransaction(baseTxn); console.log("P-Chain transaction:", txID); ``` ### X-Chain Wallet Operations ```typescript // Prepare and send base transaction const xChainTx = await walletClient.xChain.prepareBaseTxn({ outputs: [ { addresses: ["X-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5"], amount: avaxToNanoAvax(1), // 1 AVAX }, ], }); const txID = await walletClient.sendXPTransaction(xChainTx); console.log("X-Chain transaction:", txID); ``` ### Sign Messages ```typescript // Sign XP message const signedMessage = await walletClient.signXPMessage({ message: "Hello Avalanche", }); console.log("Signed message:", signedMessage); ``` ### Wait for Transaction Confirmation ```typescript try { await walletClient.waitForTxn({ txID: "0x...", chainAlias: "P", }); console.log("Transaction confirmed!"); } catch (error) { console.error("Transaction confirmation failed:", error); } ``` ## When to Use This Client - ✅ Sending transactions - ✅ Signing messages and transactions - ✅ Cross-chain transfers - ✅ Managing accounts - ✅ All wallet operations ## Next Steps - **[Wallet Methods Reference](/avalanche-sdk/client-sdk/methods/wallet-methods/wallet)** - Complete wallet method documentation - **[P-Chain Wallet Methods](/avalanche-sdk/client-sdk/methods/wallet-methods/p-chain-wallet)** - P-Chain transaction operations - **[X-Chain Wallet Methods](/avalanche-sdk/client-sdk/methods/wallet-methods/x-chain-wallet)** - X-Chain transaction operations - **[C-Chain Wallet Methods](/avalanche-sdk/client-sdk/methods/wallet-methods/c-chain-wallet)** - C-Chain transaction operations - **[Account Management](/avalanche-sdk/client-sdk/accounts)** - Account types and management # X-Chain Client (/docs/tooling/avalanche-sdk/client/clients/x-chain-client) --- title: X-Chain Client --- ## Overview The X-Chain
(Exchange Chain) Client provides an interface for interacting with Avalanche's Exchange Chain, which handles asset creation, trading, transfers, and UTXO management. **When to use:** Use the X-Chain Client for asset operations, UTXO management, and X-Chain transaction queries. ## Installation & Setup For setup instructions, see the [Getting Started](/avalanche-sdk/client-sdk/getting-started) guide. ```typescript import { createAvalancheClient } from "@avalanche-sdk/client"; import { avalanche } from "@avalanche-sdk/client/chains"; const client = createAvalancheClient({ chain: avalanche, transport: { type: "http" }, }); const xChainClient = client.xChain; ``` Or create a standalone X-Chain client: ```typescript import { createXChainClient } from "@avalanche-sdk/client"; const xChainClient = createXChainClient({ chain: avalanche, transport: { type: "http" }, }); ``` ## Available Methods The X-Chain Client provides methods for: - **Balance Operations**: `getBalance`, `getAllBalances` - **Asset Operations**: `getAssetDescription`, `buildGenesis` - **UTXO Operations**: `getUTXOs` - **Block Operations**: `getHeight`, `getBlock`, `getBlockByHeight` - **Transaction Operations**: `getTx`, `getTxStatus`, `getTxFee`, `issueTx` For complete method documentation with signatures, parameters, and examples, see the [X-Chain Methods Reference](/avalanche-sdk/client-sdk/methods/public-methods/x-chain). 
## Common Use Cases ### Query Balances ```typescript // Get balance for specific asset const balance = await client.xChain.getBalance({ addresses: ["X-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5"], assetID: "AVAX", }); console.log("Balance:", balance.balance); // Get all balances for all assets const allBalances = await client.xChain.getAllBalances({ addresses: ["X-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5"], }); console.log("All balances:", allBalances.balances); ``` ### Query Asset Information ```typescript // Get asset description const asset = await client.xChain.getAssetDescription({ assetID: "FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z", }); console.log("Asset name:", asset.name); console.log("Asset symbol:", asset.symbol); console.log("Denomination:", asset.denomination); ``` ### Query UTXOs ```typescript // Get UTXOs for address const utxos = await client.xChain.getUTXOs({ addresses: ["X-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5"], sourceChain: "P", // Optional: specify source chain limit: 100, }); console.log("Number of UTXOs:", utxos.utxos.length); // Paginate through UTXOs if needed if (utxos.endIndex) { const moreUtxos = await client.xChain.getUTXOs({ addresses: ["X-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5"], startIndex: utxos.endIndex, limit: 100, }); } ``` ### Query Transaction Information ```typescript // Get transaction const tx = await client.xChain.getTx({ txID: "11111111111111111111111111111111LpoYY", encoding: "hex", }); // Get transaction status const status = await client.xChain.getTxStatus({ txID: "11111111111111111111111111111111LpoYY", }); console.log("Transaction status:", status.status); // Get transaction fees const txFee = await client.xChain.getTxFee(); console.log("Transaction fee:", txFee.txFee); console.log("Create asset fee:", txFee.createAssetTxFee); ``` ### Query Block Information ```typescript // Get current height const height = await client.xChain.getHeight(); console.log("Current X-Chain height:", height); // Get 
block by height const block = await client.xChain.getBlockByHeight({ height: Number(height), encoding: "hex", }); // Get block by ID const blockById = await client.xChain.getBlock({ blockID: "d7WYmb8VeZNHsny3EJCwMm6QA37s1EHwMxw1Y71V3FqPZ5EFG", encoding: "hex", }); ``` ## Wallet Operations For transaction operations (preparing and sending transactions), use the wallet client: ```typescript import { createAvalancheWalletClient } from "@avalanche-sdk/client"; import { avalanche } from "@avalanche-sdk/client/chains"; import { privateKeyToAvalancheAccount } from "@avalanche-sdk/client/accounts"; import { avaxToNanoAvax } from "@avalanche-sdk/client/utils"; const account = privateKeyToAvalancheAccount("0x..."); const walletClient = createAvalancheWalletClient({ account, chain: avalanche, transport: { type: "http" }, }); // Prepare and send base transaction const baseTxn = await walletClient.xChain.prepareBaseTxn({ outputs: [ { addresses: ["X-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5"], amount: avaxToNanoAvax(1), // 1 AVAX }, ], }); const txID = await walletClient.sendXPTransaction(baseTxn); console.log("Transaction sent:", txID); ``` For complete wallet operations documentation, see [X-Chain Wallet Methods](/avalanche-sdk/client-sdk/methods/wallet-methods/x-chain-wallet). ## Next Steps - **[X-Chain Methods Reference](/avalanche-sdk/client-sdk/methods/public-methods/x-chain)** - Complete method documentation - **[X-Chain Wallet Methods](/avalanche-sdk/client-sdk/methods/wallet-methods/x-chain-wallet)** - Transaction preparation and signing - **[Wallet Client](/avalanche-sdk/client-sdk/clients/wallet-client)** - Complete wallet operations - **[P-Chain Client](/avalanche-sdk/client-sdk/clients/p-chain-client)** - Validator and staking operations - **[C-Chain Client](/avalanche-sdk/client-sdk/clients/c-chain-client)** - EVM operations # Utilities (/docs/tooling/avalanche-sdk/client/utils) --- title: "Utilities" icon: "tools" --- ## Overview The Avalanche SDK provides utility functions for AVAX unit conversion, CB58 encoding/decoding, transaction serialization, and UTXO operations.
All viem utilities are also re-exported for EVM operations. **Note:** All utility functions are synchronous unless marked as `async`. Handle errors appropriately when working with blockchain data. ## Importing Utilities Import utilities from `@avalanche-sdk/client/utils`: ```typescript import { // AVAX conversions avaxToNanoAvax, nanoAvaxToAvax, avaxToWei, weiToAvax, weiToNanoAvax, nanoAvaxToWei, // CB58 encoding CB58ToHex, hexToCB58, // Transaction serialization getTxFromBytes, getUnsignedTxFromBytes, // UTXO operations getUtxoFromBytes, getUtxosForAddress, buildUtxoBytes, } from "@avalanche-sdk/client/utils"; // Viem utilities are also available import { hexToBytes, bytesToHex, isAddress } from "@avalanche-sdk/client/utils"; ``` ## AVAX Unit Conversion Avalanche uses different units for different chains: - **AVAX**: Human-readable unit (1 AVAX) - **nanoAVAX (nAVAX)**: Smallest unit on the P-Chain, the X-Chain, and in C-Chain atomic operations (1 AVAX = 10^9 nAVAX) - **wei**: Used on the C-Chain (1 AVAX = 10^18 wei) ### avaxToNanoAvax Converts AVAX to nanoAVAX for P-Chain, X-Chain, or C-Chain atomic operations. **Function Signature:** ```typescript function avaxToNanoAvax(amount: number): bigint; ``` **Parameters:** | Name | Type | Required | Description | | -------- | -------- | -------- | -------------- | | `amount` | `number` | Yes | Amount in AVAX | **Returns:** | Type | Description | | -------- | ------------------ | | `bigint` | Amount in nanoAVAX | **Example:** ```typescript import { avaxToNanoAvax } from "@avalanche-sdk/client/utils"; const nanoAvax = avaxToNanoAvax(1.5); console.log(nanoAvax); // 1500000000n // Use in P-Chain transaction const tx = await walletClient.pChain.prepareBaseTxn({ outputs: [ { addresses: ["P-avax1..."], amount: nanoAvax, }, ], }); ``` ### nanoAvaxToAvax Converts nanoAVAX back to AVAX for display purposes.
**Function Signature:** ```typescript function nanoAvaxToAvax(amount: bigint): number; ``` **Parameters:** | Name | Type | Required | Description | | -------- | -------- | -------- | ------------------ | | `amount` | `bigint` | Yes | Amount in nanoAVAX | **Returns:** | Type | Description | | -------- | -------------- | | `number` | Amount in AVAX | **Example:** ```typescript import { nanoAvaxToAvax } from "@avalanche-sdk/client/utils"; const balance = await walletClient.pChain.getBalance({ addresses: ["P-avax1..."], }); const avax = nanoAvaxToAvax(BigInt(balance.balance || 0)); console.log(`Balance: ${avax} AVAX`); ``` ### avaxToWei Converts AVAX to wei for C-Chain operations. **Function Signature:** ```typescript function avaxToWei(amount: number): bigint; ``` **Parameters:** | Name | Type | Required | Description | | -------- | -------- | -------- | -------------- | | `amount` | `number` | Yes | Amount in AVAX | **Returns:** | Type | Description | | -------- | ------------- | | `bigint` | Amount in wei | **Example:** ```typescript import { avaxToWei } from "@avalanche-sdk/client/utils"; const wei = avaxToWei(1.5); console.log(wei); // 1500000000000000000n // Use in C-Chain transaction const txHash = await walletClient.cChain.sendTransaction({ to: "0x...", value: wei, }); ``` ### weiToAvax Converts wei back to AVAX for display. 
**Function Signature:** ```typescript function weiToAvax(amount: bigint): bigint; ``` **Parameters:** | Name | Type | Required | Description | | -------- | -------- | -------- | ------------- | | `amount` | `bigint` | Yes | Amount in wei | **Returns:** | Type | Description | | -------- | -------------------------- | | `bigint` | Amount in AVAX (as bigint) | **Example:** ```typescript import { weiToAvax } from "@avalanche-sdk/client/utils"; const balance = await walletClient.cChain.getBalance({ address: "0x...", }); const avax = weiToAvax(balance); console.log(`Balance: ${avax} AVAX`); ``` ### weiToNanoAvax Converts wei to nanoAVAX for cross-chain operations. **Function Signature:** ```typescript function weiToNanoAvax(amount: bigint): bigint; ``` **Parameters:** | Name | Type | Required | Description | | -------- | -------- | -------- | ------------- | | `amount` | `bigint` | Yes | Amount in wei | **Returns:** | Type | Description | | -------- | ------------------ | | `bigint` | Amount in nanoAVAX | **Example:** ```typescript import { weiToNanoAvax } from "@avalanche-sdk/client/utils"; const cChainBalance = await walletClient.cChain.getBalance({ address: "0x...", }); // Convert to nanoAVAX for P-Chain transfer const nanoAvax = weiToNanoAvax(cChainBalance); ``` ### nanoAvaxToWei Converts nanoAVAX to wei for cross-chain operations. 
**Function Signature:** ```typescript function nanoAvaxToWei(amount: bigint): bigint; ``` **Parameters:** | Name | Type | Required | Description | | -------- | -------- | -------- | ------------------ | | `amount` | `bigint` | Yes | Amount in nanoAVAX | **Returns:** | Type | Description | | -------- | ------------- | | `bigint` | Amount in wei | **Example:** ```typescript import { nanoAvaxToWei } from "@avalanche-sdk/client/utils"; const pChainBalance = await walletClient.pChain.getBalance({ addresses: ["P-avax1..."], }); // Convert to wei for C-Chain transfer const wei = nanoAvaxToWei(BigInt(pChainBalance.balance || 0)); ``` ## CB58 Encoding/Decoding CB58 is Avalanche's base58 encoding format used for transaction IDs, asset IDs, and addresses. ### CB58ToHex Converts CB58-encoded strings to hexadecimal format. **Function Signature:** ```typescript function CB58ToHex(cb58: string): Hex; ``` **Parameters:** | Name | Type | Required | Description | | ------ | -------- | -------- | ------------------- | | `cb58` | `string` | Yes | CB58 encoded string | **Returns:** | Type | Description | | ----- | ----------------------------------- | | `Hex` | Hexadecimal string with `0x` prefix | **Example:** ```typescript import { CB58ToHex } from "@avalanche-sdk/client/utils"; const txId = "mYxFK3CWs6iMFFaRx4wmVLDUtnktzm2o9Mhg9AG6JSzRijy5V"; const hex = CB58ToHex(txId); console.log(hex); // 0x... // Use with hex-based APIs const tx = await client.pChain.getAtomicTx({ txID: hex }); ``` ### hexToCB58 Converts hexadecimal strings to CB58 format. 
**Function Signature:** ```typescript function hexToCB58(hex: Hex): string; ``` **Parameters:** | Name | Type | Required | Description | | ----- | ----- | -------- | ----------------------------------- | | `hex` | `Hex` | Yes | Hexadecimal string with `0x` prefix | **Returns:** | Type | Description | | -------- | ------------------- | | `string` | CB58 encoded string | **Example:** ```typescript import { hexToCB58 } from "@avalanche-sdk/client/utils"; const hex = "0x1234567890abcdef"; const cb58 = hexToCB58(hex); console.log(cb58); // CB58 encoded string ``` ## Transaction Serialization ### getTxFromBytes Parses signed transaction bytes to extract the transaction and credentials. **Function Signature:** ```typescript function getTxFromBytes( txBytes: string, chainAlias: "P" | "X" | "C" ): [Common.Transaction, Credential[]]; ``` **Parameters:** | Name | Type | Required | Description | | ------------ | ------------------- | -------- | ------------------------------- | | `txBytes` | `string` | Yes | Transaction bytes as hex string | | `chainAlias` | `"P" \| "X" \| "C"` | Yes | Chain alias | **Returns:** | Type | Description | | ------------------------------------ | -------------------------------------------------- | | `[Common.Transaction, Credential[]]` | Tuple containing transaction and credentials array | **Example:** ```typescript import { getTxFromBytes } from "@avalanche-sdk/client/utils"; const txHex = "0x1234567890abcdef..."; const [tx, credentials] = getTxFromBytes(txHex, "P"); console.log("Transaction ID:", tx.getId().toString()); console.log("Signatures:", credentials.length); ``` ### getUnsignedTxFromBytes Parses unsigned transaction bytes to get an unsigned transaction object. 
**Function Signature:** ```typescript function getUnsignedTxFromBytes( txBytes: string, chainAlias: "P" | "X" | "C" ): UnsignedTx; ``` **Parameters:** | Name | Type | Required | Description | | ------------ | ------------------- | -------- | ------------------------------- | | `txBytes` | `string` | Yes | Transaction bytes as hex string | | `chainAlias` | `"P" \| "X" \| "C"` | Yes | Chain alias | **Returns:** | Type | Description | | ------------ | ---------------------------------- | | `UnsignedTx` | Parsed unsigned transaction object | **Example:** ```typescript import { getUnsignedTxFromBytes } from "@avalanche-sdk/client/utils"; const txHex = "0x1234567890abcdef..."; const unsignedTx = getUnsignedTxFromBytes(txHex, "P"); console.log("Transaction ID:", unsignedTx.txID); console.log("Transaction bytes:", unsignedTx.toBytes()); ``` ## UTXO Operations ### getUtxoFromBytes Parses UTXO bytes to get a UTXO object. **Function Signature:** ```typescript function getUtxoFromBytes( utxoBytesOrHex: string | Uint8Array, chainAlias: "P" | "X" | "C" ): Utxo; ``` **Parameters:** | Name | Type | Required | Description | | ---------------- | ---------------------- | -------- | -------------------------------------- | | `utxoBytesOrHex` | `string \| Uint8Array` | Yes | UTXO bytes as hex string or Uint8Array | | `chainAlias` | `"P" \| "X" \| "C"` | Yes | Chain alias | **Returns:** | Type | Description | | ------ | ------------------ | | `Utxo` | Parsed UTXO object | **Example:** ```typescript import { getUtxoFromBytes } from "@avalanche-sdk/client/utils"; const utxoHex = "0x1234567890abcdef..."; const utxo = getUtxoFromBytes(utxoHex, "P"); console.log("UTXO ID:", utxo.utxoID); console.log("Asset ID:", utxo.assetID); console.log("Output:", utxo.output); ``` ### getUtxosForAddress Fetches all UTXOs for a given address on a specific chain. This function handles pagination automatically. 
**Function Signature:** ```typescript function getUtxosForAddress( client: AvalancheWalletCoreClient, params: { address: string; chainAlias: "P" | "X" | "C"; sourceChain?: string; } ): Promise<Utxo[]>; ``` **Parameters:** | Name | Type | Required | Description | | -------- | --------------------------- | -------- | -------------------------- | | `client` | `AvalancheWalletCoreClient` | Yes | The wallet client instance | | `params` | `object` | Yes | Parameters object | **params object:** | Name | Type | Required | Description | | ------------- | ------------------- | -------- | --------------------------------------- | | `address` | `string` | Yes | Address to query | | `chainAlias` | `"P" \| "X" \| "C"` | Yes | Chain alias | | `sourceChain` | `string` | No | Source chain ID for import transactions | **Returns:** | Type | Description | | ------------------- | --------------------- | | `Promise<Utxo[]>` | Array of UTXO objects | **Example:** ```typescript import { getUtxosForAddress } from "@avalanche-sdk/client/utils"; import { createAvalancheWalletClient } from "@avalanche-sdk/client"; import { avalanche } from "@avalanche-sdk/client/chains"; const walletClient = createAvalancheWalletClient({ account: myAccount, chain: avalanche, transport: { type: "http" }, }); const utxos = await getUtxosForAddress(walletClient, { address: "P-avax1...", chainAlias: "P", }); console.log(`Found ${utxos.length} UTXOs`); ``` ### buildUtxoBytes Builds UTXO bytes from parameters. Useful for reconstructing UTXOs or creating test data.
**Function Signature:** ```typescript function buildUtxoBytes( txHash: string, outputIndex: number, assetId: string, amount: string, addresses: string[], locktime: string, threshold: number ): `0x${string}`; ``` **Parameters:** | Name | Type | Required | Description | | ------------- | ---------- | -------- | ------------------------------------------- | | `txHash` | `string` | Yes | Transaction hash in CB58 format | | `outputIndex` | `number` | Yes | Output index in the transaction | | `assetId` | `string` | Yes | Asset ID in CB58 format | | `amount` | `string` | Yes | Amount as string | | `addresses` | `string[]` | Yes | Array of addresses that can spend this UTXO | | `locktime` | `string` | Yes | UNIX timestamp locktime in seconds | | `threshold` | `number` | Yes | Signature threshold | **Returns:** | Type | Description | | ------------------- | ------------------------ | | `` `0x${string}` `` | UTXO bytes as hex string | **Example:** ```typescript import { buildUtxoBytes } from "@avalanche-sdk/client/utils"; const utxoBytes = buildUtxoBytes( "mYxFK3CWs6iMFFaRx4wmVLDUtnktzm2o9Mhg9AG6JSzRijy5V", 0, "U8iRqJoiJm8xZHAacmvYyZVwqQx6uDNtQeP3CQ6fcgQk3JqnK", "111947", ["P-fuji1nv6w7m6egkwhkcvz96ze3qmzyk5gt6csqz7ejq"], "0", 1 ); console.log("UTXO bytes:", utxoBytes); ``` ## Viem Utilities The SDK re-exports all utilities from viem for EVM operations. See the [viem utilities documentation](https://viem.sh/docs/utilities) for complete reference. 
**Common Categories:** - **Encoding/Decoding**: `bytesToHex`, `hexToBytes`, `stringToHex` - **ABI Operations**: `encodeAbiParameters`, `decodeAbiParameters`, `parseAbiItem` - **Address Operations**: `getAddress`, `isAddress`, `checksumAddress` - **Number Operations**: `bytesToBigInt`, `hexToNumber`, `numberToHex` - **Hash Operations**: `keccak256`, `sha256`, `ripemd160` - **Signature Operations**: `recoverAddress`, `verifyMessage` ## Common Patterns ### Converting Between Units ```typescript import { avaxToNanoAvax, nanoAvaxToAvax, avaxToWei, weiToAvax, } from "@avalanche-sdk/client/utils"; // P-Chain: AVAX → nanoAVAX const nanoAvax = avaxToNanoAvax(1.5); // C-Chain: AVAX → wei const wei = avaxToWei(1.5); // Display: nanoAVAX → AVAX const avax = nanoAvaxToAvax(nanoAvax); ``` ### Working with Transaction IDs ```typescript import { CB58ToHex, hexToCB58 } from "@avalanche-sdk/client/utils"; // Convert CB58 to hex for API calls const txId = "mYxFK3CWs6iMFFaRx4wmVLDUtnktzm2o9Mhg9AG6JSzRijy5V"; const hex = CB58ToHex(txId); // Convert hex back to CB58 for display const cb58 = hexToCB58(hex); ``` ### Parsing Transactions ```typescript import { getTxFromBytes } from "@avalanche-sdk/client/utils"; const txHex = "0x..."; const [tx, credentials] = getTxFromBytes(txHex, "P"); // Access transaction details const txId = tx.getId().toString(); const numSignatures = credentials.length; ``` ## Next Steps - **[Account Management](accounts)** - Working with accounts - **[Transaction Signing](methods/wallet-methods/wallet)** - Signing and sending transactions - **[Chain Clients](clients)** - Chain-specific operations - **[Viem Documentation](https://viem.sh/docs/utilities)** - Complete viem utilities reference # Chain Configuration (/docs/tooling/avalanche-sdk/interchain/chains) --- title: Chain Configuration icon: link --- ## Overview Chain configurations define the blockchain networks for interchain operations. Each chain includes network details and interchain contract addresses. 
## Available Chains ### avalancheFuji Avalanche Fuji testnet configuration. ```typescript import { avalancheFuji } from "@avalanche-sdk/interchain/chains"; ``` ### dispatch Dispatch subnet configuration. ```typescript import { dispatch } from "@avalanche-sdk/interchain/chains"; ``` ## ChainConfig Type ```typescript interface ChainConfig { id: number; name: string; network: string; nativeCurrency: { name: string; symbol: string; decimals: number; }; rpcUrls: { default: { http: string[]; }; }; blockchainId: string; interchainContracts: { teleporterRegistry: Address; teleporterManager: Address; }; } ``` ## Using Chains ```typescript import { createICMClient } from "@avalanche-sdk/interchain"; import { avalancheFuji, dispatch } from "@avalanche-sdk/interchain/chains"; const icm = createICMClient(wallet, avalancheFuji, dispatch); // Or specify per call await icm.sendMsg({ sourceChain: avalancheFuji, destinationChain: dispatch, message: "Hello!", }); ``` ## Custom Chains You can define custom chains for interchain operations by extending the base chain configuration with interchain-specific properties. Custom chains are useful when working with custom subnets or L1 chains that support Teleporter. 
### Defining a Custom Chain Use `defineChain` from `@avalanche-sdk/client` to create a chain configuration, then cast it to `ChainConfig` to add interchain contract addresses: ```typescript import { defineChain } from "@avalanche-sdk/client"; import type { ChainConfig } from "@avalanche-sdk/interchain/chains"; export const myCustomChain = defineChain({ id: 12345, // Your chain ID name: "My Custom Chain", network: "my-custom-chain", nativeCurrency: { decimals: 18, name: "Token", symbol: "TKN", }, rpcUrls: { default: { http: ["https://api.example.com/ext/bc/C/rpc"], }, }, blockExplorers: { default: { name: "Explorer", url: "https://explorer.example.com", }, }, // Interchain-specific properties blockchainId: "0x...", // Your blockchain ID (hex-encoded) interchainContracts: { teleporterRegistry: "0x...", // Teleporter registry contract address teleporterManager: "0x...", // Teleporter manager contract address }, }) as ChainConfig; ``` ### Required Properties | Property | Type | Description | | --------------------- | -------- | ------------------------------------------------ | | `id` | `number` | Unique chain identifier (EVM chain ID) | | `name` | `string` | Human-readable chain name | | `network` | `string` | Network identifier (used for wallet connections) | | `nativeCurrency` | `object` | Native token configuration | | `rpcUrls` | `object` | RPC endpoint URLs | | `blockchainId` | `string` | Avalanche blockchain ID (hex-encoded) | | `interchainContracts` | `object` | Teleporter contract addresses | ### Interchain Contracts The `interchainContracts` object must include: - **`teleporterRegistry`**: Address of the Teleporter registry contract on this chain - **`teleporterManager`**: Address of the Teleporter manager contract on this chain These contracts enable cross-chain messaging and token transfers. Ensure they are deployed and configured on your chain before using interchain operations. 
### Example: Custom Subnet Chain ```typescript import { defineChain } from "@avalanche-sdk/client"; import type { ChainConfig } from "@avalanche-sdk/interchain/chains"; export const mySubnet = defineChain({ id: 54321, name: "My Subnet", network: "my-subnet", nativeCurrency: { decimals: 18, name: "Avalanche", symbol: "AVAX", }, rpcUrls: { default: { http: ["https://subnets.avax.network/mysubnet/mainnet/rpc"], }, }, blockExplorers: { default: { name: "Subnet Explorer", url: "https://subnets.avax.network/mysubnet", }, }, blockchainId: "0x1234567890abcdef1234567890abcdef12345678", interchainContracts: { teleporterRegistry: "0xF86Cb19Ad8405AEFa7d09C778215D2Cb6eBfB228", teleporterManager: "0x253b2784c75e510dD0fF1da844684a1aC0aa5fcf", }, }) as ChainConfig; ``` ### Using Custom Chains Once defined, use your custom chain with ICM and ICTT clients: ```typescript import { createICMClient } from "@avalanche-sdk/interchain"; import { myCustomChain } from "./chains/myCustomChain"; import { avalancheFuji } from "@avalanche-sdk/interchain/chains"; const icm = createICMClient(wallet, avalancheFuji, myCustomChain); // Send message to custom chain await icm.sendMsg({ sourceChain: avalancheFuji, destinationChain: myCustomChain, message: "Hello from Fuji to My Custom Chain!", }); ``` ### Tips - **Blockchain ID**: Use the hex-encoded blockchain ID from your chain's configuration. You can find this in your chain's genesis data or by querying the chain's info API. - **Contract Addresses**: Ensure Teleporter contracts are deployed on your chain before using interchain operations. Contact your chain operator or refer to your chain's documentation for the correct addresses. - **RPC URLs**: Provide reliable RPC endpoints. Consider using multiple endpoints for redundancy. - **Testnet vs Mainnet**: Use the `testnet` property to mark testnet chains, which helps with wallet integrations and explorer links. 
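To illustrate the last tip above, a testnet chain definition can set viem's standard `testnet` flag alongside the interchain properties. This is a sketch: every ID, URL, and address below is a placeholder, following the same `"0x..."` convention used earlier.

```typescript
import { defineChain } from "@avalanche-sdk/client";
import type { ChainConfig } from "@avalanche-sdk/interchain/chains";

// Hypothetical testnet L1; all IDs, URLs, and addresses are placeholders.
export const myTestnetChain = defineChain({
  id: 99999,
  name: "My Testnet L1",
  network: "my-testnet-l1",
  testnet: true, // marks the chain as a testnet for wallets and explorers
  nativeCurrency: { decimals: 18, name: "Test Token", symbol: "tTKN" },
  rpcUrls: { default: { http: ["https://testnet-api.example.com/rpc"] } },
  blockchainId: "0x...",
  interchainContracts: {
    teleporterRegistry: "0x...",
    teleporterManager: "0x...",
  },
}) as ChainConfig;
```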
For more information on chain configuration, see the [Viem chains documentation](https://viem.sh/docs/chains/introduction). # Interchain Messaging (/docs/tooling/avalanche-sdk/interchain/icm) --- title: Interchain Messaging icon: message-square --- ## Overview Send arbitrary messages between Avalanche chains and subnets using the Teleporter protocol. Messages are encoded as strings and delivered cross-chain. ## Create Client ```typescript import { createICMClient } from "@avalanche-sdk/interchain"; import { createAvalancheWalletClient } from "@avalanche-sdk/client"; import { privateKeyToAvalancheAccount } from "@avalanche-sdk/client/accounts"; import { avalancheFuji, dispatch } from "@avalanche-sdk/interchain/chains"; // Setup wallet const account = privateKeyToAvalancheAccount("0x..."); const wallet = createAvalancheWalletClient({ account, chain: avalancheFuji, transport: { type: "http" }, }); const icm = createICMClient(wallet); // Or with default source and destination chains const icmWithDefaults = createICMClient(wallet, avalancheFuji, dispatch); ``` The wallet client's chain configuration must match the `sourceChain` used in your interchain operations. For example, if you're sending messages from Fuji testnet, ensure your wallet client is configured with the Fuji chain. Mismatched chains will result in an "invalid sender" error. ## Methods - **sendMsg** - Send a cross-chain message # Methods (/docs/tooling/avalanche-sdk/interchain/icm/methods) --- title: Methods icon: code --- ## sendMsg Sends a cross-chain message to the specified destination chain.
### Parameters | Parameter | Type | Required | Description | | ------------------------- | --------------------------------------------- | -------- | -------------------------------------------- | | `message` | `string` | Yes | Message content to send | | `sourceChain` | `ChainConfig` | Yes\* | Source chain configuration | | `destinationChain` | `ChainConfig` | Yes\* | Destination chain configuration | | `recipientAddress` | `0x${string}` | No | Recipient address (defaults to zero address) | | `feeInfo` | `{ feeTokenAddress: string, amount: bigint }` | No | Fee token and amount | | `requiredGasLimit` | `bigint` | No | Gas limit for execution (default: 100000) | | `allowedRelayerAddresses` | `string[]` | No | Allowed relayer addresses | \* Required if not set in client constructor ### Returns | Type | Description | | ------------------------ | ---------------- | | `Promise<0x${string}>` | Transaction hash | ### Example ```typescript import { createICMClient } from "@avalanche-sdk/interchain"; import { avalancheFuji, dispatch } from "@avalanche-sdk/interchain/chains"; const icm = createICMClient(wallet); // Simple message const hash = await icm.sendMsg({ sourceChain: avalancheFuji, destinationChain: dispatch, message: "Hello from Avalanche!", }); // With options const hashWithOptions = await icm.sendMsg({ sourceChain: avalancheFuji, destinationChain: dispatch, message: "Hello from Avalanche!", recipientAddress: "0x742d35Cc6634C0532925a3b8D4C9db96C4b4d8b6", feeInfo: { feeTokenAddress: "0x0000000000000000000000000000000000000000", amount: 0n, }, requiredGasLimit: 200000n, }); ``` # Deployment Methods (/docs/tooling/avalanche-sdk/interchain/ictt/deployment) --- title: Deployment Methods icon: package --- ## deployERC20Token Deploys a new ERC20 token on the source chain.
### Parameters | Parameter | Type | Required | Description | | --------------- | -------------- | -------- | -------------------------------------------------------- | | `walletClient` | `WalletClient` | Yes | Wallet client for signing | | `sourceChain` | `ChainConfig` | Yes\* | Source chain configuration | | `name` | `string` | Yes | Token name | | `symbol` | `string` | Yes | Token symbol | | `initialSupply` | `number` | Yes | Initial token supply | | `recipient` | `Address` | No | Recipient of initial supply (defaults to wallet address) | \* Required if not set in client constructor ### Returns | Type | Description | | ---------------------------------------------------------------- | ------------------------------------- | | `Promise<{ txHash: 0x${string}, contractAddress: 0x${string} }>` | Transaction hash and contract address | ### Example ```typescript const { txHash, contractAddress } = await ictt.deployERC20Token({ walletClient: wallet, sourceChain: avalancheFuji, name: "My Token", symbol: "MTK", initialSupply: 1000000, }); ``` ## deployTokenHomeContract Deploys a token home contract on the source chain. 
### Parameters | Parameter | Type | Required | Description | | -------------------------- | -------------- | -------- | -------------------------- | | `walletClient` | `WalletClient` | Yes | Wallet client for signing | | `sourceChain` | `ChainConfig` | Yes\* | Source chain configuration | | `erc20TokenAddress` | `Address` | Yes | ERC20 token address | | `minimumTeleporterVersion` | `number` | Yes | Minimum Teleporter version | | `tokenHomeCustomByteCode` | `string` | No | Custom bytecode | | `tokenHomeCustomABI` | `ABI` | No | Custom ABI | ### Returns | Type | Description | | ---------------------------------------------------------------- | ------------------------------------- | | `Promise<{ txHash: 0x${string}, contractAddress: 0x${string} }>` | Transaction hash and contract address | ### Example ```typescript const { txHash, contractAddress } = await ictt.deployTokenHomeContract({ walletClient: wallet, sourceChain: avalancheFuji, erc20TokenAddress: tokenAddress, minimumTeleporterVersion: 1, }); ``` ## deployTokenRemoteContract Deploys a token remote contract on the destination chain. 
### Parameters | Parameter | Type | Required | Description | | --------------------------- | -------------- | -------- | ------------------------------- | | `walletClient` | `WalletClient` | Yes | Wallet client for signing | | `sourceChain` | `ChainConfig` | Yes\* | Source chain configuration | | `destinationChain` | `ChainConfig` | Yes\* | Destination chain configuration | | `tokenHomeContract` | `Address` | Yes | Token home contract address | | `tokenRemoteCustomByteCode` | `string` | No | Custom bytecode | | `tokenRemoteCustomABI` | `ABI` | No | Custom ABI | ### Returns | Type | Description | | ---------------------------------------------------------------- | ------------------------------------- | | `Promise<{ txHash: 0x${string}, contractAddress: 0x${string} }>` | Transaction hash and contract address | ### Example ```typescript const { txHash, contractAddress } = await ictt.deployTokenRemoteContract({ walletClient: wallet, sourceChain: avalancheFuji, destinationChain: dispatch, tokenHomeContract: tokenHomeAddress, }); ``` ## registerRemoteWithHome Registers the token remote contract with the token home contract. 
### Parameters | Parameter | Type | Required | Description | | --------------------- | -------------- | -------- | ------------------------------- | | `walletClient` | `WalletClient` | Yes | Wallet client for signing | | `sourceChain` | `ChainConfig` | Yes\* | Source chain configuration | | `destinationChain` | `ChainConfig` | Yes\* | Destination chain configuration | | `tokenRemoteContract` | `Address` | Yes | Token remote contract address | | `feeTokenAddress` | `Address` | No | Fee token address | | `feeAmount` | `number` | No | Fee amount | ### Returns | Type | Description | | ---------------------------------- | ---------------- | | `Promise<{ txHash: 0x${string} }>` | Transaction hash | ### Example ```typescript const { txHash } = await ictt.registerRemoteWithHome({ walletClient: wallet, sourceChain: avalancheFuji, destinationChain: dispatch, tokenRemoteContract: tokenRemoteAddress, }); ``` # Interchain Token Transfers (/docs/tooling/avalanche-sdk/interchain/ictt) --- title: Interchain Token Transfers icon: coins --- ## Overview Transfer ERC20 tokens between Avalanche chains using the Teleporter protocol. Requires deploying token home and remote contracts for each token pair. ## Create Client ```typescript import { createICTTClient } from "@avalanche-sdk/interchain"; const ictt = createICTTClient(); // Or with default chains const ictt = createICTTClient(sourceChain, destinationChain); ``` ## Workflow 1. **Deploy ERC20 Token** - Deploy your token on the source chain 2. **Deploy Token Home** - Deploy home contract on source chain 3. **Deploy Token Remote** - Deploy remote contract on destination chain 4. **Register Remote** - Register remote with home contract 5. **Approve Token** - Approve home contract to spend tokens 6. **Send Tokens** - Transfer tokens cross-chain The wallet client's chain configuration must match the `sourceChain` used in your interchain operations. 
For example, if you're sending messages from Fuji testnet, ensure your wallet client is configured with the Fuji chain. Mismatched chains will result in an "invalid sender" error. ## Methods Deploy tokens and contracts Transfer tokens cross-chain # Transfer Methods (/docs/tooling/avalanche-sdk/interchain/ictt/transfers) --- title: Transfer Methods icon: arrow-right --- ## approveToken Approves the token home contract to spend tokens on the source chain. ### Parameters | Parameter | Type | Required | Description | | ------------------- | -------------- | -------- | --------------------------------- | | `walletClient` | `WalletClient` | Yes | Wallet client for signing | | `sourceChain` | `ChainConfig` | Yes\* | Source chain configuration | | `tokenHomeContract` | `Address` | Yes | Token home contract address | | `tokenAddress` | `Address` | Yes | ERC20 token address | | `amountInBaseUnit` | `number` | Yes | Amount to approve (in base units) | \* Required if not set in client constructor ### Returns | Type | Description | | ---------------------------------- | ---------------- | | `Promise<{ txHash: 0x${string} }>` | Transaction hash | ### Example ```typescript const { txHash } = await ictt.approveToken({ walletClient: wallet, sourceChain: avalancheFuji, tokenHomeContract: tokenHomeAddress, tokenAddress: tokenAddress, amountInBaseUnit: 1000, }); ``` ## sendToken Sends tokens from the source chain to the destination chain. 
### Parameters | Parameter | Type | Required | Description | | --------------------- | -------------- | -------- | -------------------------------------- | | `walletClient` | `WalletClient` | Yes | Wallet client for signing | | `sourceChain` | `ChainConfig` | Yes\* | Source chain configuration | | `destinationChain` | `ChainConfig` | Yes\* | Destination chain configuration | | `tokenHomeContract` | `Address` | Yes | Token home contract address | | `tokenRemoteContract` | `Address` | Yes | Token remote contract address | | `recipient` | `Address` | Yes | Recipient address on destination chain | | `amountInBaseUnit` | `number` | Yes | Amount to send (in base units) | | `feeTokenAddress` | `Address` | No | Fee token address | | `feeAmount` | `number` | No | Fee amount | \* Required if not set in client constructor ### Returns | Type | Description | | ---------------------------------- | ---------------- | | `Promise<{ txHash: 0x${string} }>` | Transaction hash | ### Example ```typescript const { txHash } = await ictt.sendToken({ walletClient: wallet, sourceChain: avalancheFuji, destinationChain: dispatch, tokenHomeContract: tokenHomeAddress, tokenRemoteContract: tokenRemoteAddress, recipient: "0x742d35Cc6634C0532925a3b8D4C9db96C4b4d8b6", amountInBaseUnit: 100, }); ``` ## Complete Workflow ```typescript import { createICTTClient } from "@avalanche-sdk/interchain"; import { avalancheFuji, dispatch } from "@avalanche-sdk/interchain/chains"; const ictt = createICTTClient(avalancheFuji, dispatch); // 1. Deploy token const { contractAddress: tokenAddress } = await ictt.deployERC20Token({ walletClient: wallet, sourceChain: avalancheFuji, name: "My Token", symbol: "MTK", initialSupply: 1000000, }); // 2. Deploy home contract const { contractAddress: tokenHomeAddress } = await ictt.deployTokenHomeContract({ walletClient: wallet, sourceChain: avalancheFuji, erc20TokenAddress: tokenAddress, minimumTeleporterVersion: 1, }); // 3. 
Deploy remote contract const { contractAddress: tokenRemoteAddress } = await ictt.deployTokenRemoteContract({ walletClient: wallet, sourceChain: avalancheFuji, destinationChain: dispatch, tokenHomeContract: tokenHomeAddress, }); // 4. Register remote with home await ictt.registerRemoteWithHome({ walletClient: wallet, sourceChain: avalancheFuji, destinationChain: dispatch, tokenRemoteContract: tokenRemoteAddress, }); // 5. Approve tokens await ictt.approveToken({ walletClient: wallet, sourceChain: avalancheFuji, tokenHomeContract: tokenHomeAddress, tokenAddress: tokenAddress, amountInBaseUnit: 1000, }); // 6. Send tokens const { txHash } = await ictt.sendToken({ walletClient: wallet, sourceChain: avalancheFuji, destinationChain: dispatch, tokenHomeContract: tokenHomeAddress, tokenRemoteContract: tokenRemoteAddress, recipient: "0x742d35Cc6634C0532925a3b8D4C9db96C4b4d8b6", amountInBaseUnit: 100, }); ``` # Building Warp Messages (/docs/tooling/avalanche-sdk/interchain/warp/building) --- title: Building Warp Messages icon: hammer --- ## Building Messages Build unsigned Warp messages for signing and broadcasting. ## RegisterL1ValidatorMessage ### Methods | Method | Parameters | Returns | Description | | ------------ | ------------------------------------------------------ | ---------------------------- | ----------------- | | `fromValues` | `nodeID: string, publicKey: string, signature: string` | `RegisterL1ValidatorMessage` | Build from values | | `toHex` | - | `string` | Convert to hex | ### Example ```typescript import { RegisterL1ValidatorMessage } from "@avalanche-sdk/interchain/warp"; const msg = RegisterL1ValidatorMessage.fromValues( "NodeID-7Xhw2mDxuDS44j42TCB6U5579esbSt3Lg", "0x...", "0x..." 
); const hex = msg.toHex(); ``` ## L1ValidatorWeightMessage ### Methods | Method | Parameters | Returns | Description | | ------------ | --------------------------------------------------- | -------------------------- | ----------------- | | `fromValues` | `nodeID: string, weight: bigint, startTime: bigint` | `L1ValidatorWeightMessage` | Build from values | | `toHex` | - | `string` | Convert to hex | ### Example ```typescript import { L1ValidatorWeightMessage } from "@avalanche-sdk/interchain/warp"; const msg = L1ValidatorWeightMessage.fromValues( "NodeID-7Xhw2mDxuDS44j42TCB6U5579esbSt3Lg", 4n, 41n ); const hex = msg.toHex(); ``` ## AddressedCall Build AddressedCall payload from a message. ### Methods | Method | Parameters | Returns | Description | | ------------ | ----------------------------------------- | --------------- | ----------------- | | `fromValues` | `sourceAddress: Address, payload: string` | `AddressedCall` | Build from values | | `toHex` | - | `string` | Convert to hex | ### Example ```typescript import { AddressedCall, L1ValidatorWeightMessage, } from "@avalanche-sdk/interchain/warp"; const msg = L1ValidatorWeightMessage.fromValues( "NodeID-7Xhw2mDxuDS44j42TCB6U5579esbSt3Lg", 4n, 41n ); const addressedCall = AddressedCall.fromValues( "0x35F884853114D298D7aA8607f4e7e0DB52205f07", msg.toHex() ); ``` ## WarpUnsignedMessage Build unsigned Warp message from AddressedCall. 
### Methods | Method | Parameters | Returns | Description | | ------------ | ------------------------------------------------------------------------ | --------------------- | ----------------- | | `fromValues` | `networkID: number, sourceChainID: string, addressedCallPayload: string` | `WarpUnsignedMessage` | Build from values | | `toHex` | - | `string` | Convert to hex | ### Example ```typescript import { AddressedCall, L1ValidatorWeightMessage, WarpUnsignedMessage, } from "@avalanche-sdk/interchain/warp"; // Build message const msg = L1ValidatorWeightMessage.fromValues( "NodeID-7Xhw2mDxuDS44j42TCB6U5579esbSt3Lg", 4n, 41n ); // Build AddressedCall const addressedCall = AddressedCall.fromValues( "0x35F884853114D298D7aA8607f4e7e0DB52205f07", msg.toHex() ); // Build unsigned Warp message const warpUnsignedMsg = WarpUnsignedMessage.fromValues( 1, "251q44yFiimeVSHaQbBk69TzoeYqKu9VagGtLVqo92LphUxjmR", addressedCall.toHex() ); const hex = warpUnsignedMsg.toHex(); ``` # Warp Messages (/docs/tooling/avalanche-sdk/interchain/warp) --- title: Warp Messages icon: layers --- ## Overview Warp messages enable cross-chain communication in Avalanche. Parse and build Warp protocol messages for validator registration, weight updates, and subnet conversions. 
## Supported Message Types - **RegisterL1ValidatorMessage** - Register L1 validators - **L1ValidatorWeightMessage** - Update validator weights - **L1ValidatorRegistrationMessage** - L1 validator registration - **SubnetToL1ConversionMessage** - Subnet to L1 conversion ## Quick Start ```typescript import { WarpMessage, RegisterL1ValidatorMessage, } from "@avalanche-sdk/interchain/warp"; // Parse a signed Warp message const signedWarpMsg = WarpMessage.fromHex(signedWarpMsgHex); // Parse specific message type const registerMsg = RegisterL1ValidatorMessage.fromHex(signedWarpMsgHex); ``` ## Methods Parse Warp messages Build Warp messages # Parsing Warp Messages (/docs/tooling/avalanche-sdk/interchain/warp/parsing) --- title: Parsing Warp Messages icon: search --- ## WarpMessage Parse a signed Warp message from hex. ### Methods | Method | Parameters | Returns | Description | | --------- | ------------- | ------------- | ------------------------- | | `fromHex` | `hex: string` | `WarpMessage` | Parse signed Warp message | ### Example ```typescript import { WarpMessage } from "@avalanche-sdk/interchain/warp"; const signedWarpMsgHex = "0x..."; const warpMsg = WarpMessage.fromHex(signedWarpMsgHex); // Access message properties console.log(warpMsg.networkID); console.log(warpMsg.sourceChainID); console.log(warpMsg.addressedCallPayload); console.log(warpMsg.signatures); ``` ## RegisterL1ValidatorMessage Parse a Register L1 Validator message. ### Methods | Method | Parameters | Returns | Description | | --------- | ------------- | ---------------------------- | -------------- | | `fromHex` | `hex: string` | `RegisterL1ValidatorMessage` | Parse from hex | ### Example ```typescript import { RegisterL1ValidatorMessage } from "@avalanche-sdk/interchain/warp"; const msg = RegisterL1ValidatorMessage.fromHex(signedWarpMsgHex); console.log(msg.nodeID); console.log(msg.publicKey); console.log(msg.signature); ``` ## L1ValidatorWeightMessage Parse an L1 Validator Weight message. 
### Methods | Method | Parameters | Returns | Description | | --------- | ------------- | -------------------------- | -------------- | | `fromHex` | `hex: string` | `L1ValidatorWeightMessage` | Parse from hex | ### Example ```typescript import { L1ValidatorWeightMessage } from "@avalanche-sdk/interchain/warp"; const msg = L1ValidatorWeightMessage.fromHex(signedWarpMsgHex); console.log(msg.nodeID); console.log(msg.weight); console.log(msg.startTime); ``` ## L1ValidatorRegistrationMessage Parse an L1 Validator Registration message. ### Methods | Method | Parameters | Returns | Description | | --------- | ------------- | -------------------------------- | -------------- | | `fromHex` | `hex: string` | `L1ValidatorRegistrationMessage` | Parse from hex | ## SubnetToL1ConversionMessage Parse a Subnet to L1 Conversion message. ### Methods | Method | Parameters | Returns | Description | | --------- | ------------- | ----------------------------- | -------------- | | `fromHex` | `hex: string` | `SubnetToL1ConversionMessage` | Parse from hex | ### Example ```typescript import { SubnetToL1ConversionMessage } from "@avalanche-sdk/interchain/warp"; const msg = SubnetToL1ConversionMessage.fromHex(signedWarpMsgHex); console.log(msg.subnetID); console.log(msg.assetID); console.log(msg.initialSupply); ``` # JSON-RPC Accounts (/docs/tooling/avalanche-sdk/client/accounts/json-rpc) --- title: JSON-RPC Accounts icon: globe description: Learn how to use JSON-RPC accounts with browser wallets like MetaMask and Core in the Avalanche Client SDK. --- ## Overview A JSON-RPC Account is an Account whose signing keys are stored on an external Wallet. It **defers** signing of transactions & messages to the target Wallet over JSON-RPC. Examples of such wallets include Browser Extension Wallets (like MetaMask or Core) or Mobile Wallets over WalletConnect. 
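In practice, the deferral is a series of JSON-RPC calls forwarded to the wallet over the standard EIP-1193 `request` interface. The following standalone sketch illustrates the idea; the `provider` object here is a stub standing in for `window.avalanche` or `window.ethereum` (real browser providers implement the same interface, and the wallet client in the sections below issues equivalent requests on your behalf):

```typescript
// Conceptual sketch only: a JSON-RPC account forwards account discovery and
// signing requests to the wallet via an EIP-1193-style `request` method.
type Eip1193Provider = {
  request(args: { method: string; params?: unknown[] }): Promise<unknown>;
};

// Stub provider standing in for window.avalanche / window.ethereum.
const provider: Eip1193Provider = {
  async request({ method, params }) {
    switch (method) {
      case "eth_requestAccounts":
        // A real wallet would prompt the user to connect.
        return ["0x742d35Cc6634C0532925a3b8D4C9db96C4b4d8b6"];
      case "personal_sign":
        // A real wallet would prompt the user and return a real signature.
        return "0x" + "ab".repeat(65);
      default:
        throw new Error(`Unsupported method: ${method}`);
    }
  },
};

async function demo() {
  const accounts = (await provider.request({
    method: "eth_requestAccounts",
  })) as string[];
  const signature = await provider.request({
    method: "personal_sign",
    params: ["0x48656c6c6f", accounts[0]], // "Hello" in hex
  });
  return { account: accounts[0], signature };
}
```

`eth_requestAccounts` and `personal_sign` are standard Ethereum JSON-RPC methods; the key takeaway is that the private key never leaves the wallet — the dapp only sees addresses and finished signatures.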
## Supported Wallets

### Core Browser Extension

Core is Avalanche's official browser extension wallet that provides native support for Avalanche networks (C/P/X-Chains).

```typescript
import "@avalanche-sdk/client/window";
import { createAvalancheWalletClient } from "@avalanche-sdk/client";
import { avalanche } from "@avalanche-sdk/client/chains";

try {
  // Check if Core extension is available
  const provider = window.avalanche;
  if (!provider) {
    throw new Error(
      "Core extension not found. Please install Core. https://core.app"
    );
  }

  // Create wallet client with Core provider
  const walletClient = createAvalancheWalletClient({
    chain: avalanche,
    transport: {
      type: "custom",
      provider,
    },
  });
} catch (error) {
  console.error("Failed to initialize Core provider:", error);
}
```

### MetaMask

MetaMask can be used with Avalanche networks through custom network configuration for C-Chain EVM operations.

```typescript
import "@avalanche-sdk/client/window";
import { createAvalancheWalletClient } from "@avalanche-sdk/client";
import { avalanche } from "@avalanche-sdk/client/chains";

// Use MetaMask provider
const provider = window.ethereum;
if (!provider) {
  throw new Error("MetaMask not found. Please install MetaMask.");
}

const walletClient = createAvalancheWalletClient({
  chain: avalanche,
  transport: {
    type: "custom",
    provider,
  },
});
```

## Basic Usage

### 1. Request Account Connection

```typescript
try {
  // Request accounts from the wallet
  const accounts: string[] = await walletClient.requestAddresses();
  const address: string = accounts[0]; // Get the first account
  console.log("Connected address:", address);
} catch (error) {
  console.error("Failed to request addresses:", error);
}
```

### 2.
Send Transactions

```typescript
try {
  // Send a transaction (will prompt user to sign)
  const txHash: string = await walletClient.send({
    to: "0x742d35Cc6634C0532925a3b8D4C9db96C4b4d8b6",
    amount: 0.001,
  });
  console.log("Transaction hash:", txHash);
} catch (error) {
  console.error("Failed to send transaction:", error);
}
```

### 3. Switch Networks

```typescript
import { avalanche, avalancheFuji } from "@avalanche-sdk/client/chains";

try {
  // Switch to Avalanche mainnet
  await walletClient.switchChain({
    id: avalanche.id,
  });
  console.log("Switched to Avalanche mainnet");

  // Switch to Fuji testnet
  await walletClient.switchChain({
    id: avalancheFuji.id,
  });
  console.log("Switched to Fuji testnet");
} catch (error) {
  console.error("Failed to switch chain:", error);
}
```

## React Integration Example

Here's a complete React component for wallet connection using Core:

```typescript
import { createAvalancheWalletClient } from "@avalanche-sdk/client";
import { avalanche, avalancheFuji } from "@avalanche-sdk/client/chains";
import { useState, useCallback } from "react";
import "@avalanche-sdk/client/window";

export function ConnectWallet() {
  const [connected, setConnected] = useState(false);
  const [address, setAddress] = useState<string | null>(null);
  const [chain, setChain] = useState<"mainnet" | "fuji">("fuji");

  const selectedChain = chain === "fuji" ? avalancheFuji : avalanche;

  const connect = useCallback(async () => {
    try {
      const provider = window.avalanche;
      if (!provider) {
        throw new Error("Core extension not found.
Please install Core."); } const walletClient = createAvalancheWalletClient({ chain: selectedChain, transport: { type: "custom", provider }, }); const accounts = await walletClient.requestAddresses(); const addr = accounts[0]; setAddress(addr); setConnected(true); } catch (error) { console.error("Connection failed:", error); } }, [selectedChain]); const sendTransaction = useCallback(async () => { if (!connected || !address) return; try { const provider = (window as any).avalanche; const walletClient = createAvalancheWalletClient({ chain: selectedChain, transport: { type: "custom", provider }, }); const txHash = await walletClient.send({ to: "0x742d35Cc6634C0532925a3b8D4C9db96C4b4d8b6", amount: 0.001, }); console.log("Transaction sent:", txHash); } catch (error) { console.error("Transaction failed:", error); } }, [connected, address, selectedChain]); return (
    <div>
      <h3>Wallet Connection</h3>
      <select
        value={chain}
        onChange={(e) => setChain(e.target.value as "mainnet" | "fuji")}
      >
        <option value="fuji">Fuji Testnet</option>
        <option value="mainnet">Mainnet</option>
      </select>
      {!connected ? (
        <button onClick={connect}>Connect Wallet</button>
      ) : (
        <div>
          <p>Connected: {address}</p>
          <p>Network: {selectedChain.name}</p>
          <button onClick={sendTransaction}>Send Transaction</button>
        </div>
      )}
    </div>
); } ``` ## Cross-Chain Operations JSON-RPC accounts support cross-chain operations through the Avalanche Client SDK: ```typescript // P-Chain export transaction const pChainExportTxn = await walletClient.pChain.prepareExportTxn({ destinationChain: "C", fromAddress: address, exportedOutput: { addresses: [address], amount: avaxToWei(0.001), }, }); const txHash = await walletClient.sendXPTransaction(pChainExportTxn); ``` ## Best Practices ### 1. Check Provider Availability ```typescript // Always check if the provider is available if (typeof window !== "undefined" && window.avalanche) { // Core is available } else if (typeof window !== "undefined" && window.ethereum) { // MetaMask is available } else { // No wallet provider found } ``` ### 2. Handle Network Switching ```typescript // Check if wallet is on the correct network const currentChainId = await walletClient.getChainId(); if (currentChainId !== avalanche.id) { await walletClient.switchChain({ id: avalanche.id }); } ``` ### 3. Graceful Error Handling ```typescript const handleWalletError = (error: any) => { switch (error.code) { case 4001: return "User rejected the request"; case -32002: return "Request already pending"; case -32602: return "Invalid parameters"; default: return error.message || "Unknown error occurred"; } }; ``` ## Troubleshooting ### Common Issues **Provider Not Found** ```typescript // Check if provider exists if (!window.avalanche && !window.ethereum) { throw new Error("No wallet provider found. 
Please install Core or MetaMask."); } ``` **Wrong Network** ```typescript // Ensure wallet is on the correct network const chainId = await walletClient.getChainId(); if (chainId !== avalanche.id) { await walletClient.switchChain({ id: avalanche.id }); } ``` **User Rejection** ```typescript try { await walletClient.send({ to: address, amount: 0.001 }); } catch (error) { if (error.code === 4001) { console.log("User rejected transaction"); } } ``` ## Next Steps - **[Local Accounts](accounts/local)** - Learn about local account management - **[Wallet Operations](methods/wallet-methods/wallet)** - Learn how to send transactions - **[Cross-Chain Transfers](methods/wallet-methods/wallet#cross-chain-transfers)** - Moving assets between chains # Network-Specific Addresses (/docs/tooling/avalanche-sdk/client/accounts/local/addresses) --- title: "Network-Specific Addresses" icon: "map-pin" --- ## Overview Avalanche uses different address formats for each chain. EVM addresses work the same across networks, but XP addresses use network-specific HRPs (Human-Readable Prefixes). ## Address Formats ### EVM Addresses (C-Chain) EVM addresses are the same across all networks: ```typescript const evmAddress = account.getEVMAddress(); // 0x742d35Cc... ``` ### XP Addresses (X/P-Chain) XP addresses use network-specific HRPs: ```typescript // Mainnet const mainnetX = account.getXPAddress("X", "avax"); // X-avax1... const mainnetP = account.getXPAddress("P", "avax"); // P-avax1... // Testnet (Fuji) const fujiX = account.getXPAddress("X", "fuji"); // X-fuji1... const fujiP = account.getXPAddress("P", "fuji"); // P-fuji1... 
``` ## Network Configuration ```typescript // Mainnet addresses const mainnet = { evm: account.getEVMAddress(), xChain: account.getXPAddress("X", "avax"), pChain: account.getXPAddress("P", "avax"), }; // Testnet addresses const testnet = { evm: account.getEVMAddress(), xChain: account.getXPAddress("X", "fuji"), pChain: account.getXPAddress("P", "fuji"), }; ``` ## Next Steps - **[Account Utilities](accounts/local/utilities)** - Account validation and utilities - **[Using Accounts with Clients](accounts/local/clients)** - Client integration patterns # Using Accounts with Clients (/docs/tooling/avalanche-sdk/client/accounts/local/clients) --- title: "Using Accounts with Clients" icon: "link" --- ## Overview Accounts work with both public clients (read-only) and wallet clients (transactions). You can hoist the account into the client or pass it to each method. ## Public Client (Read-Only) ```typescript import { createAvalancheClient } from "@avalanche-sdk/client"; import { avalanche } from "@avalanche-sdk/client/chains"; import { privateKeyToAvalancheAccount } from "@avalanche-sdk/client/accounts"; const client = createAvalancheClient({ chain: avalanche, transport: { type: "http" }, }); const account = privateKeyToAvalancheAccount("0x..."); // Read operations const balance = await client.getBalance({ address: account.getEVMAddress() }); const height = await client.pChain.getHeight(); ``` ## Wallet Client (Transactions) ```typescript import { createAvalancheWalletClient } from "@avalanche-sdk/client"; import { avalanche } from "@avalanche-sdk/client/chains"; import { privateKeyToAvalancheAccount } from "@avalanche-sdk/client/accounts"; import { avaxToWei } from "@avalanche-sdk/client/utils"; const account = privateKeyToAvalancheAccount("0x..."); // Hoist account (recommended) const walletClient = createAvalancheWalletClient({ account, // Account is hoisted chain: avalanche, transport: { type: "http" }, }); // C-Chain transaction const txHash = await walletClient.send({ 
to: "0x742d35Cc6634C0532925a3b8D4C9db96C4b4d8b6", amount: avaxToWei(0.001), }); // X/P-Chain transaction const xpTx = await walletClient.xChain.prepareBaseTxn({ outputs: [{ addresses: [account.getXPAddress("X")], amount: 1 }], }); await walletClient.sendXPTransaction(xpTx); ``` ## Account Hoisting You can hoist the account into the client (recommended) or pass it to each method: ```typescript // Hoisted (recommended) const walletClient = createAvalancheWalletClient({ account, // No need to pass to each method chain: avalanche, transport: { type: "http" }, }); await walletClient.send({ to: "0x...", amount: 0.001 }); // Or pass per method const walletClient = createAvalancheWalletClient({ chain: avalanche, transport: { type: "http" }, }); await walletClient.send({ account, to: "0x...", amount: 0.001 }); ``` ## Cross-Chain Operations Cross-chain transfers use the export/import pattern. Export from the source chain, wait for confirmation, then import to the destination chain. [Learn more about cross-chain transfers →](methods/wallet-methods/wallet#cross-chain-transfers) ## Next Steps - **[Wallet Operations](methods/wallet-methods/wallet)** - Learn how to send transactions - **[P-Chain Operations](methods/public-methods/p-chain)** - Validator and staking operations - **[X-Chain Operations](methods/public-methods/x-chain)** - Asset transfers and UTXO operations - **[C-Chain Operations](methods/public-methods/c-chain)** - EVM and smart contract operations # HD Key Accounts (/docs/tooling/avalanche-sdk/client/accounts/local/hd-key) --- title: "HD Key Accounts" icon: "tree" --- ## Overview HD Key Accounts create Avalanche accounts from hierarchical deterministic (HD) keys with custom derivation paths. This allows for advanced key management and multiple account generation from a single seed. 
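The index options documented below boil down to BIP-44 derivation path strings. This standalone sketch mirrors the default-path table later in this page — `buildDerivationPaths` is a hypothetical illustration, not an SDK export:

```typescript
// Illustrative only: shows how the documented index options map to BIP-44
// derivation paths (m/44'/60'/... for EVM, m/44'/9000'/... for XP).
// `buildDerivationPaths` is a hypothetical helper, not part of the SDK.
interface IndexOptions {
  accountIndex?: number;
  changeIndex?: number;
  addressIndex?: number;
  xpAccountIndex?: number;
  xpChangeIndex?: number;
  xpAddressIndex?: number;
}

function buildDerivationPaths(opts: IndexOptions = {}) {
  const account = opts.accountIndex ?? 0;
  const change = opts.changeIndex ?? 0;
  const address = opts.addressIndex ?? 0;
  // Per the path table below, XP indices fall back to the shared indices
  // unless an xp-specific option overrides them.
  const xpAccount = opts.xpAccountIndex ?? account;
  const xpChange = opts.xpChangeIndex ?? change;
  const xpAddress = opts.xpAddressIndex ?? address;
  return {
    evm: `m/44'/60'/${account}'/${change}/${address}`,
    xp: `m/44'/9000'/${xpAccount}'/${xpChange}/${xpAddress}`,
  };
}

console.log(buildDerivationPaths().evm); // m/44'/60'/0'/0/0
console.log(buildDerivationPaths({ accountIndex: 1 }).xp); // m/44'/9000'/1'/0/0
```

Presumably `hdKeyToAvalancheAccount` performs an equivalent mapping internally before deriving child keys from the HD key.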
## Creating HD Key Accounts ### Basic Usage ```typescript import { hdKeyToAvalancheAccount, HDKey } from "@avalanche-sdk/client/accounts"; import type { AvalancheAccount } from "@avalanche-sdk/client/accounts"; const seed: Uint8Array = new Uint8Array(64); // Your seed const hdKey: HDKey = HDKey.fromMasterSeed(seed); const account: AvalancheAccount = hdKeyToAvalancheAccount(hdKey); ``` ### Parameters - **`hdKey: HDKey`** - The HD key instance (required) - **`options?: HDKeyToAvalancheAccountOptions`** - Custom derivation path options (optional) ### Options ```typescript interface HDKeyToAvalancheAccountOptions { accountIndex?: number; // Account index (default: 0) addressIndex?: number; // Address index (default: 0) changeIndex?: number; // Change index (default: 0) xpAccountIndex?: number; // XP account index (default: 0) xpAddressIndex?: number; // XP address index (default: 0) xpChangeIndex?: number; // XP change index (default: 0) path?: string; // Custom derivation path for EVM Account xpPath?: string; // Custom derivation path for XP Account } ``` ## Derivation Paths ### Default Derivation Paths HD Key accounts use BIP-44 derivation paths to generate deterministic keys from a seed. 
By default, the SDK uses separate paths for EVM (C-Chain) and XP (X/P-Chain) accounts: **EVM (C-Chain) Path:** ``` m/44'/60'/{accountIndex}'/{changeIndex}/{addressIndex} ``` **XP (X/P-Chain) Path:** ``` m/44'/9000'/{xpAccountIndex}'/{xpChangeIndex}/{xpAddressIndex} ``` **Path Components:** - `m` - Master key - `44'` - BIP-44 purpose (hardened) - `60'` (EVM) or `9000'` (XP) - Coin type (hardened) - `60` is the standard Ethereum coin type (used for C-Chain) - `9000` is the Avalanche coin type (used for X/P-Chain) - `{accountIndex}'` - Account index (hardened, default: 0) - `{changeIndex}` - Change index (default: 0) - `0` is typically used for external addresses - `1` is typically used for change addresses - `{addressIndex}` - Address index (default: 0) **Default Values:** When no options are provided, both paths default to `m/44'/60'/0'/0/0` (EVM) and `m/44'/9000'/0'/0/0` (XP). ### How Index Options Affect Paths The following table shows how different index combinations affect the derivation paths: | Option | EVM Path | XP Path | Notes | | ---------------------------------- | ------------------ | -------------------- | ------------------------------- | | Default (no options) | `m/44'/60'/0'/0/0` | `m/44'/9000'/0'/0/0` | Both use index 0 | | `accountIndex: 1` | `m/44'/60'/1'/0/0` | `m/44'/9000'/1'/0/0` | Both use account index 1 | | `addressIndex: 2` | `m/44'/60'/0'/0/2` | `m/44'/9000'/0'/0/2` | Both use address index 2 | | `changeIndex: 1` | `m/44'/60'/0'/1/0` | `m/44'/9000'/0'/1/0` | Both use change index 1 | | `accountIndex: 1, addressIndex: 2` | `m/44'/60'/1'/0/2` | `m/44'/9000'/1'/0/2` | Combined indices | | `xpAccountIndex: 2` | `m/44'/60'/0'/0/0` | `m/44'/9000'/2'/0/0` | XP uses different account index | | `xpAddressIndex: 3` | `m/44'/60'/0'/0/0` | `m/44'/9000'/0'/0/3` | XP uses different address index | | `xpChangeIndex: 1` | `m/44'/60'/0'/0/0` | `m/44'/9000'/0'/1/0` | XP uses different change index | **Important Notes:** - When you specify `accountIndex`, 
`addressIndex`, or `changeIndex`, they apply to both the EVM and XP paths (see the table above).
- XP-specific options (`xpAccountIndex`, `xpAddressIndex`, `xpChangeIndex`) only affect the XP path, allowing you to use different indices for XP accounts while keeping the EVM indices separate.

### Custom Path Override

**Important Limitation**: When a custom `path` or `xpPath` is provided, it completely replaces the default path calculation for the EVM or XP account, respectively:

```typescript
import { hdKeyToAvalancheAccount } from "@avalanche-sdk/client/accounts";
import type { AvalancheAccount } from "@avalanche-sdk/client/accounts";

// ⚠️ WARNING: This path will be used for EVM accounts
const evmAccount: AvalancheAccount = hdKeyToAvalancheAccount(hdKey, {
  path: "m/44'/60'/0'/0/0", // EVM will use this path
});

// ⚠️ WARNING: This path will be used for XP accounts
const xpAccount: AvalancheAccount = hdKeyToAvalancheAccount(hdKey, {
  xpPath: "m/44'/60'/0'/0/0", // XP Account will use this path
});

// When `path` is provided, accountIndex, addressIndex, and changeIndex are IGNORED;
// when `xpPath` is provided, xpAccountIndex, xpAddressIndex, and xpChangeIndex are IGNORED.
```

**When to Use Custom Paths:**

- When you need to match a specific wallet's derivation path
- When migrating from another wallet implementation
- When you need full control over the derivation path

**When NOT to Use Custom Paths:**

- When you want different paths for EVM and XP accounts (use index options instead)
- When you want to leverage the default BIP-44 compliant paths
- In most standard use cases

### Examples

```typescript
import { hdKeyToAvalancheAccount, HDKey } from "@avalanche-sdk/client/accounts";

const hdKey = HDKey.fromMasterSeed(seed);

// Default paths (EVM: m/44'/60'/0'/0/0, XP: m/44'/9000'/0'/0/0)
const account = hdKeyToAvalancheAccount(hdKey);

// Different indices
const account1 = hdKeyToAvalancheAccount(hdKey, { accountIndex: 1 });
const account2 =
hdKeyToAvalancheAccount(hdKey, { addressIndex: 1 });

// Separate EVM and XP indices
const account3 = hdKeyToAvalancheAccount(hdKey, {
  accountIndex: 0, // EVM
  xpAccountIndex: 2, // XP
});

// Custom EVM path (⚠️ overrides the default EVM derivation; use xpPath for XP)
const account4 = hdKeyToAvalancheAccount(hdKey, {
  path: "m/44'/60'/0'/0/0",
});
```

## Multiple Accounts

Generate multiple accounts from the same HD key:

```typescript
import { hdKeyToAvalancheAccount, HDKey } from "@avalanche-sdk/client/accounts";

const hdKey = HDKey.fromMasterSeed(seed);

const account1 = hdKeyToAvalancheAccount(hdKey, { addressIndex: 0 });
const account2 = hdKeyToAvalancheAccount(hdKey, { addressIndex: 1 });
const account3 = hdKeyToAvalancheAccount(hdKey, { addressIndex: 2 });
```

## Using with Wallet Client

```typescript
import { createAvalancheWalletClient } from "@avalanche-sdk/client";
import { avalanche } from "@avalanche-sdk/client/chains";
import { hdKeyToAvalancheAccount, HDKey } from "@avalanche-sdk/client/accounts";
import { avaxToWei } from "@avalanche-sdk/client/utils";

const account = hdKeyToAvalancheAccount(HDKey.fromMasterSeed(seed));

const walletClient = createAvalancheWalletClient({
  account,
  chain: avalanche,
  transport: { type: "http" },
});

const txHash = await walletClient.send({
  to: "0x742d35Cc6634C0532925a3b8D4C9db96C4b4d8b6",
  amount: avaxToWei(0.001),
});
```

## Security

**Never expose seeds in client-side code or commit them to version control. Use environment variables.**

```typescript
// ✅ Good: load the seed from an environment variable
const seed = new Uint8Array(Buffer.from(process.env.SEED!, "hex"));
const hdKey = HDKey.fromMasterSeed(seed);

// ❌ Bad: hardcoded, predictable seed
const seed = new Uint8Array(64);
```

## Next Steps

- **[Account Utilities](utilities)** - Account validation and utilities
- **[Using Accounts with Clients](clients)** - Client integration patterns

# Local Accounts (/docs/tooling/avalanche-sdk/client/accounts/local)

---
title: Local Accounts
icon: shield
description: Learn how to create and manage local accounts in the Avalanche Client SDK with private keys, mnemonics, and HD keys.
---

## Overview

Local accounts store keys on your machine and sign transactions before broadcasting. 
Use these for server-side apps, bots, or when you need full control. **Security:** Never expose private keys or mnemonics in client-side code or commit them to version control. Use environment variables. ## Quick Start ```typescript import { privateKeyToAvalancheAccount } from "@avalanche-sdk/client/accounts"; const account = privateKeyToAvalancheAccount(process.env.PRIVATE_KEY!); console.log(account.getEVMAddress()); // 0x742d35Cc... console.log(account.getXPAddress("X")); // X-avax1... ``` ## Account Types ### Private Key Simplest option—create an account directly from a private key. ```typescript import { privateKeyToAvalancheAccount } from "@avalanche-sdk/client/accounts"; const account = privateKeyToAvalancheAccount("0x..."); ``` [Private Key Accounts →](local/private-key) ### Mnemonic User-friendly option—create an account from a seed phrase. ```typescript import { mnemonicsToAvalancheAccount } from "@avalanche-sdk/client/accounts"; const account = mnemonicsToAvalancheAccount("abandon abandon abandon..."); ``` [Mnemonic Accounts →](local/mnemonic) ### HD Key Advanced option—create accounts from HD keys with custom derivation paths. 
```typescript import { hdKeyToAvalancheAccount, HDKey } from "@avalanche-sdk/client/accounts"; const hdKey = HDKey.fromMasterSeed(seed); const account = hdKeyToAvalancheAccount(hdKey, { accountIndex: 0 }); ``` [HD Key Accounts →](local/hd-key) ## Instantiation ### Setup Wallet Client ```typescript import { createAvalancheWalletClient } from "@avalanche-sdk/client"; import { avalanche } from "@avalanche-sdk/client/chains"; import { privateKeyToAvalancheAccount } from "@avalanche-sdk/client/accounts"; import { avaxToWei } from "@avalanche-sdk/client/utils"; const account = privateKeyToAvalancheAccount(process.env.PRIVATE_KEY!); const walletClient = createAvalancheWalletClient({ account, // Hoist account to avoid passing it to each method chain: avalanche, transport: { type: "http" }, }); // Use wallet methods const txHash = await walletClient.send({ to: "0x742d35Cc6634C0532925a3b8D4C9db96C4b4d8b6", amount: avaxToWei(0.001), }); // Use public methods const balance = await walletClient.getBalance({ address: account.getEVMAddress(), }); ``` ## Account Generation ### Generate Private Key ```typescript import { generatePrivateKey, privateKeyToAvalancheAccount, } from "@avalanche-sdk/client/accounts"; import type { AvalancheAccount } from "@avalanche-sdk/client/accounts"; const privateKey: string = generatePrivateKey(); const account: AvalancheAccount = privateKeyToAvalancheAccount(privateKey); ``` ### Generate Mnemonic ```typescript import { generateMnemonic, mnemonicsToAvalancheAccount, } from "@avalanche-sdk/client/accounts"; import type { AvalancheAccount } from "@avalanche-sdk/client/accounts"; const mnemonic: string = generateMnemonic(); const account: AvalancheAccount = mnemonicsToAvalancheAccount(mnemonic); ``` ## Address Management ### Get All Addresses ```typescript const addresses = { evm: account.getEVMAddress(), xChain: account.getXPAddress("X"), pChain: account.getXPAddress("P"), }; console.log("EVM Address:", addresses.evm); console.log("X-Chain Address:", 
addresses.xChain); console.log("P-Chain Address:", addresses.pChain); ``` **Learn more:** For network-specific addresses and detailed address management examples, see [Network-Specific Addresses](local/addresses). ## Security **Never expose private keys or mnemonics in client-side code or commit them to version control. Always use environment variables.** ```typescript // ✅ Good: Use environment variables const account = privateKeyToAvalancheAccount(process.env.PRIVATE_KEY!); // ❌ Bad: Hardcoded private key const account = privateKeyToAvalancheAccount("0x1234..."); ``` ## Learn More - [Account Generation](#account-generation) - Create secure accounts - [Address Management](#address-management) - Multi-network addresses - [Using with Clients](local/clients) - Client integration - [HD Key Accounts](local/hd-key) - Hierarchical deterministic accounts - [Account Utilities](local/utilities) - Validation and helpers - [Network-Specific Addresses](local/addresses) - Advanced address management # Mnemonic Accounts (/docs/tooling/avalanche-sdk/client/accounts/local/mnemonic) --- title: "Mnemonic Accounts" icon: "seedling" --- ## Overview Mnemonic Accounts create Avalanche accounts from mnemonic phrases (seed words). Mnemonics provide a human-readable way to backup and restore accounts, making them ideal for user-facing applications. 
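Because mnemonics are typed or pasted by end users, it can help to sanity-check the phrase's shape before handing it to `mnemonicsToAvalancheAccount`. The helper below is an illustrative sketch, not an SDK function, and it does not replace full BIP-39 wordlist and checksum validation:

```typescript
// Illustrative pre-flight check for user-supplied mnemonics (not part of the SDK).
// BIP-39 phrases contain 12, 15, 18, 21, or 24 words; anything else can be
// rejected before attempting account creation. This does NOT verify the
// words against the BIP-39 wordlist or validate the checksum.
function isPlausibleMnemonic(phrase: string): boolean {
  const words = phrase.trim().toLowerCase().split(/\s+/);
  const validLengths = [12, 15, 18, 21, 24];
  return validLengths.includes(words.length) && words.every((w) => /^[a-z]+$/.test(w));
}

console.log(isPlausibleMnemonic(
  "abandon abandon abandon abandon abandon abandon abandon abandon abandon abandon abandon about"
)); // true
console.log(isPlausibleMnemonic("only three words")); // false
```

A phrase that passes this check can still fail checksum validation inside the SDK, so account creation should still be wrapped in error handling.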
**Best for:** - User-facing applications - Mobile wallets - Desktop wallets - Applications requiring easy account recovery ## Creating Mnemonic Accounts ### Basic Usage ```typescript import { mnemonicsToAvalancheAccount } from "@avalanche-sdk/client/accounts"; import type { AvalancheAccount } from "@avalanche-sdk/client/accounts"; const mnemonic = "abandon abandon abandon abandon abandon abandon abandon abandon abandon abandon abandon about"; const account: AvalancheAccount = mnemonicsToAvalancheAccount(mnemonic); console.log("EVM Address:", account.getEVMAddress()); console.log("X-Chain Address:", account.getXPAddress("X")); console.log("P-Chain Address:", account.getXPAddress("P")); ``` ### Parameters - **`mnemonic: string`** - The mnemonic phrase (required) - **`options?: MnemonicToAccountOptions`** - Optional derivation path configuration ### Options Interface ```typescript interface MnemonicToAccountOptions { accountIndex?: number; // Account index (default: 0) addressIndex?: number; // Address index (default: 0) changeIndex?: number; // Change index (default: 0) xpAccountIndex?: number; // XP account index (default: 0) xpAddressIndex?: number; // XP address index (default: 0) xpChangeIndex?: number; // XP change index (default: 0) path?: string; // Custom derivation path for EVM Account xpPath?: string; // Custom derivation path for XP Account } ``` ## Generating Mnemonics ### Generate Random Mnemonic ```typescript import { generateMnemonic, mnemonicsToAvalancheAccount, } from "@avalanche-sdk/client/accounts"; import type { AvalancheAccount } from "@avalanche-sdk/client/accounts"; const mnemonic: string = generateMnemonic(); console.log("Generated mnemonic:", mnemonic); const account: AvalancheAccount = mnemonicsToAvalancheAccount(mnemonic); console.log("EVM address:", account.getEVMAddress()); console.log("X-Chain address:", account.getXPAddress("X")); console.log("P-Chain address:", account.getXPAddress("P")); // ⚠️ IMPORTANT: Store mnemonic securely // 
Never log it in production or commit to version control ``` ### Generate Mnemonic in Different Languages ```typescript import { generateMnemonic, english, spanish, japanese, } from "@avalanche-sdk/client/accounts"; // Generate mnemonic in different languages const englishMnemonic = generateMnemonic(english); const spanishMnemonic = generateMnemonic(spanish); const japaneseMnemonic = generateMnemonic(japanese); // Also available: french, italian, portuguese, czech, korean, simplifiedChinese, traditionalChinese ``` ## Derivation Paths ### Default Derivation Paths The Avalanche Client SDK uses different derivation paths for EVM and XP accounts: ```typescript // EVM (C-Chain) derivation path // Standard BIP44 path for Ethereum const evmPath = "m/44'/60'/0'/0/0"; // m/44'/60'/{accountIndex}'/{changeIndex}/{addressIndex} // XP (X/P-Chain) derivation path // Standard BIP44 path for Avalanche const xpPath = "m/44'/9000'/0'/0/0"; // m/44'/9000'/{xpAccountIndex}'/{xpChangeIndex}/{xpAddressIndex} ``` ### Custom Derivation Paths ```typescript import { mnemonicsToAvalancheAccount } from "@avalanche-sdk/client/accounts"; import type { AvalancheAccount } from "@avalanche-sdk/client/accounts"; const mnemonic = "abandon abandon abandon..."; // Create account with custom derivation paths const account: AvalancheAccount = mnemonicsToAvalancheAccount(mnemonic, { accountIndex: 0, addressIndex: 0, changeIndex: 0, path: "m/44'/60'/0'/0/0", // Custom path }); ``` ### Multiple Accounts from Same Mnemonic ```typescript import { mnemonicsToAvalancheAccount } from "@avalanche-sdk/client/accounts"; const mnemonic = "abandon abandon abandon..."; // Different account indices const account0 = mnemonicsToAvalancheAccount(mnemonic, { accountIndex: 0 }); const account1 = mnemonicsToAvalancheAccount(mnemonic, { accountIndex: 1 }); // Different address indices const addr0 = mnemonicsToAvalancheAccount(mnemonic, { addressIndex: 0 }); const addr1 = mnemonicsToAvalancheAccount(mnemonic, { addressIndex: 1 
}); ``` ## Getting Addresses ```typescript const account = mnemonicsToAvalancheAccount(mnemonic); // All chain addresses const evmAddress = account.getEVMAddress(); const xChainAddress = account.getXPAddress("X"); const pChainAddress = account.getXPAddress("P"); // Network-specific const mainnet = account.getXPAddress("X", "avax"); const testnet = account.getXPAddress("X", "fuji"); ``` ## Using with Wallet Client ```typescript import { createAvalancheWalletClient } from "@avalanche-sdk/client"; import { avalanche } from "@avalanche-sdk/client/chains"; import { mnemonicsToAvalancheAccount } from "@avalanche-sdk/client/accounts"; const account = mnemonicsToAvalancheAccount(process.env.MNEMONIC!); const walletClient = createAvalancheWalletClient({ account, chain: avalanche, transport: { type: "http" }, }); // C-Chain transaction const txHash = await walletClient.send({ to: "0x742d35Cc6634C0532925a3b8D4C9db96C4b4d8b6", amount: 0.001, }); // X/P-Chain transaction const xpTx = await walletClient.xChain.prepareBaseTxn({ outputs: [{ addresses: [account.getXPAddress("X")], amount: 1 }], }); await walletClient.sendXPTransaction(xpTx); ``` ## Security **Never expose mnemonics in client-side code or commit them to version control. 
Use environment variables.** ```typescript // ✅ Good const account = mnemonicsToAvalancheAccount(process.env.MNEMONIC!); // ❌ Bad const account = mnemonicsToAvalancheAccount("abandon abandon abandon..."); ``` ## Next Steps - **[HD Key Accounts](accounts/local/hd-key)** - Learn about hierarchical deterministic accounts - **[Account Utilities](accounts/local/utilities)** - Account validation and utilities - **[Using Accounts with Clients](accounts/local/clients)** - Client integration patterns - **[Network-Specific Addresses](accounts/local/addresses)** - Multi-network support # Private Key Accounts (/docs/tooling/avalanche-sdk/client/accounts/local/private-key) --- title: "Private Key Accounts" icon: "key" --- ## Overview Private Key Accounts provide the simplest way to create an Avalanche account from a single private key. They support both EVM (C-Chain) and X/P (X-Chain/P-Chain) operations with unified address management. **Best for:** - Server-side applications - Automated bots and services - Testing and development - Scripts and tools **Security:** Private keys must be kept secure. Never expose private keys in client-side code or commit them to version control. Use environment variables in production. 
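When the key arrives from configuration or user input, a cheap structural check can catch malformed values before account construction. The helper below is an illustrative sketch, not an SDK function; it assumes the 0x-prefixed 64-character hex format used throughout these examples, and it cannot tell whether a well-formed string is a valid secp256k1 scalar:

```typescript
// Illustrative format check for a raw private key (not part of the SDK).
// A secp256k1 private key is 32 bytes, i.e. 64 hex characters after the
// "0x" prefix. Range/curve validity is left to the SDK itself.
function isWellFormedPrivateKey(key: string): boolean {
  return /^0x[0-9a-fA-F]{64}$/.test(key);
}

console.log(isWellFormedPrivateKey("0x" + "ab".repeat(32))); // true
console.log(isWellFormedPrivateKey("0x1234")); // false
```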
## Creating Private Key Accounts ### Basic Usage ```typescript import { privateKeyToAvalancheAccount } from "@avalanche-sdk/client/accounts"; import type { AvalancheAccount } from "@avalanche-sdk/client/accounts"; const privateKey = "0x1234...your_private_key_here"; const account: AvalancheAccount = privateKeyToAvalancheAccount(privateKey); console.log("EVM Address:", account.getEVMAddress()); console.log("X-Chain Address:", account.getXPAddress("X")); console.log("P-Chain Address:", account.getXPAddress("P")); ``` ### Working with Environment Variables ```typescript import { privateKeyToAvalancheAccount } from "@avalanche-sdk/client/accounts"; const account = privateKeyToAvalancheAccount(process.env.PRIVATE_KEY!); ``` ## Generating Private Keys ### Generate Random Private Key ```typescript import { generatePrivateKey, privateKeyToAvalancheAccount, } from "@avalanche-sdk/client/accounts"; import type { AvalancheAccount } from "@avalanche-sdk/client/accounts"; const privateKey: string = generatePrivateKey(); console.log("Generated private key:", privateKey); const account: AvalancheAccount = privateKeyToAvalancheAccount(privateKey); console.log("EVM Address:", account.getEVMAddress()); console.log("X-Chain Address:", account.getXPAddress("X")); console.log("P-Chain Address:", account.getXPAddress("P")); ``` ## Account Properties ### EVM Account Each Avalanche account contains an EVM account for C-Chain operations: ```typescript import { privateKeyToAvalancheAccount } from "@avalanche-sdk/client/accounts"; import type { AvalancheAccount } from "@avalanche-sdk/client/accounts"; const account: AvalancheAccount = privateKeyToAvalancheAccount("0x..."); // Access EVM account properties const evmAccount = account.evmAccount; console.log("Address:", evmAccount.address); console.log("Type:", evmAccount.type); // "local" console.log("Source:", evmAccount.source); // "privateKey" // Get public key (if available) if (evmAccount.publicKey) { console.log("Public Key:", 
evmAccount.publicKey); } ``` ### XP Account Each Avalanche account contains an XP account for X-Chain and P-Chain operations: ```typescript import { privateKeyToAvalancheAccount } from "@avalanche-sdk/client/accounts"; import type { AvalancheAccount, LocalXPAccount, } from "@avalanche-sdk/client/accounts"; const account: AvalancheAccount = privateKeyToAvalancheAccount("0x..."); // Access XP account properties if (account.xpAccount) { const xpAccount: LocalXPAccount = account.xpAccount; console.log("Public Key:", xpAccount.publicKey); console.log("Type:", xpAccount.type); // "local" console.log("Source:", xpAccount.source); // "privateKey" } ``` ## Address Management ### Get All Addresses ```typescript import { privateKeyToAvalancheAccount } from "@avalanche-sdk/client/accounts"; import type { AvalancheAccount } from "@avalanche-sdk/client/accounts"; const account: AvalancheAccount = privateKeyToAvalancheAccount("0x..."); // Get all addresses const addresses = { evm: account.getEVMAddress(), // "0x742d35Cc6634C0532925a3b8D4C9db96C4b4d8b6" xChain: account.getXPAddress("X"), // "X-avax1..." pChain: account.getXPAddress("P"), // "P-avax1..." base: account.getXPAddress(), // "avax1..." 
(without chain prefix) }; console.log("All Addresses:", addresses); ``` ### Network-Specific Addresses ```typescript import { privateKeyToAvalancheAccount } from "@avalanche-sdk/client/accounts"; import type { AvalancheAccount } from "@avalanche-sdk/client/accounts"; const account: AvalancheAccount = privateKeyToAvalancheAccount("0x..."); // Mainnet addresses (default) const mainnetAddresses = { evm: account.getEVMAddress(), xChain: account.getXPAddress("X", "avax"), pChain: account.getXPAddress("P", "avax"), }; // Testnet (Fuji) addresses const testnetAddresses = { evm: account.getEVMAddress(), xChain: account.getXPAddress("X", "fuji"), pChain: account.getXPAddress("P", "fuji"), }; console.log("Mainnet:", mainnetAddresses); console.log("Testnet:", testnetAddresses); ``` ## Using with Wallet Client ```typescript import { createAvalancheWalletClient } from "@avalanche-sdk/client"; import { avalanche } from "@avalanche-sdk/client/chains"; import { privateKeyToAvalancheAccount } from "@avalanche-sdk/client/accounts"; const account = privateKeyToAvalancheAccount(process.env.PRIVATE_KEY!); const walletClient = createAvalancheWalletClient({ account, chain: avalanche, transport: { type: "http" }, }); // C-Chain transaction const txHash = await walletClient.send({ to: "0x742d35Cc6634C0532925a3b8D4C9db96C4b4d8b6", amount: 0.001, }); // X/P-Chain transaction const xpTx = await walletClient.xChain.prepareBaseTxn({ outputs: [{ addresses: [account.getXPAddress("X")], amount: 1 }], }); await walletClient.sendXPTransaction(xpTx); ``` ## Message Signing ```typescript import { signMessage } from "@avalanche-sdk/client/accounts"; import { privateKeyToAvalancheAccount } from "@avalanche-sdk/client/accounts"; const account = privateKeyToAvalancheAccount("0x..."); // EVM message signing const evmSignature = await signMessage({ account: account.evmAccount, message: "Hello Avalanche!", }); // XP message signing if (account.xpAccount) { const xpSignature = await 
account.xpAccount.signMessage("Hello Avalanche!"); const isValid = account.xpAccount.verify("Hello Avalanche!", xpSignature); } ``` ## Security **Never expose private keys in client-side code or commit them to version control. Use environment variables.** ```typescript // ✅ Good const account = privateKeyToAvalancheAccount(process.env.PRIVATE_KEY!); // ❌ Bad const account = privateKeyToAvalancheAccount("0x1234..."); ``` ## Next Steps - **[Mnemonic Accounts](mnemonic)** - Learn about mnemonic-based accounts - **[HD Key Accounts](hd-key)** - Learn about hierarchical deterministic accounts - **[Account Utilities](utilities)** - Account validation and utilities - **[Network-Specific Addresses](addresses)** - Working with different network addresses # Account Utilities (/docs/tooling/avalanche-sdk/client/accounts/local/utilities) --- title: "Account Utilities" icon: "hammer" --- ## Overview Account utilities provide validation, parsing, and helper functions for working with Avalanche accounts and addresses. ## Account Validation ### Parse Avalanche Account ```typescript function parseAvalancheAccount( account: Address | AvalancheAccount | undefined ): AvalancheAccount | undefined; ``` **Parameters:** - `account` (`Address | AvalancheAccount | undefined`): The account or address to parse. Can be an EVM address string, an existing `AvalancheAccount`, or `undefined`. **Returns:** - `AvalancheAccount | undefined`: Returns an `AvalancheAccount` when an address or account is provided, or `undefined` when the input is `undefined`. **Behavior:** The function behaves differently based on the input type: 1. **When an address string is passed**: Returns an `AvalancheAccount` with only `evmAccount` populated. The `xpAccount` property will be `undefined` because an address string alone doesn't contain the private key information needed to derive XP account details. 2. **When an AvalancheAccount is passed**: Returns the account as-is without modification. 
This is useful for normalizing function parameters that accept both addresses and accounts. 3. **When undefined is passed**: Returns `undefined`. This allows for optional account parameters in functions. **Limitations:** - When parsing from an address string, the returned account will only have `evmAccount` populated. The `xpAccount` property will be `undefined`, which means: - You cannot perform X-Chain or P-Chain operations that require signing (e.g., `account.xpAccount.signMessage()`) - You can still use the account for read-only operations or C-Chain (EVM) operations - To get XP account functionality, you need to create an account from a private key or mnemonic or use a custom provider **Example:** ```typescript import { parseAvalancheAccount } from "@avalanche-sdk/client/accounts"; import type { AvalancheAccount, Address } from "@avalanche-sdk/client/accounts"; // Parse from address string (only evmAccount populated) const address: Address = "0x742d35Cc6634C0532925a3b8D4C9db96C4b4d8b6"; const account = parseAvalancheAccount(address); // account.xpAccount is undefined - can only use for C-Chain operations // Parse existing account (returns as-is) const fullAccount: AvalancheAccount = /* ... */; const normalized = parseAvalancheAccount(fullAccount); // Returns as-is // Handle undefined const optional = parseAvalancheAccount(undefined); // Returns undefined ``` ## Address Utilities ### Private Key to XP Address ```typescript function privateKeyToXPAddress(privateKey: string, hrp: string): XPAddress; ``` **Parameters:** - `privateKey` (`string`): The private key with `0x` prefix. - `hrp` (`string`): The human-readable prefix for the address. Use `"avax"` for mainnet or `"fuji"` for testnet. **Returns:** - `XPAddress`: The Bech32-encoded XP address as a string. 
**Example:**

```typescript
import { privateKeyToXPAddress } from "@avalanche-sdk/client/accounts";

const mainnet = privateKeyToXPAddress("0x...", "avax");
const testnet = privateKeyToXPAddress("0x...", "fuji");
```

### Public Key to XP Address

```typescript
function publicKeyToXPAddress(publicKey: string, hrp: string): XPAddress;
```

**Parameters:**

- `publicKey` (`string`): The public key with `0x` prefix.
- `hrp` (`string`): The human-readable prefix for the address. Use `"avax"` for mainnet or `"fuji"` for testnet.

**Returns:**

- `XPAddress`: The Bech32-encoded XP address as a string.

**Example:**

```typescript
import { publicKeyToXPAddress } from "@avalanche-sdk/client/accounts";

const xpAddress = publicKeyToXPAddress("0x...", "avax");
```

### Private Key to XP Public Key

```typescript
function privateKeyToXPPublicKey(privateKey: string): string;
```

**Parameters:**

- `privateKey` (`string`): The private key with `0x` prefix.

**Returns:**

- `string`: The compressed public key in hex format with `0x` prefix.

**Example:**

```typescript
import {
  privateKeyToXPPublicKey,
  publicKeyToXPAddress,
} from "@avalanche-sdk/client/accounts";

const publicKey = privateKeyToXPPublicKey("0x...");
const xpAddress = publicKeyToXPAddress(publicKey, "avax");
```

## Message Signing

### XP Message Signing

```typescript
function xpSignMessage(message: string, privateKey: string): Promise<string>;
```

**Parameters:**

- `message` (`string`): The message to sign.
- `privateKey` (`string`): The private key with `0x` prefix to sign with.

**Returns:**

- `Promise<string>`: A promise that resolves to a base58-encoded signature string. 
**Example:**

```typescript
import { xpSignMessage } from "@avalanche-sdk/client/accounts";

const signature = await xpSignMessage("Hello Avalanche!", "0x...");
// Returns base58-encoded signature (e.g., "2k5Jv...")
```

### XP Transaction Signing

```typescript
function xpSignTransaction(
  txHash: string | Uint8Array,
  privateKey: string | Uint8Array
): Promise<string>;
```

**Parameters:**

- `txHash` (`string | Uint8Array`): The transaction hash to sign. Can be a hex string with `0x` prefix or a `Uint8Array`.
- `privateKey` (`string | Uint8Array`): The private key to sign with. Can be a hex string with `0x` prefix or a `Uint8Array`.

**Returns:**

- `Promise<string>`: A promise that resolves to a hex-encoded signature string with `0x` prefix.

**Example:**

```typescript
import { xpSignTransaction } from "@avalanche-sdk/client/accounts";

const signature = await xpSignTransaction("0x...", "0x...");
// Returns hex-encoded signature (e.g., "0x1234...")
```

### Signature Verification

```typescript
function xpVerifySignature(
  signature: string,
  message: string,
  publicKey: string
): boolean;
```

**Parameters:**

- `signature` (`string`): The signature to verify in hex format with `0x` prefix.
- `message` (`string`): The message that was signed.
- `publicKey` (`string`): The public key to verify with in hex format with `0x` prefix.

**Returns:**

- `boolean`: `true` if the signature is valid, `false` otherwise.

**Note:** This function expects hex format signatures. For base58 signatures created with `xpSignMessage`, use `xpAccount.verify()` instead.

**Example:**

```typescript
import { xpVerifySignature } from "@avalanche-sdk/client/accounts";

const isValid = xpVerifySignature("0x...", "Hello Avalanche!", "0x...");
// For base58 signatures from xpSignMessage, use xpAccount.verify() instead
```

### Public Key Recovery

```typescript
function xpRecoverPublicKey(message: string, signature: string): string;
```

**Parameters:**

- `message` (`string`): The message that was signed. 
- `signature` (`string`): The signature in hex format with `0x` prefix.

**Returns:**

- `string`: The recovered public key as a hex string with `0x` prefix.

**Example:**

```typescript
import { xpRecoverPublicKey } from "@avalanche-sdk/client/accounts";

const publicKey = xpRecoverPublicKey("Hello Avalanche!", "0x...");
```

## XP Account Creation

### Private Key to XP Account

```typescript
function privateKeyToXPAccount(privateKey: string): XPAccount;
```

**Parameters:**

- `privateKey` (`string`): The private key with `0x` prefix.

**Returns:**

- `XPAccount`: An XP account object with the following properties:
  - `publicKey` (`string`): The compressed public key in hex format.
  - `signMessage(message: string)` (`Promise<string>`): Signs a message and returns a base58-encoded signature.
  - `signTransaction(txHash: string | Uint8Array)` (`Promise<string>`): Signs a transaction hash and returns a hex-encoded signature.
  - `verify(message: string, signature: string)` (`boolean`): Verifies a message signature.
  - `type` (`"local"`): The account type.
  - `source` (`"privateKey"`): The account source.

**Note:** Creates an XP-only account from a private key. Lighter weight than `privateKeyToAvalancheAccount` since it skips EVM account initialization. Use this when you only need X-Chain or P-Chain operations.

**Limitations**: No EVM account—it can't be used for C-Chain operations. If you need both XP and EVM functionality, use `privateKeyToAvalancheAccount` instead. 
**Example:** ```typescript import { privateKeyToXPAccount } from "@avalanche-sdk/client/accounts"; const xpAccount = privateKeyToXPAccount("0x..."); // Sign message const signature = await xpAccount.signMessage("Hello Avalanche!"); const isValid = xpAccount.verify("Hello Avalanche!", signature); // Sign transaction const txSignature = await xpAccount.signTransaction("0x..."); ``` ## Next Steps - **[Using Accounts with Clients](accounts/local/clients)** - Client integration patterns - **[Wallet Operations](methods/wallet-methods/wallet)** - Learn how to send transactions - **[Account Management](accounts)** - Overview of account management # API Methods (/docs/tooling/avalanche-sdk/client/methods/public-methods/api) --- title: API Methods description: Complete reference for Admin, Info, Health, Index, and ProposerVM API methods --- ## Overview The Avalanche Client SDK provides access to node-level API methods through specialized API clients. These include administrative operations, informational queries, health monitoring, indexed blockchain data access, and ProposerVM operations. ## Admin API Client Provides administrative operations for managing node aliases, logging, and profiling. **Access:** `client.admin` ### alias Assign an API endpoint an alias. 
**Function Signature:**

```typescript
function alias(params: AliasParameters): Promise<void>;

interface AliasParameters {
  endpoint: string;
  alias: string;
}
```

**Parameters:**

| Name | Type | Required | Description |
| ---------- | -------- | -------- | ---------------------- |
| `endpoint` | `string` | Yes | API endpoint to alias |
| `alias` | `string` | Yes | Alias for the endpoint |

**Returns:**

| Type | Description |
| --------------- | --------------- |
| `Promise<void>` | No return value |

**Example:**

```typescript
import { createAvalancheClient } from "@avalanche-sdk/client";
import { avalanche } from "@avalanche-sdk/client/chains";

const client = createAvalancheClient({
  chain: avalanche,
  transport: { type: "http" },
});

await client.admin.alias({
  endpoint: "bc/X",
  alias: "myAlias",
});
```

**Related:**

- [API Reference](https://build.avax.network/docs/rpcs/other/admin-rpc#adminalias)

---

### aliasChain

Give a blockchain an alias.

**Function Signature:**

```typescript
function aliasChain(params: AliasChainParameters): Promise<void>;

interface AliasChainParameters {
  chain: string;
  alias: string;
}
```

**Parameters:**

| Name | Type | Required | Description |
| ------- | -------- | -------- | ------------------------ |
| `chain` | `string` | Yes | Blockchain ID to alias |
| `alias` | `string` | Yes | Alias for the blockchain |

**Returns:**

| Type | Description |
| --------------- | --------------- |
| `Promise<void>` | No return value |

**Example:**

```typescript
await client.admin.aliasChain({
  chain: "sV6o671RtkGBcno1FiaDbVcFv2sG5aVXMZYzKdP4VQAWmJQnM",
  alias: "myBlockchainAlias",
});
```

**Related:**

- [API Reference](https://build.avax.network/docs/rpcs/other/admin-rpc#adminaliaschain)

---

### getChainAliases

Get the aliases of a chain. 
**Function Signature:**

```typescript
function getChainAliases(params: GetChainAliasesParameters): Promise<string[]>;

interface GetChainAliasesParameters {
  chain: string;
}
```

**Parameters:**

| Name | Type | Required | Description |
| ------- | -------- | -------- | ---------------------- |
| `chain` | `string` | Yes | Blockchain ID to query |

**Returns:**

| Type | Description |
| ------------------- | ------------------------------ |
| `Promise<string[]>` | Array of aliases for the chain |

**Example:**

```typescript
const aliases = await client.admin.getChainAliases({
  chain: "sV6o671RtkGBcno1FiaDbVcFv2sG5aVXMZYzKdP4VQAWmJQnM",
});

console.log("Chain aliases:", aliases);
```

**Related:**

- [API Reference](https://build.avax.network/docs/rpcs/other/admin-rpc#admingetchainaliases)

---

### Additional Admin API Methods

- **getLoggerLevel** - Get log and display levels of loggers
- **setLoggerLevel** - Set log and display levels of loggers
- **loadVMs** - Dynamically loads virtual machines
- **lockProfile** - Writes mutex statistics to `lock.profile`
- **memoryProfile** - Writes memory profile to `mem.profile`
- **startCPUProfiler** - Start CPU profiling
- **stopCPUProfiler** - Stop CPU profiler

See the [Admin API documentation](clients/api-clients#admin-api-client) for complete details.

---

## Info API Client

Provides node information and network statistics.

**Access:** `client.info`

### getNetworkID

Get the ID of the network this node is participating in. 
**Function Signature:**

```typescript
function getNetworkID(): Promise<GetNetworkIDReturnType>;

interface GetNetworkIDReturnType {
  networkID: string;
}
```

**Returns:**

| Type | Description |
| ------------------------ | ----------------- |
| `GetNetworkIDReturnType` | Network ID object |

**Return Object:**

| Property | Type | Description |
| ----------- | -------- | ---------------------------------------------- |
| `networkID` | `string` | Network ID (1 for Mainnet, 5 for Fuji testnet) |

**Example:**

```typescript
const result = await client.info.getNetworkID();

if (result.networkID === "1") {
  console.log("Connected to Mainnet");
} else if (result.networkID === "5") {
  console.log("Connected to Fuji testnet");
}
```

**Related:**

- [API Reference](https://build.avax.network/docs/rpcs/other/info-rpc#infogetnetworkid)

---

### getNetworkName

Get the name of the network this node is participating in.

**Function Signature:**

```typescript
function getNetworkName(): Promise<GetNetworkNameReturnType>;

interface GetNetworkNameReturnType {
  networkName: string;
}
```

**Returns:**

| Type | Description |
| -------------------------- | ------------------- |
| `GetNetworkNameReturnType` | Network name object |

**Return Object:**

| Property | Type | Description |
| ------------- | -------- | -------------------------------------- |
| `networkName` | `string` | Network name (e.g., "mainnet", "fuji") |

**Example:**

```typescript
const result = await client.info.getNetworkName();
console.log("Network:", result.networkName);
```

**Related:**

- [API Reference](https://build.avax.network/docs/rpcs/other/info-rpc#infogetnetworkname)

---

### getNodeVersion

Get the version of this node. 
**Function Signature:**

```typescript
function getNodeVersion(): Promise<GetNodeVersionReturnType>;

interface GetNodeVersionReturnType {
  version: string;
  databaseVersion: string;
  gitCommit: string;
  vmVersions: Map<string, string>;
  rpcProtocolVersion: string;
}
```

**Returns:**

| Type | Description |
| --- | --- |
| `GetNodeVersionReturnType` | Node version object |

**Return Object:**

| Property | Type | Description |
| --- | --- | --- |
| `version` | `string` | Node version (e.g., "avalanche/1.9.4") |
| `databaseVersion` | `string` | Database version |
| `gitCommit` | `string` | Git commit hash |
| `vmVersions` | `Map<string, string>` | Map of VM IDs to their versions |
| `rpcProtocolVersion` | `string` | RPC protocol version |

**Example:**

```typescript
const version = await client.info.getNodeVersion();
console.log("Node version:", version.version);
console.log("Database version:", version.databaseVersion);
console.log("VM versions:", Object.fromEntries(version.vmVersions));
```

**Related:**

- [API Reference](https://build.avax.network/docs/rpcs/other/info-rpc#infogetnodeversion)

---

### getNodeID

Get the node ID, BLS key, and proof of possession.
**Function Signature:**

```typescript
function getNodeID(): Promise<GetNodeIDReturnType>;

interface GetNodeIDReturnType {
  nodeID: string;
  nodePOP: {
    publicKey: string;
    proofOfPossession: string;
  };
}
```

**Returns:**

| Type | Description |
| --- | --- |
| `GetNodeIDReturnType` | Node ID object |

**Return Object:**

| Property | Type | Description |
| --- | --- | --- |
| `nodeID` | `string` | Unique identifier of the node |
| `nodePOP.publicKey` | `string` | 48-byte hex representation of the BLS key |
| `nodePOP.proofOfPossession` | `string` | 96-byte hex representation of the BLS signature |

**Example:**

```typescript
const nodeID = await client.info.getNodeID();
console.log("Node ID:", nodeID.nodeID);
console.log("BLS Public Key:", nodeID.nodePOP.publicKey);
```

**Related:**

- [API Reference](https://build.avax.network/docs/rpcs/other/info-rpc#infogetnodeid)

---

### getNodeIP

Get the IP address of the node.

**Function Signature:**

```typescript
function getNodeIP(): Promise<GetNodeIPReturnType>;

interface GetNodeIPReturnType {
  ip: string;
}
```

**Returns:**

| Type | Description |
| --- | --- |
| `GetNodeIPReturnType` | Node IP object |

**Return Object:**

| Property | Type | Description |
| --- | --- | --- |
| `ip` | `string` | IP address of the node |

**Example:**

```typescript
const result = await client.info.getNodeIP();
console.log("Node IP:", result.ip);
```

**Related:**

- [API Reference](https://build.avax.network/docs/rpcs/other/info-rpc#infogetnodeip)

---

### getBlockchainID

Get blockchain ID from alias.
**Function Signature:**

```typescript
function getBlockchainID(
  params: GetBlockchainIDParameters
): Promise<GetBlockchainIDReturnType>;

interface GetBlockchainIDParameters {
  alias: string;
}

interface GetBlockchainIDReturnType {
  blockchainID: string;
}
```

**Parameters:**

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `alias` | `string` | Yes | Blockchain alias (e.g., "X", "P") |

**Returns:**

| Type | Description |
| --- | --- |
| `GetBlockchainIDReturnType` | Blockchain ID object |

**Return Object:**

| Property | Type | Description |
| --- | --- | --- |
| `blockchainID` | `string` | ID of the blockchain |

**Example:**

```typescript
const result = await client.info.getBlockchainID({ alias: "X" });
console.log("X-Chain ID:", result.blockchainID);
```

**Related:**

- [API Reference](https://build.avax.network/docs/rpcs/other/info-rpc#infogetblockchainid)

---

### getTxFee

Get transaction fees for various operations.
**Function Signature:**

```typescript
function getTxFee(): Promise<GetTxFeeReturnType>;

interface GetTxFeeReturnType {
  txFee: bigint;
  createAssetTxFee: bigint;
  createSubnetTxFee: bigint;
  transformSubnetTxFee: bigint;
  createBlockchainTxFee: bigint;
  addPrimaryNetworkValidatorFee: bigint;
  addPrimaryNetworkDelegatorFee: bigint;
  addSubnetValidatorFee: bigint;
  addSubnetDelegatorFee: bigint;
}
```

**Returns:**

| Type | Description |
| --- | --- |
| `GetTxFeeReturnType` | Transaction fees object |

**Return Object:**

| Property | Type | Description |
| --- | --- | --- |
| `txFee` | `bigint` | Base transaction fee |
| `createAssetTxFee` | `bigint` | Fee for creating an asset |
| `createSubnetTxFee` | `bigint` | Fee for creating a subnet |
| `transformSubnetTxFee` | `bigint` | Fee for transforming a subnet |
| `createBlockchainTxFee` | `bigint` | Fee for creating a blockchain |
| `addPrimaryNetworkValidatorFee` | `bigint` | Fee for adding a primary network validator |
| `addPrimaryNetworkDelegatorFee` | `bigint` | Fee for adding a primary network delegator |
| `addSubnetValidatorFee` | `bigint` | Fee for adding a subnet validator |
| `addSubnetDelegatorFee` | `bigint` | Fee for adding a subnet delegator |

**Example:**

```typescript
const fees = await client.info.getTxFee();
console.log("Base transaction fee:", fees.txFee);
console.log("Create asset fee:", fees.createAssetTxFee);
```

**Related:**

- [API Reference](https://build.avax.network/docs/rpcs/other/info-rpc#infogettxfee)

---

### getVMs

Get supported virtual machines.
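The response below maps VM IDs to arrays of aliases. As a quick illustration, a hypothetical helper (not part of the SDK) that inverts that lookup to find which VM is registered under a given alias:

```typescript
// Hypothetical helper (not part of the SDK): given the { vmID: aliases[] }
// map returned by info.getVMs, find the VM ID registered under an alias.
function findVMByAlias(
  vms: { [vmID: string]: string[] },
  alias: string
): string | undefined {
  return Object.keys(vms).find((vmID) => vms[vmID].includes(alias));
}
```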
**Function Signature:**

```typescript
function getVMs(): Promise<GetVMsReturnType>;

interface GetVMsReturnType {
  vms: {
    [key: string]: string[];
  };
}
```

**Returns:**

| Type | Description |
| --- | --- |
| `GetVMsReturnType` | VMs object |

**Return Object:**

| Property | Type | Description |
| --- | --- | --- |
| `vms` | `{ [key: string]: string[] }` | Map of VM IDs to their aliases |

**Example:**

```typescript
const vms = await client.info.getVMs();
console.log("Supported VMs:", vms.vms);
```

**Related:**

- [API Reference](https://build.avax.network/docs/rpcs/other/info-rpc#infogetvms)

---

### isBootstrapped

Check whether a given chain is done bootstrapping.

**Function Signature:**

```typescript
function isBootstrapped(
  params: IsBootstrappedParameters
): Promise<IsBootstrappedReturnType>;

interface IsBootstrappedParameters {
  chain: string;
}

interface IsBootstrappedReturnType {
  isBootstrapped: boolean;
}
```

**Parameters:**

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `chain` | `string` | Yes | Chain ID or alias (e.g., "X") |

**Returns:**

| Type | Description |
| --- | --- |
| `IsBootstrappedReturnType` | Bootstrapped status |

**Return Object:**

| Property | Type | Description |
| --- | --- | --- |
| `isBootstrapped` | `boolean` | Whether the chain is done bootstrapping |

**Example:**

```typescript
const result = await client.info.isBootstrapped({ chain: "X" });
if (result.isBootstrapped) {
  console.log("X-Chain is bootstrapped");
} else {
  console.log("X-Chain is still bootstrapping");
}
```

**Related:**

- [API Reference](https://build.avax.network/docs/rpcs/other/info-rpc#infoisbootstrapped)

---

### peers

Get peer information.
**Function Signature:**

```typescript
function peers(params: PeersParameters): Promise<PeersReturnType>;

interface PeersParameters {
  nodeIDs?: string[];
}

interface PeersReturnType {
  numPeers: number;
  peers: {
    ip: string;
    publicIP: string;
    nodeID: string;
    version: string;
    lastSent: string;
    lastReceived: string;
    benched: string[];
    observedUptime: number;
  }[];
}
```

**Parameters:**

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `nodeIDs` | `string[]` | No | Optional array of node IDs to filter |

**Returns:**

| Type | Description |
| --- | --- |
| `PeersReturnType` | Peers object |

**Return Object:**

| Property | Type | Description |
| --- | --- | --- |
| `numPeers` | `number` | Number of connected peers |
| `peers` | `array` | Array of peer information objects |

**Peer Object:**

| Property | Type | Description |
| --- | --- | --- |
| `ip` | `string` | Remote IP of the peer |
| `publicIP` | `string` | Public IP of the peer |
| `nodeID` | `string` | Prefixed Node ID of the peer |
| `version` | `string` | Version the peer is running |
| `lastSent` | `string` | Timestamp of last message sent to the peer |
| `lastReceived` | `string` | Timestamp of last message received from the peer |
| `benched` | `string[]` | Array of chain IDs the peer is benched on |
| `observedUptime` | `number` | Node's primary network uptime observed by the peer |

**Example:**

```typescript
const peers = await client.info.peers();
console.log("Number of peers:", peers.numPeers);
console.log("Peer details:", peers.peers);
```

**Related:**

- [API Reference](https://build.avax.network/docs/rpcs/other/info-rpc#infopeers)

---

### uptime

Get node uptime statistics.
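A common use of the statistics below is checking whether the node clears the staking-reward uptime threshold (currently 80% on the Avalanche primary network). A hypothetical helper, not part of the SDK, with the threshold kept configurable rather than hard-coded:

```typescript
// Hypothetical helper (not part of the SDK): decide whether the node's
// rewardingStakePercentage, as reported by info.uptime, clears a
// required uptime percentage (default 80, the current primary network
// requirement for staking rewards).
function meetsUptimeRequirement(
  rewardingStakePercentage: number,
  requiredPercentage = 80
): boolean {
  return rewardingStakePercentage >= requiredPercentage;
}
```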
**Function Signature:**

```typescript
function uptime(): Promise<UptimeReturnType>;

interface UptimeReturnType {
  rewardingStakePercentage: number;
  weightedAveragePercentage: number;
}
```

**Returns:**

| Type | Description |
| --- | --- |
| `UptimeReturnType` | Uptime object |

**Return Object:**

| Property | Type | Description |
| --- | --- | --- |
| `rewardingStakePercentage` | `number` | Percent of stake which thinks this node is above the uptime requirement |
| `weightedAveragePercentage` | `number` | Stake-weighted average of all observed uptimes for this node |

**Example:**

```typescript
const uptime = await client.info.uptime();
console.log("Rewarding stake percentage:", uptime.rewardingStakePercentage);
console.log("Weighted average percentage:", uptime.weightedAveragePercentage);
```

**Related:**

- [API Reference](https://build.avax.network/docs/rpcs/other/info-rpc#infouptime)

---

### upgrades

Get upgrade history.
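Most fields in the response below are activation timestamps. As a quick illustration, a hypothetical helper (not part of the SDK) that, given only the timestamp-valued fields, lists which upgrades are already active at a point in time:

```typescript
// Hypothetical helper (not part of the SDK): given a map of upgrade
// names to RFC 3339 activation timestamps (the timestamp-valued fields
// of info.upgrades), list the upgrades already active at `at`.
function activatedUpgrades(
  times: { [upgrade: string]: string },
  at: Date
): string[] {
  return Object.keys(times).filter(
    (name) => new Date(times[name]).getTime() <= at.getTime()
  );
}
```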
**Function Signature:**

```typescript
function upgrades(): Promise<UpgradesReturnType>;

interface UpgradesReturnType {
  apricotPhase1Time: string;
  apricotPhase2Time: string;
  apricotPhase3Time: string;
  apricotPhase4Time: string;
  apricotPhase4MinPChainHeight: number;
  apricotPhase5Time: string;
  apricotPhasePre6Time: string;
  apricotPhase6Time: string;
  apricotPhasePost6Time: string;
  banffTime: string;
  cortinaTime: string;
  cortinaXChainStopVertexID: string;
  durangoTime: string;
  etnaTime: string;
  fortunaTime?: string;
}
```

**Returns:**

| Type | Description |
| --- | --- |
| `UpgradesReturnType` | Upgrades object |

**Return Object:**

| Property | Type | Description |
| --- | --- | --- |
| `apricotPhase1Time` | `string` | Timestamp of Apricot Phase 1 upgrade |
| `apricotPhase2Time` | `string` | Timestamp of Apricot Phase 2 upgrade |
| `apricotPhase3Time` | `string` | Timestamp of Apricot Phase 3 upgrade |
| `apricotPhase4Time` | `string` | Timestamp of Apricot Phase 4 upgrade |
| `apricotPhase4MinPChainHeight` | `number` | Minimum P-Chain height for Apricot Phase 4 |
| `apricotPhase5Time` | `string` | Timestamp of Apricot Phase 5 upgrade |
| `apricotPhasePre6Time` | `string` | Timestamp of Apricot Phase Pre-6 upgrade |
| `apricotPhase6Time` | `string` | Timestamp of Apricot Phase 6 upgrade |
| `apricotPhasePost6Time` | `string` | Timestamp of Apricot Phase Post-6 upgrade |
| `banffTime` | `string` | Timestamp of Banff upgrade |
| `cortinaTime` | `string` | Timestamp of Cortina upgrade |
| `cortinaXChainStopVertexID` | `string` | X-Chain stop vertex ID for Cortina upgrade |
| `durangoTime` | `string` | Timestamp of Durango upgrade |
| `etnaTime` | `string` | Timestamp of Etna upgrade |
| `fortunaTime` | `string?` | Timestamp of Fortuna upgrade (optional) |

**Example:**

```typescript
const upgrades = await client.info.upgrades();
console.log("Apricot Phase 1:", upgrades.apricotPhase1Time);
console.log("Banff upgrade:", upgrades.banffTime);
```

**Related:**

- [API Reference](https://build.avax.network/docs/rpcs/other/info-rpc#infoupgrades)

---

### acps

Get peer preferences for Avalanche Community Proposals.

**Function Signature:**

```typescript
function acps(): Promise<AcpsReturnType>;

interface AcpsReturnType {
  acps: Map<
    number,
    {
      supportWeight: bigint;
      supporters: Set<string>;
      objectWeight: bigint;
      objectors: Set<string>;
      abstainWeight: bigint;
    }
  >;
}
```

**Returns:**

| Type | Description |
| --- | --- |
| `AcpsReturnType` | ACPs object |

**Return Object:**

| Property | Type | Description |
| --- | --- | --- |
| `acps` | `Map` | Map of ACP IDs to their peer preferences |

**ACP Object:**

| Property | Type | Description |
| --- | --- | --- |
| `supportWeight` | `bigint` | Weight of stake supporting the ACP |
| `supporters` | `Set<string>` | Set of node IDs supporting the ACP |
| `objectWeight` | `bigint` | Weight of stake objecting to the ACP |
| `objectors` | `Set<string>` | Set of node IDs objecting to the ACP |
| `abstainWeight` | `bigint` | Weight of stake abstaining from the ACP |

**Example:**

```typescript
const acps = await client.info.acps();
console.log("ACP preferences:", acps.acps);
```

**Related:**

- [API Reference](https://build.avax.network/docs/rpcs/other/info-rpc#infoacps)

---

## Health API Client

Provides health monitoring for the node.

**Access:** `client.health`

### health

Get health check results for the node.
**Function Signature:**

```typescript
function health(params: HealthParameters): Promise<HealthReturnType>;

interface HealthParameters {
  tags?: string[];
}

interface HealthReturnType {
  healthy: boolean;
  checks: {
    C: ChainHealthCheck;
    P: ChainHealthCheck;
    X: ChainHealthCheck;
    bootstrapped: {
      message: any[];
      timestamp: string;
      duration: number;
    };
    database: {
      timestamp: string;
      duration: number;
    };
    diskspace: {
      message: {
        availableDiskBytes: number;
      };
      timestamp: string;
      duration: number;
    };
    network: {
      message: {
        connectedPeers: number;
        sendFailRate: number;
        timeSinceLastMsgReceived: string;
        timeSinceLastMsgSent: string;
      };
      timestamp: string;
      duration: number;
    };
    router: {
      message: {
        longestRunningRequest: string;
        outstandingRequests: number;
      };
      timestamp: string;
      duration: number;
    };
  };
}

type ChainHealthCheck = {
  message: {
    engine: {
      consensus: {
        lastAcceptedHeight: number;
        lastAcceptedID: string;
        longestProcessingBlock: string;
        processingBlocks: number;
      };
      vm: null;
    };
    networking: {
      percentConnected: number;
    };
  };
  timestamp: string;
  duration: number;
};
```

**Parameters:**

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `tags` | `string[]` | No | Optional tags to filter health checks |

**Returns:**

| Type | Description |
| --- | --- |
| `HealthReturnType` | Health check results |

**Return Object:**

| Property | Type | Description |
| --- | --- | --- |
| `healthy` | `boolean` | Overall health status of the node |
| `checks` | `object` | Health check results for each component |

**Example:**

```typescript
const healthStatus = await client.health.health({
  tags: ["11111111111111111111111111111111LpoYY"],
});
console.log("Node healthy:", healthStatus.healthy);
console.log("C-Chain health:", healthStatus.checks.C);
```

**Related:**

- [API Reference](https://build.avax.network/docs/rpcs/other/health-rpc#healthhealth)

---

### liveness

Get
liveness check indicating if the node is alive and can handle requests.

**Function Signature:**

```typescript
function liveness(): Promise<LivenessReturnType>;

interface LivenessReturnType {
  checks: object;
  healthy: boolean;
}
```

**Returns:**

| Type | Description |
| --- | --- |
| `LivenessReturnType` | Liveness check result |

**Return Object:**

| Property | Type | Description |
| --- | --- | --- |
| `checks` | `object` | Liveness check details |
| `healthy` | `boolean` | Indicates if the node is alive and can handle requests |

**Example:**

```typescript
const livenessStatus = await client.health.liveness();
if (livenessStatus.healthy) {
  console.log("Node is alive and responding");
}
```

**Related:**

- [API Reference](https://build.avax.network/docs/rpcs/other/health-rpc#healthliveness)

---

### readiness

Get readiness check indicating if the node has finished initializing.

**Function Signature:**

```typescript
function readiness(params: ReadinessParameters): Promise<ReadinessReturnType>;

interface ReadinessParameters {
  tags?: string[];
}

interface ReadinessReturnType {
  checks: {
    [key: string]: {
      message: {
        timestamp: string;
        duration: number;
        contiguousFailures: number;
        timeOfFirstFailure: string | null;
      };
      healthy: boolean;
    };
  };
  healthy: boolean;
}
```

**Parameters:**

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `tags` | `string[]` | No | Optional tags to filter readiness checks |

**Returns:**

| Type | Description |
| --- | --- |
| `ReadinessReturnType` | Readiness check result |

**Return Object:**

| Property | Type | Description |
| --- | --- | --- |
| `checks` | `object` | Readiness check results for each component |
| `healthy` | `boolean` | Overall readiness status of the node |

**Example:**

```typescript
const readinessStatus = await
client.health.readiness({
  tags: ["11111111111111111111111111111111LpoYY"],
});
if (readinessStatus.healthy) {
  console.log("Node is ready to handle requests");
}
```

**Related:**

- [API Reference](https://build.avax.network/docs/rpcs/other/health-rpc#healthreadiness)

---

## Index API Clients

Provides indexed blockchain data queries for fast container lookups.

**Access:**

- `client.indexBlock.pChain` - P-Chain block index
- `client.indexBlock.cChain` - C-Chain block index
- `client.indexBlock.xChain` - X-Chain block index
- `client.indexTx.xChain` - X-Chain transaction index

### getContainerByIndex

Get container by its index.

**Function Signature:**

```typescript
function getContainerByIndex(
  params: GetContainerByIndexParameters
): Promise<GetContainerByIndexReturnType>;

interface GetContainerByIndexParameters {
  index: number;
  encoding: "hex";
}

interface GetContainerByIndexReturnType {
  id: string;
  bytes: string;
  timestamp: string;
  encoding: "hex";
  index: string;
}
```

**Parameters:**

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `index` | `number` | Yes | Container index (first container is at index 0) |
| `encoding` | `"hex"` | Yes | Encoding format (only "hex" is supported) |

**Returns:**

| Type | Description |
| --- | --- |
| `GetContainerByIndexReturnType` | Container object |

**Return Object:**

| Property | Type | Description |
| --- | --- | --- |
| `id` | `string` | Container's ID |
| `bytes` | `string` | Byte representation of the container |
| `timestamp` | `string` | Time at which this node accepted the container |
| `encoding` | `"hex"` | Encoding format used |
| `index` | `string` | How many containers were accepted before this one |

**Example:**

```typescript
const container = await client.indexBlock.pChain.getContainerByIndex({
  index: 12345,
  encoding: "hex",
});
console.log("Container ID:",
container.id);
console.log("Container bytes:", container.bytes);
```

**Related:**

- [API Reference](https://build.avax.network/docs/rpcs/other/index-rpc#indexgetcontainerbyindex)

---

### getContainerByID

Get container by its ID.

**Function Signature:**

```typescript
function getContainerByID(
  params: GetContainerByIDParameters
): Promise<GetContainerByIDReturnType>;

interface GetContainerByIDParameters {
  id: string;
  encoding: "hex";
}

interface GetContainerByIDReturnType {
  id: string;
  bytes: string;
  timestamp: string;
  encoding: "hex";
  index: string;
}
```

**Parameters:**

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `id` | `string` | Yes | Container's ID |
| `encoding` | `"hex"` | Yes | Encoding format (only "hex" is supported) |

**Returns:**

| Type | Description |
| --- | --- |
| `GetContainerByIDReturnType` | Container object |

**Return Object:**

| Property | Type | Description |
| --- | --- | --- |
| `id` | `string` | Container's ID |
| `bytes` | `string` | Byte representation of the container |
| `timestamp` | `string` | Time at which this node accepted the container |
| `encoding` | `"hex"` | Encoding format used |
| `index` | `string` | How many containers were accepted before this one |

**Example:**

```typescript
const container = await client.indexBlock.cChain.getContainerByID({
  id: "0x123...",
  encoding: "hex",
});
console.log("Container index:", container.index);
```

**Related:**

- [API Reference](https://build.avax.network/docs/rpcs/other/index-rpc#indexgetcontainerbyid)

---

### getContainerRange

Get range of containers by index.
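When fetching a large span of containers, the node may cap how many one call can return, so it is safer to fetch in bounded windows. A hypothetical helper (not part of the SDK) that splits an inclusive index range into pages suitable for repeated `getContainerRange` calls:

```typescript
// Hypothetical helper (not part of the SDK): split an inclusive
// [startIndex, endIndex] range into windows no larger than pageSize,
// each usable as the parameters of one getContainerRange call.
function rangeWindows(
  startIndex: number,
  endIndex: number,
  pageSize: number
): Array<{ startIndex: number; endIndex: number }> {
  const windows: Array<{ startIndex: number; endIndex: number }> = [];
  for (let start = startIndex; start <= endIndex; start += pageSize) {
    windows.push({
      startIndex: start,
      endIndex: Math.min(start + pageSize - 1, endIndex),
    });
  }
  return windows;
}
```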
**Function Signature:**

```typescript
function getContainerRange(
  params: GetContainerRangeParameters
): Promise<GetContainerRangeReturnType>;

interface GetContainerRangeParameters {
  startIndex: number;
  endIndex: number;
  encoding: "hex";
}

interface GetContainerRangeReturnType {
  containers: Array<{
    id: string;
    bytes: string;
    timestamp: string;
    encoding: "hex";
    index: string;
  }>;
}
```

**Parameters:**

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `startIndex` | `number` | Yes | Index of the first container to retrieve |
| `endIndex` | `number` | Yes | Index of the last container to retrieve (inclusive) |
| `encoding` | `"hex"` | Yes | Encoding format (only "hex" is supported) |

**Returns:**

| Type | Description |
| --- | --- |
| `GetContainerRangeReturnType` | Container range object |

**Return Object:**

| Property | Type | Description |
| --- | --- | --- |
| `containers` | `array` | Array of container details |

**Container Object:**

| Property | Type | Description |
| --- | --- | --- |
| `id` | `string` | Container's ID |
| `bytes` | `string` | Byte representation of the container |
| `timestamp` | `string` | Time at which this node accepted the container |
| `encoding` | `"hex"` | Encoding format used |
| `index` | `string` | How many containers were accepted before this one |

**Example:**

```typescript
const range = await client.indexBlock.xChain.getContainerRange({
  startIndex: 1000,
  endIndex: 1010,
  encoding: "hex",
});
console.log("Containers:", range.containers.length);
```

**Related:**

- [API Reference](https://build.avax.network/docs/rpcs/other/index-rpc#indexgetcontainerrange)

---

### getIndex

Get container index by ID.
**Function Signature:**

```typescript
function getIndex(params: GetIndexParameters): Promise<GetIndexReturnType>;

interface GetIndexParameters {
  id: string;
  encoding: "hex";
}

interface GetIndexReturnType {
  index: string;
}
```

**Parameters:**

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `id` | `string` | Yes | Container's ID |
| `encoding` | `"hex"` | Yes | Encoding format (only "hex" is supported) |

**Returns:**

| Type | Description |
| --- | --- |
| `GetIndexReturnType` | Index object |

**Return Object:**

| Property | Type | Description |
| --- | --- | --- |
| `index` | `string` | Index of the container (first container is at index 0) |

**Example:**

```typescript
const result = await client.indexTx.xChain.getIndex({
  id: "0x123...",
  encoding: "hex",
});
console.log("Container index:", result.index);
```

**Related:**

- [API Reference](https://build.avax.network/docs/rpcs/other/index-rpc#indexgetindex)

---

### getLastAccepted

Get last accepted container.
**Function Signature:**

```typescript
function getLastAccepted(
  params: GetLastAcceptedParameters
): Promise<GetLastAcceptedReturnType>;

interface GetLastAcceptedParameters {
  encoding: "hex";
}

interface GetLastAcceptedReturnType {
  id: string;
  bytes: string;
  timestamp: string;
  encoding: "hex";
  index: string;
}
```

**Parameters:**

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `encoding` | `"hex"` | Yes | Encoding format (only "hex" is supported) |

**Returns:**

| Type | Description |
| --- | --- |
| `GetLastAcceptedReturnType` | Last accepted container object |

**Return Object:**

| Property | Type | Description |
| --- | --- | --- |
| `id` | `string` | Container's ID |
| `bytes` | `string` | Byte representation of the container |
| `timestamp` | `string` | Time at which this node accepted the container |
| `encoding` | `"hex"` | Encoding format used |
| `index` | `string` | How many containers were accepted before this one |

**Example:**

```typescript
const lastAccepted = await client.indexBlock.pChain.getLastAccepted({
  encoding: "hex",
});
console.log("Last accepted container ID:", lastAccepted.id);
console.log("Last accepted index:", lastAccepted.index);
```

**Related:**

- [API Reference](https://build.avax.network/docs/rpcs/other/index-rpc#indexgetlastaccepted)

---

### isAccepted

Check if container is accepted in the index.
**Function Signature:**

```typescript
function isAccepted(
  params: IsAcceptedParameters
): Promise<IsAcceptedReturnType>;

interface IsAcceptedParameters {
  id: string;
  encoding: "hex";
}

interface IsAcceptedReturnType {
  isAccepted: boolean;
}
```

**Parameters:**

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `id` | `string` | Yes | Container's ID |
| `encoding` | `"hex"` | Yes | Encoding format (only "hex" is supported) |

**Returns:**

| Type | Description |
| --- | --- |
| `IsAcceptedReturnType` | Acceptance status object |

**Return Object:**

| Property | Type | Description |
| --- | --- | --- |
| `isAccepted` | `boolean` | Whether the container is in this index |

**Example:**

```typescript
const result = await client.indexBlock.cChain.isAccepted({
  id: "0x123...",
  encoding: "hex",
});
if (result.isAccepted) {
  console.log("Container is accepted");
} else {
  console.log("Container is not accepted");
}
```

**Related:**

- [API Reference](https://build.avax.network/docs/rpcs/other/index-rpc#indexisaccepted)

---

## ProposerVM API Client

Provides ProposerVM operations for each chain. ProposerVM is responsible for proposing blocks and managing consensus.

**Access:**

- `client.proposerVM.pChain` - P-Chain ProposerVM
- `client.proposerVM.xChain` - X-Chain ProposerVM
- `client.proposerVM.cChain` - C-Chain ProposerVM

### getProposedHeight

Get the current proposed height for the chain.
**Function Signature:**

```typescript
function getProposedHeight(): Promise<GetProposedHeightReturnType>;

interface GetProposedHeightReturnType {
  height: string;
}
```

**Returns:**

| Type | Description |
| --- | --- |
| `GetProposedHeightReturnType` | Proposed height object |

**Return Object:**

| Property | Type | Description |
| --- | --- | --- |
| `height` | `string` | This node's current proposer VM height |

**Example:**

```typescript
const pChainHeight = await client.proposerVM.pChain.getProposedHeight();
console.log("P-Chain proposed height:", pChainHeight.height);

const xChainHeight = await client.proposerVM.xChain.getProposedHeight();
console.log("X-Chain proposed height:", xChainHeight.height);

const cChainHeight = await client.proposerVM.cChain.getProposedHeight();
console.log("C-Chain proposed height:", cChainHeight.height);
```

**Related:**

- [API Reference](https://build.avax.network/docs/api-reference/proposervm-api#proposervmgetproposedheight)

---

### getCurrentEpoch

Get the current epoch information.
**Function Signature:**

```typescript
function getCurrentEpoch(): Promise<GetCurrentEpochReturnType>;

interface GetCurrentEpochReturnType {
  number: string;
  startTime: string;
  pChainHeight: string;
}
```

**Returns:**

| Type | Description |
| --- | --- |
| `GetCurrentEpochReturnType` | Current epoch object |

**Return Object:**

| Property | Type | Description |
| --- | --- | --- |
| `number` | `string` | The current epoch number |
| `startTime` | `string` | The epoch start time (Unix timestamp) |
| `pChainHeight` | `string` | The P-Chain height at the start of this epoch |

**Example:**

```typescript
const epoch = await client.proposerVM.pChain.getCurrentEpoch();
console.log("Current epoch:", epoch.number);
console.log("Epoch start time:", epoch.startTime);
console.log("P-Chain height at epoch start:", epoch.pChainHeight);
```

**Related:**

- [API Reference](https://build.avax.network/docs/api-reference/proposervm-api#proposervmgetcurrentepoch)

---

## Next Steps

- **[API Clients Documentation](clients/api-clients)** - Detailed API client reference
- **[Main Clients](clients)** - Client architecture overview
- **[Getting Started](getting-started)** - Quick start guide

# C-Chain Methods (/docs/tooling/avalanche-sdk/client/methods/public-methods/c-chain)

---
title: C-Chain Methods
description: Complete reference for C-Chain (Contract Chain) methods and EVM compatibility
---

## Overview

The C-Chain (Contract Chain) is Avalanche's instance of the Ethereum Virtual Machine (EVM), providing full Ethereum compatibility with additional Avalanche-specific features like cross-chain atomic transactions and UTXO management.

**Note:** The Avalanche Client SDK fully extends [viem](https://viem.sh), meaning all standard EVM methods are also available. See the [viem documentation](https://viem.sh/docs) for complete EVM method reference.
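Because the C-Chain uses the EVM's 18-decimal base unit, balances returned by standard EVM methods are denominated in wei. A hypothetical formatting helper (not part of the SDK; viem's own `formatEther` does the same job) illustrates the conversion:

```typescript
// Hypothetical helper (not part of the SDK): render a non-negative
// C-Chain wei balance (18 decimals) as a human-readable AVAX string.
// viem's formatEther provides equivalent, more complete behavior.
function formatAvax(wei: bigint): string {
  const whole = wei / 10n ** 18n;
  const frac = wei % 10n ** 18n;
  if (frac === 0n) return whole.toString();
  // Pad the fractional part to 18 digits, then trim trailing zeros.
  return `${whole}.${frac.toString().padStart(18, "0").replace(/0+$/, "")}`;
}
```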
## Atomic Transaction Operations

### getAtomicTx

Get an atomic transaction by its ID. Atomic transactions enable cross-chain transfers between the C-Chain and other Avalanche chains (P-Chain, X-Chain).

**Function Signature:**

```typescript
function getAtomicTx(
  params: GetAtomicTxParameters
): Promise<GetAtomicTxReturnType>;

interface GetAtomicTxParameters {
  txID: string;
  encoding?: "hex";
}

interface GetAtomicTxReturnType {
  tx: string;
  blockHeight: string;
  encoding: "hex";
}
```

**Parameters:**

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `txID` | `string` | Yes | Transaction ID in CB58 format |
| `encoding` | `"hex"` | No | Encoding format for the transaction (defaults to "hex") |

**Returns:**

| Type | Description |
| --- | --- |
| `GetAtomicTxReturnType` | Atomic transaction object |

**Return Object:**

| Property | Type | Description |
| --- | --- | --- |
| `tx` | `string` | Transaction bytes in hex format |
| `blockHeight` | `string` | Height of the block containing the transaction |
| `encoding` | `"hex"` | Encoding format used |

**Example:**

```typescript
import { createAvalancheClient } from "@avalanche-sdk/client";
import { avalanche } from "@avalanche-sdk/client/chains";

const client = createAvalancheClient({
  chain: avalanche,
  transport: { type: "http" },
});

const atomicTx = await client.cChain.getAtomicTx({
  txID: "2QouvMUbQ6oy7yQ9tLvL3L8tGQG2QK1wJ1q1wJ1q1wJ1q1wJ1q1wJ1q1wJ1",
});
console.log("Transaction:", atomicTx.tx);
console.log("Block height:", atomicTx.blockHeight);
```

**Related:**

- [API Reference](https://build.avax.network/docs/rpcs/c-chain#avaxgetatomictx)
- [getAtomicTxStatus](#getatomictxstatus) - Get transaction status

---

### getAtomicTxStatus

Get the status of an atomic transaction. Returns the current processing state and block information.
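Of the four status values this method can return, "Accepted" and "Dropped" are terminal, while "Processing" means the transaction is still being voted on. A hypothetical helper (not part of the SDK) that decides whether a poller should keep re-querying — note that treating "Unknown" as non-terminal is a judgment call, since it can also mean the node never saw the transaction:

```typescript
// Hypothetical helper (not part of the SDK): report whether an atomic
// transaction status can still change. "Accepted" and "Dropped" are
// terminal; "Processing" and "Unknown" are treated as worth re-polling.
type CChainAtomicTxStatus = "Accepted" | "Processing" | "Dropped" | "Unknown";

function isFinalStatus(status: CChainAtomicTxStatus): boolean {
  return status === "Accepted" || status === "Dropped";
}
```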
**Function Signature:** ```typescript function getAtomicTxStatus( params: GetAtomicTxStatusParameters ): Promise<GetAtomicTxStatusReturnType>; interface GetAtomicTxStatusParameters { txID: string; } interface GetAtomicTxStatusReturnType { status: CChainAtomicTxStatus; blockHeight: string; } type CChainAtomicTxStatus = "Accepted" | "Processing" | "Dropped" | "Unknown"; ``` **Parameters:** | Name | Type | Required | Description | | ------ | -------- | -------- | ----------------------------- | | `txID` | `string` | Yes | Transaction ID in CB58 format | **Returns:** | Type | Description | | ----------------------------- | ------------------------- | | `GetAtomicTxStatusReturnType` | Transaction status object | **Return Object:** | Property | Type | Description | | ------------- | ---------------------- | --------------------------------------------------------------------- | | `status` | `CChainAtomicTxStatus` | Transaction status: "Accepted", "Processing", "Dropped", or "Unknown" | | `blockHeight` | `string` | Height of the block containing the transaction (if accepted) | **Status Values:** - **Accepted**: Transaction is (or will be) accepted by every node - **Processing**: Transaction is being voted on by this node - **Dropped**: Transaction was dropped by this node because it considered the transaction invalid - **Unknown**: Transaction hasn't been seen by this node **Example:** ```typescript const status = await client.cChain.getAtomicTxStatus({ txID: "2QouvMUbQ6oy7yQ9tLvL3L8tGQG2QK1wJ1q1wJ1q1wJ1q1wJ1q1wJ1q1wJ1", }); console.log("Status:", status.status); if (status.status === "Accepted") { console.log("Block height:", status.blockHeight); } ``` **Related:** - [API Reference](https://build.avax.network/docs/rpcs/c-chain#avaxgetatomictxstatus) - [getAtomicTx](#getatomictx) - Get transaction details --- ## UTXO Operations ### getUTXOs Get the UTXOs (Unspent Transaction Outputs) for a set of addresses. UTXOs represent unspent native AVAX on the C-Chain from imported transactions.
**Function Signature:** ```typescript function getUTXOs(params: GetUTXOsParameters): Promise<GetUTXOsReturnType>; interface GetUTXOsParameters { addresses: string[]; limit?: number; startIndex?: { address: string; utxo: string; }; sourceChain?: string; encoding?: "hex"; } interface GetUTXOsReturnType { numFetched: number; utxos: string[]; endIndex: { address: string; utxo: string; }; } ``` **Parameters:** | Name | Type | Required | Description | | ------------- | ----------------------------------- | -------- | -------------------------------------------- | | `addresses` | `string[]` | Yes | Array of C-Chain addresses | | `limit` | `number` | No | Maximum number of UTXOs to return (max 1024) | | `startIndex` | `{ address: string; utxo: string }` | No | Pagination cursor for next page | | `sourceChain` | `string` | No | Source chain ID for filtering UTXOs | | `encoding` | `"hex"` | No | Encoding format for returned UTXOs | **Returns:** | Type | Description | | -------------------- | ------------------------- | | `GetUTXOsReturnType` | UTXO data with pagination | **Return Object:** | Property | Type | Description | | ------------ | ----------------------------------- | ---------------------------------------- | | `numFetched` | `number` | Number of UTXOs fetched in this response | | `utxos` | `string[]` | Array of UTXO bytes (hex encoded) | | `endIndex` | `{ address: string; utxo: string }` | Pagination cursor for fetching next page | **Example:** ```typescript // Get UTXOs from X-Chain const utxos = await client.cChain.getUTXOs({ addresses: ["0x742d35Cc6634C0532925a3b8D4C9db96C4b4d8b6"], limit: 100, sourceChain: "X", }); console.log("Number of UTXOs:", utxos.numFetched); console.log("UTXOs:", utxos.utxos); // Get next page if needed if (utxos.endIndex) { const moreUTXOs = await client.cChain.getUTXOs({ addresses: ["0x742d35Cc6634C0532925a3b8D4C9db96C4b4d8b6"], startIndex: utxos.endIndex, limit: 100, }); } ``` **Related:** - [API
Reference](https://build.avax.network/docs/rpcs/c-chain#avaxgetutxos) - [C-Chain Wallet Methods](../wallet-methods/c-chain-wallet) - Atomic transaction operations --- ## Transaction Operations ### issueTx Issue a transaction to the C-Chain. Submits a signed transaction for processing. **Function Signature:** ```typescript function issueTx(params: IssueTxParameters): Promise<IssueTxReturnType>; interface IssueTxParameters { tx: string; encoding: "hex"; } interface IssueTxReturnType { txID: string; } ``` **Parameters:** | Name | Type | Required | Description | | ---------- | -------- | -------- | ------------------------------- | | `tx` | `string` | Yes | Transaction bytes in hex format | | `encoding` | `"hex"` | Yes | Encoding format (must be "hex") | **Returns:** | Type | Description | | ------------------- | --------------------- | | `IssueTxReturnType` | Transaction ID object | **Return Object:** | Property | Type | Description | | -------- | -------- | ----------------------------- | | `txID` | `string` | Transaction ID in CB58 format | **Example:** ```typescript const txID = await client.cChain.issueTx({ tx: "0x00000009de31b4d8b22991d51aa6aa1fc733f23a851a8c9400000000000186a0...", encoding: "hex", }); console.log("Transaction ID:", txID.txID); ``` **Related:** - [API Reference](https://build.avax.network/docs/rpcs/c-chain#avaxissuetx) --- ## Admin Operations These methods are available for node administration and debugging. They require admin access to the node. ### setLogLevel Set the log level for the C-Chain node.
**Function Signature:** ```typescript function setLogLevel(params: SetLogLevelParameters): Promise<void>; interface SetLogLevelParameters { level: string; } ``` **Parameters:** | Name | Type | Required | Description | | ------- | -------- | -------- | -------------------------------------------------- | | `level` | `string` | Yes | Log level (e.g., "debug", "info", "warn", "error") | **Returns:** | Type | Description | | ------ | -------------------------------------- | | `void` | Promise resolves when log level is set | **Example:** ```typescript await client.cChain.setLogLevel({ level: "info", }); ``` **Related:** - [API Reference](https://build.avax.network/docs/rpcs/c-chain#adminsetloglevel) --- ### startCPUProfiler Start the CPU profiler for performance analysis. **Function Signature:** ```typescript function startCPUProfiler(): Promise<void>; ``` **Parameters:** No parameters required. **Returns:** | Type | Description | | ------ | ----------------------------------------- | | `void` | Promise resolves when profiler is started | **Example:** ```typescript await client.cChain.startCPUProfiler(); ``` **Related:** - [API Reference](https://build.avax.network/docs/rpcs/c-chain#adminstartcpuprofiler) - [stopCPUProfiler](#stopcpuprofiler) - Stop the CPU profiler --- ### stopCPUProfiler Stop the CPU profiler. **Function Signature:** ```typescript function stopCPUProfiler(): Promise<void>; ``` **Parameters:** No parameters required. **Returns:** | Type | Description | | ------ | ----------------------------------------- | | `void` | Promise resolves when profiler is stopped | **Example:** ```typescript await client.cChain.stopCPUProfiler(); ``` **Related:** - [API Reference](https://build.avax.network/docs/rpcs/c-chain#adminstopcpuprofiler) - [startCPUProfiler](#startcpuprofiler) - Start the CPU profiler --- ### memoryProfile Get the memory profile of the C-Chain node. **Function Signature:** ```typescript function memoryProfile(): Promise<void>; ``` **Parameters:** No parameters required.
**Returns:** | Type | Description | | ------ | ------------------------------------------------- | | `void` | Promise resolves when memory profile is retrieved | **Example:** ```typescript await client.cChain.memoryProfile(); ``` **Related:** - [API Reference](https://build.avax.network/docs/rpcs/c-chain#adminmemoryprofile) --- ### lockProfile Lock the profile to prevent modifications. **Function Signature:** ```typescript function lockProfile(): Promise<void>; ``` **Parameters:** No parameters required. **Returns:** | Type | Description | | ------ | --------------------------------------- | | `void` | Promise resolves when profile is locked | **Example:** ```typescript await client.cChain.lockProfile(); ``` **Related:** - [API Reference](https://build.avax.network/docs/rpcs/c-chain#adminlockprofile) --- ## Standard EVM Methods The C-Chain client extends viem's Public Client, providing access to all standard Ethereum methods. Here are some commonly used methods: ### Block Operations ```typescript // Get block number const blockNumber = await client.getBlockNumber(); // Get block by number const block = await client.getBlock({ blockNumber: 12345n, }); // Get block by hash const blockByHash = await client.getBlock({ blockHash: "0x...", }); ``` ### Balance Operations ```typescript // Get balance const balance = await client.getBalance({ address: "0x742d35Cc6634C0532925a3b8D4C9db96C4b4d8b6", }); // Get balance with block number const balanceAtBlock = await client.getBalance({ address: "0x742d35Cc6634C0532925a3b8D4C9db96C4b4d8b6", blockNumber: 12345n, }); ``` ### Transaction Operations ```typescript // Get transaction const tx = await client.getTransaction({ hash: "0x...", }); // Get transaction receipt const receipt = await client.getTransactionReceipt({ hash: "0x...", }); // Get transaction count (nonce) const nonce = await client.getTransactionCount({ address: "0x742d35Cc6634C0532925a3b8D4C9db96C4b4d8b6", }); ``` ### Gas Operations ```typescript // Get gas price const
gasPrice = await client.getGasPrice(); // Get max priority fee per gas const maxPriorityFee = await client.maxPriorityFeePerGas(); // Estimate gas const estimatedGas = await client.estimateGas({ to: "0x742d35Cc6634C0532925a3b8D4C9db96C4b4d8b6", value: parseEther("0.001"), }); ``` ### Contract Operations ```typescript // Read contract const result = await client.readContract({ address: "0x...", abi: [...], functionName: "balanceOf", args: [address], }); // Simulate contract const { request } = await client.simulateContract({ address: "0x...", abi: [...], functionName: "transfer", args: [to, amount], }); ``` **For complete EVM method reference, see:** [Viem Documentation](https://viem.sh/docs) --- ## Next Steps - **[C-Chain Wallet Methods](../wallet-methods/c-chain-wallet)** - Atomic transaction operations - **[Wallet Client](../../clients/wallet-client)** - Complete wallet operations - **[Account Management](../../accounts)** - Account types and management - **[Viem Documentation](https://viem.sh/docs)** - Complete EVM method reference # P-Chain Methods (/docs/tooling/avalanche-sdk/client/methods/public-methods/p-chain) --- title: P-Chain Methods description: Complete reference for P-Chain (Platform Chain) methods --- ## Overview The P-Chain (Platform Chain) is Avalanche's coordinating chain responsible for managing validators, delegators, subnets, and blockchains. This reference covers all read-only P-Chain operations available through the Avalanche Client SDK. ## Balance Operations ### getBalance Get the balance of AVAX controlled by a given address. 
**Function Signature:** ```typescript function getBalance( params: GetBalanceParameters ): Promise<GetBalanceReturnType>; interface GetBalanceParameters { addresses: string[]; } interface GetBalanceReturnType { balance: bigint; unlocked: bigint; lockedStakeable: bigint; lockedNotStakeable: bigint; utxoIDs: { txID: string; outputIndex: number; }[]; } ``` **Parameters:** | Name | Type | Required | Description | | ----------- | ---------- | -------- | ----------------------------------- | | `addresses` | `string[]` | Yes | Array of P-Chain addresses to query | **Returns:** | Type | Description | | ---------------------- | -------------------------- | | `GetBalanceReturnType` | Balance information object | **Return Object:** | Property | Type | Description | | -------------------- | -------- | ------------------------------------------- | | `balance` | `bigint` | Total balance | | `unlocked` | `bigint` | Unlocked balance | | `lockedStakeable` | `bigint` | Locked and stakeable balance | | `lockedNotStakeable` | `bigint` | Locked but not stakeable balance | | `utxoIDs` | `array` | Array of UTXO IDs referencing the addresses | **Example:** ```typescript import { createAvalancheClient } from "@avalanche-sdk/client"; import { avalanche } from "@avalanche-sdk/client/chains"; const client = createAvalancheClient({ chain: avalanche, transport: { type: "http" }, }); const balance = await client.pChain.getBalance({ addresses: ["P-custom18jma8ppw3nhx5r4ap8clazz0dps7rv5u9xde7p"], }); console.log("Total balance:", balance.balance); console.log("Unlocked:", balance.unlocked); console.log("Locked stakeable:", balance.lockedStakeable); ``` **Related:** - [API Reference](https://build.avax.network/docs/rpcs/p-chain#platformgetbalance) - [getUTXOs](#getutxos) - Get UTXOs for addresses --- ### getUTXOs Get the UTXOs (Unspent Transaction Outputs) controlled by a set of addresses.
**Function Signature:** ```typescript function getUTXOs(params: GetUTXOsParameters): Promise<GetUTXOsReturnType>; interface GetUTXOsParameters { addresses: string[]; sourceChain?: string; limit?: number; startIndex?: { address: string; utxo: string; }; encoding?: "hex"; } interface GetUTXOsReturnType { numFetched: number; utxos: string[]; endIndex: { address: string; utxo: string; }; sourceChain?: string; encoding: "hex"; } ``` **Parameters:** | Name | Type | Required | Description | | ------------- | ----------------------------------- | -------- | ----------------------------------------------- | | `addresses` | `string[]` | Yes | Array of P-Chain addresses | | `sourceChain` | `string` | No | Source chain ID (e.g., "X" for X-Chain) | | `limit` | `number` | No | Maximum number of UTXOs to return | | `startIndex` | `{ address: string; utxo: string }` | No | Pagination cursor for next page | | `encoding` | `"hex"` | No | Encoding format (can only be "hex" if provided) | **Returns:** | Type | Description | | -------------------- | ------------------------- | | `GetUTXOsReturnType` | UTXO data with pagination | **Return Object:** | Property | Type | Description | | ------------- | ----------------------------------- | ---------------------------------------- | | `numFetched` | `number` | Number of UTXOs fetched in this response | | `utxos` | `string[]` | Array of UTXO bytes (hex encoded) | | `endIndex` | `{ address: string; utxo: string }` | Pagination cursor for fetching next page | | `sourceChain` | `string` | Source chain ID (if specified) | | `encoding` | `"hex"` | Encoding format used | **Example:** ```typescript // Get first page const utxos = await client.pChain.getUTXOs({ addresses: ["P-custom18jma8ppw3nhx5r4ap8clazz0dps7rv5u9xde7p"], limit: 100, }); console.log("Fetched UTXOs:", utxos.numFetched); console.log("UTXOs:", utxos.utxos); // Get next page if needed if (utxos.endIndex) { const moreUTXOs = await client.pChain.getUTXOs({ addresses:
["P-custom18jma8ppw3nhx5r4ap8clazz0dps7rv5u9xde7p"], startIndex: utxos.endIndex, limit: 100, }); } ``` **Related:** - [API Reference](https://build.avax.network/docs/rpcs/p-chain#platformgetutxos) - [getBalance](#getbalance) - Get balance summary --- ## Validator Operations ### getCurrentValidators Get the current validators of the specified Subnet. **Function Signature:** ```typescript function getCurrentValidators( params: GetCurrentValidatorsParameters ): Promise<GetCurrentValidatorsReturnType>; interface GetCurrentValidatorsParameters { subnetID?: string | Buffer; nodeIDs?: string[]; } interface GetCurrentValidatorsReturnType { validators: Array<{ accruedDelegateeReward: string; txID: string; startTime: string; endTime?: string; stakeAmount: string; nodeID: string; weight: string; validationRewardOwner?: { locktime: string; threshold: string; addresses: string[]; }; delegationRewardOwner?: { locktime: string; threshold: string; addresses: string[]; }; signer?: { publicKey: string; proofOfPossession: string; }; delegatorCount?: string; delegatorWeight?: string; potentialReward?: string; delegationFee?: string; uptime?: string; connected?: boolean; delegators?: Array<{ txID: string; startTime: string; endTime: string; stakeAmount: string; nodeID: string; rewardOwner: { locktime: string; threshold: string; addresses: string[]; }; potentialReward: string; }>; }>; } ``` **Parameters:** | Name | Type | Required | Description | | ---------- | ------------------ | -------- | --------------------------------------- | | `subnetID` | `string \| Buffer` | No | Subnet ID (defaults to Primary Network) | | `nodeIDs` | `string[]` | No | Specific NodeIDs to query | **Returns:** | Type | Description | | -------------------------------- | ---------------------- | | `GetCurrentValidatorsReturnType` | Validators list object | **Return Object:** | Property | Type | Description | | ------------ | ------- | ------------------------------------------- | | `validators` | `array` | List of validators for the specified
Subnet | **Note:** Many fields in the validator object are omitted if `subnetID` is not the Primary Network. The `delegators` field is only included when `nodeIDs` specifies a single NodeID. **Example:** ```typescript // Get all validators on Primary Network const validators = await client.pChain.getCurrentValidators({}); console.log("Total validators:", validators.validators.length); // Get validators for specific subnet const subnetValidators = await client.pChain.getCurrentValidators({ subnetID: "11111111111111111111111111111111LpoYY", }); // Get specific validators const specificValidators = await client.pChain.getCurrentValidators({ subnetID: "11111111111111111111111111111111LpoYY", nodeIDs: ["NodeID-7Xhw2mDxuDS44j42TCB6U5579esbSt3Lg"], }); ``` **Related:** - [API Reference](https://build.avax.network/docs/rpcs/p-chain#platformgetcurrentvalidators) - [getValidatorsAt](#getvalidatorsat) - Get validators at specific height - [getAllValidatorsAt](#getallvalidatorsat) - Get all validators at height - [sampleValidators](#samplevalidators) - Sample validators --- ### getValidatorsAt Get the validators at a specific height. 
**Function Signature:** ```typescript function getValidatorsAt( params: GetValidatorsAtParameters ): Promise<GetValidatorsAtReturnType>; interface GetValidatorsAtParameters { height: number; subnetID?: string; } interface GetValidatorsAtReturnType { validators: Record<string, string>; } ``` **Parameters:** | Name | Type | Required | Description | | ---------- | -------- | -------- | --------------------------------------- | | `height` | `number` | Yes | Block height to query | | `subnetID` | `string` | No | Subnet ID (defaults to Primary Network) | **Returns:** | Type | Description | | --------------------------- | --------------------- | | `GetValidatorsAtReturnType` | Validators map object | **Return Object:** | Property | Type | Description | | ------------ | ------------------------ | ------------------------------------------- | | `validators` | `Record<string, string>` | Map of validator IDs to their stake amounts | **Example:** ```typescript const validators = await client.pChain.getValidatorsAt({ height: 1000001, subnetID: "11111111111111111111111111111111LpoYY", }); console.log("Validators at height:", validators.validators); ``` **Related:** - [API Reference](https://build.avax.network/docs/rpcs/p-chain#platformgetvalidatorsat) - [getCurrentValidators](#getcurrentvalidators) - Get current validators - [getAllValidatorsAt](#getallvalidatorsat) - Get all validators at height - [getHeight](#getheight) - Get current height --- ### getAllValidatorsAt Get all validators at a specific height across all Subnets and the Primary Network.
**Function Signature:** ```typescript function getAllValidatorsAt( params: GetAllValidatorsAtParameters ): Promise<GetAllValidatorsAtReturnType>; interface GetAllValidatorsAtParameters { height: number | "proposed"; } interface GetAllValidatorsAtReturnType { validatorSets: Record< string, { validators: Array<{ publicKey: string; weight: string; nodeIDs: string[]; }>; totalWeight: string; } >; } ``` **Parameters:** | Name | Type | Required | Description | | -------- | ---------------------- | -------- | -------------------------------------------------- | | `height` | `number \| "proposed"` | Yes | P-Chain height or "proposed" for proposervm height | **Returns:** | Type | Description | | ------------------------------ | --------------------- | | `GetAllValidatorsAtReturnType` | Validator sets object | **Return Object:** | Property | Type | Description | | --------------- | -------- | ------------------------------------------------ | | `validatorSets` | `object` | Map of Subnet IDs to their validator information | **Note:** The public API (api.avax.network) only supports height within 1000 blocks from the P-Chain tip.
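Given the 1000-block window noted above, callers hitting the public API may want to clamp a requested height into the servable range before querying. A minimal sketch (the window constant and helper name are illustrative, not part of the SDK):

```typescript
// The public API serves only heights within 1000 blocks of the
// P-Chain tip; clamp a requested height into that window.
// WINDOW and clampToServableHeight are illustrative names.
const WINDOW = 1000n;

function clampToServableHeight(tip: bigint, requested: bigint): bigint {
  const oldest = tip > WINDOW ? tip - WINDOW : 0n; // oldest servable height
  if (requested < oldest) return oldest;
  if (requested > tip) return tip;
  return requested;
}
```

The clamped value can then be passed as `height` to `getAllValidatorsAt`, with the tip fetched first via `getHeight`.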
**Example:** ```typescript // Get all validators at specific height const validators = await client.pChain.getAllValidatorsAt({ height: 1000001, }); // Get validators at proposed height const proposedValidators = await client.pChain.getAllValidatorsAt({ height: "proposed", }); console.log("Subnet IDs:", Object.keys(validators.validatorSets)); Object.entries(validators.validatorSets).forEach(([subnetID, set]) => { console.log(`Subnet ${subnetID}:`); console.log(` Total weight: ${set.totalWeight}`); console.log(` Validators: ${set.validators.length}`); }); ``` **Related:** - [API Reference](https://build.avax.network/docs/rpcs/p-chain#platformgetallvalidatorsat) - [getCurrentValidators](#getcurrentvalidators) - Get current validators - [getValidatorsAt](#getvalidatorsat) - Get validators at height for specific subnet --- ### sampleValidators Sample validators from the specified Subnet. **Function Signature:** ```typescript function sampleValidators( params: SampleValidatorsParameters ): Promise<SampleValidatorsReturnType>; interface SampleValidatorsParameters { samplingSize: number; subnetID?: string; pChainHeight?: number; } interface SampleValidatorsReturnType { validators: string[]; } ``` **Parameters:** | Name | Type | Required | Description | | -------------- | -------- | -------- | --------------------------------------- | | `samplingSize` | `number` | Yes | Number of validators to sample | | `subnetID` | `string` | No | Subnet ID (defaults to Primary Network) | | `pChainHeight` | `number` | No | Block height (defaults to current) | **Returns:** | Type | Description | | ---------------------------- | ------------------------- | | `SampleValidatorsReturnType` | Sampled validators object | **Return Object:** | Property | Type | Description | | ------------ | ---------- | ---------------------------------- | | `validators` | `string[]` | Array of sampled validator NodeIDs | **Example:** ```typescript const sampled = await client.pChain.sampleValidators({ samplingSize: 5, subnetID:
"11111111111111111111111111111111LpoYY", }); console.log("Sampled validators:", sampled.validators); ``` **Related:** - [API Reference](https://build.avax.network/docs/rpcs/p-chain#platformsamplevalidators) - [getCurrentValidators](#getcurrentvalidators) - Get all current validators --- ## Block Operations ### getHeight Get the height of the last accepted block. **Function Signature:** ```typescript function getHeight(): Promise<GetHeightReturnType>; interface GetHeightReturnType { height: number; } ``` **Parameters:** No parameters required. **Returns:** | Type | Description | | --------------------- | ------------- | | `GetHeightReturnType` | Height object | **Return Object:** | Property | Type | Description | | -------- | -------- | ---------------------------- | | `height` | `number` | Current P-Chain block height | **Example:** ```typescript const height = await client.pChain.getHeight(); console.log("Current P-Chain height:", height.height); ``` **Related:** - [API Reference](https://build.avax.network/docs/rpcs/p-chain#platformgetheight) - [getBlockByHeight](#getblockbyheight) - Get block by height - [getProposedHeight](#getproposedheight) - Get proposed height --- ### getBlockByHeight Get a block by its height.
**Function Signature:** ```typescript function getBlockByHeight( params: GetBlockByHeightParameters ): Promise<GetBlockByHeightReturnType>; interface GetBlockByHeightParameters { height: number; encoding?: "hex" | "json"; } interface GetBlockByHeightReturnType { encoding: "hex" | "json"; block: string | object; } ``` **Parameters:** | Name | Type | Required | Description | | ---------- | ----------------- | -------- | ----------------------------------- | | `height` | `number` | Yes | Block height | | `encoding` | `"hex" \| "json"` | No | Encoding format (defaults to "hex") | **Returns:** | Type | Description | | ---------------------------- | ----------------- | | `GetBlockByHeightReturnType` | Block data object | **Return Object:** | Property | Type | Description | | ---------- | ------------------ | ------------------------------------------- | | `encoding` | `"hex" \| "json"` | Encoding format used | | `block` | `string \| object` | Block data in the specified encoding format | **Example:** ```typescript const block = await client.pChain.getBlockByHeight({ height: 12345, encoding: "hex", }); console.log("Block data:", block.block); ``` **Related:** - [API Reference](https://build.avax.network/docs/rpcs/p-chain#platformgetblockbyheight) - [getBlock](#getblock) - Get block by ID - [getHeight](#getheight) - Get current height --- ### getBlock Get a block by its ID.
**Function Signature:** ```typescript function getBlock(params: GetBlockParameters): Promise<GetBlockReturnType>; interface GetBlockParameters { blockId: string; encoding?: "hex" | "json"; } interface GetBlockReturnType { encoding: "hex" | "json"; block: string | object; } ``` **Parameters:** | Name | Type | Required | Description | | ---------- | ----------------- | -------- | ----------------------------------- | | `blockId` | `string` | Yes | Block ID in CB58 format | | `encoding` | `"hex" \| "json"` | No | Encoding format (defaults to "hex") | **Returns:** | Type | Description | | -------------------- | ----------------- | | `GetBlockReturnType` | Block data object | **Return Object:** | Property | Type | Description | | ---------- | ------------------ | ------------------------------------------- | | `encoding` | `"hex" \| "json"` | Encoding format used | | `block` | `string \| object` | Block data in the specified encoding format | **Example:** ```typescript const block = await client.pChain.getBlock({ blockId: "d7WYmb8VeZNHsny3EJCwMm6QA37s1EHwMxw1Y71V3FqPZ5EFG", encoding: "hex", }); console.log("Block:", block.block); ``` **Related:** - [API Reference](https://build.avax.network/docs/rpcs/p-chain#platformgetblock) - [getBlockByHeight](#getblockbyheight) - Get block by height --- ## Staking Operations ### getStake Get the stake amount for a set of addresses.
**Function Signature:** ```typescript function getStake(params: GetStakeParameters): Promise<GetStakeReturnType>; interface GetStakeParameters { addresses: string[]; subnetID: string; } interface GetStakeReturnType { stakeAmount: bigint; } ``` **Parameters:** | Name | Type | Required | Description | | ----------- | ---------- | -------- | ----------------- | | `addresses` | `string[]` | Yes | P-Chain addresses | | `subnetID` | `string` | Yes | Subnet ID | **Returns:** | Type | Description | | -------------------- | ------------------- | | `GetStakeReturnType` | Stake amount object | **Return Object:** | Property | Type | Description | | ------------- | -------- | ------------------------------------ | | `stakeAmount` | `bigint` | Total stake amount for the addresses | **Example:** ```typescript const stake = await client.pChain.getStake({ addresses: ["P-custom18jma8ppw3nhx5r4ap8clazz0dps7rv5u9xde7p"], subnetID: "11111111111111111111111111111111LpoYY", }); console.log("Stake amount:", stake.stakeAmount); ``` **Related:** - [API Reference](https://build.avax.network/docs/rpcs/p-chain#platformgetstake) - [getTotalStake](#gettotalstake) - Get total subnet stake - [getMinStake](#getminstake) - Get minimum stake requirements --- ### getTotalStake Get the total amount of stake for a Subnet.
**Function Signature:** ```typescript function getTotalStake( params: GetTotalStakeParameters ): Promise<GetTotalStakeReturnType>; interface GetTotalStakeParameters { subnetID: string; } interface GetTotalStakeReturnType { stake: bigint; weight: bigint; } ``` **Parameters:** | Name | Type | Required | Description | | ---------- | -------- | -------- | ----------- | | `subnetID` | `string` | Yes | Subnet ID | **Returns:** | Type | Description | | ------------------------- | ------------------ | | `GetTotalStakeReturnType` | Total stake object | **Return Object:** | Property | Type | Description | | -------- | -------- | --------------------------------- | | `stake` | `bigint` | Total stake amount for the subnet | | `weight` | `bigint` | Total weight for the subnet | **Example:** ```typescript const totalStake = await client.pChain.getTotalStake({ subnetID: "11111111111111111111111111111111LpoYY", }); console.log("Total stake:", totalStake.stake); console.log("Total weight:", totalStake.weight); ``` **Related:** - [API Reference](https://build.avax.network/docs/rpcs/p-chain#platformgettotalstake) - [getStake](#getstake) - Get stake for addresses - [getMinStake](#getminstake) - Get minimum stake --- ### getMinStake Get the minimum stake required to validate or delegate.
**Function Signature:** ```typescript function getMinStake( params: GetMinStakeParameters ): Promise<GetMinStakeReturnType>; interface GetMinStakeParameters { subnetID: string; } interface GetMinStakeReturnType { minValidatorStake: bigint; minDelegatorStake: bigint; } ``` **Parameters:** | Name | Type | Required | Description | | ---------- | -------- | -------- | ----------- | | `subnetID` | `string` | Yes | Subnet ID | **Returns:** | Type | Description | | ----------------------- | -------------------- | | `GetMinStakeReturnType` | Minimum stake object | **Return Object:** | Property | Type | Description | | ------------------- | -------- | -------------------------------------------- | | `minValidatorStake` | `bigint` | Minimum stake required to become a validator | | `minDelegatorStake` | `bigint` | Minimum stake required to delegate | **Example:** ```typescript const minStake = await client.pChain.getMinStake({ subnetID: "11111111111111111111111111111111LpoYY", }); console.log("Min validator stake:", minStake.minValidatorStake); console.log("Min delegator stake:", minStake.minDelegatorStake); ``` **Related:** - [API Reference](https://build.avax.network/docs/rpcs/p-chain#platformgetminstake) - [getTotalStake](#gettotalstake) - Get total subnet stake --- ## Subnet Operations ### getSubnet Get information about a subnet.
**Function Signature:** ```typescript function getSubnet(params: GetSubnetParameters): Promise<GetSubnetReturnType>; interface GetSubnetParameters { subnetID: string; } interface GetSubnetReturnType { isPermissioned: boolean; controlKeys: string[]; threshold: string; locktime: string; subnetTransformationTxID: string; conversionID: string; managerChainID: string; managerAddress: string | null; } ``` **Parameters:** | Name | Type | Required | Description | | ---------- | -------- | -------- | -------------------- | | `subnetID` | `string` | Yes | The ID of the subnet | **Returns:** | Type | Description | | --------------------- | ------------------------- | | `GetSubnetReturnType` | Subnet information object | **Return Object:** | Property | Type | Description | | -------------------------- | ---------------- | ------------------------------------ | | `isPermissioned` | `boolean` | Whether the subnet is permissioned | | `controlKeys` | `string[]` | Control keys for the subnet | | `threshold` | `string` | Signature threshold | | `locktime` | `string` | Locktime for the subnet | | `subnetTransformationTxID` | `string` | Subnet transformation transaction ID | | `conversionID` | `string` | Conversion ID | | `managerChainID` | `string` | Manager chain ID | | `managerAddress` | `string \| null` | Manager address (null if not set) | **Example:** ```typescript const subnet = await client.pChain.getSubnet({ subnetID: "11111111111111111111111111111111LpoYY", }); console.log("Is permissioned:", subnet.isPermissioned); console.log("Control keys:", subnet.controlKeys); console.log("Threshold:", subnet.threshold); ``` **Related:** - [API Reference](https://build.avax.network/docs/rpcs/p-chain#platformgetsubnet) - [getSubnets](#getsubnets) - Get multiple subnets --- ### getSubnets Get information about multiple subnets.
**Function Signature:**

```typescript
function getSubnets(
  params: GetSubnetsParameters
): Promise<GetSubnetsReturnType>;

interface GetSubnetsParameters {
  ids: string[];
}

interface GetSubnetsReturnType {
  subnets: {
    id: string;
    controlKeys: string[];
    threshold: string;
  }[];
}
```

**Parameters:**

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `ids` | `string[]` | Yes | Array of subnet IDs to query |

**Returns:**

| Type | Description |
| --- | --- |
| `GetSubnetsReturnType` | Subnets information object |

**Return Object:**

| Property | Type | Description |
| --- | --- | --- |
| `subnets` | `array` | Array of subnet information objects |

**Example:**

```typescript
const subnets = await client.pChain.getSubnets({
  ids: [
    "11111111111111111111111111111111LpoYY",
    "SubnetID-11111111111111111111111111111111LpoYY",
  ],
});

console.log("Number of subnets:", subnets.subnets.length);
subnets.subnets.forEach((subnet) => {
  console.log("Subnet ID:", subnet.id);
  console.log("Control keys:", subnet.controlKeys);
});
```

**Related:**

- [API Reference](https://build.avax.network/docs/rpcs/p-chain#platformgetsubnets)
- [getSubnet](#getsubnet) - Get single subnet

---

### getStakingAssetID

Get the staking asset ID for a subnet.
**Function Signature:**

```typescript
function getStakingAssetID(
  params: GetStakingAssetIDParameters
): Promise<GetStakingAssetIDReturnType>;

interface GetStakingAssetIDParameters {
  subnetID: string;
}

interface GetStakingAssetIDReturnType {
  assetID: string;
}
```

**Parameters:**

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `subnetID` | `string` | Yes | The ID of the subnet |

**Returns:**

| Type | Description |
| --- | --- |
| `GetStakingAssetIDReturnType` | Staking asset ID object |

**Return Object:**

| Property | Type | Description |
| --- | --- | --- |
| `assetID` | `string` | Asset ID used for staking on the subnet |

**Example:**

```typescript
const stakingAsset = await client.pChain.getStakingAssetID({
  subnetID: "11111111111111111111111111111111LpoYY",
});

console.log("Staking asset ID:", stakingAsset.assetID);
```

**Related:**

- [API Reference](https://build.avax.network/docs/rpcs/p-chain#platformgetstakingassetid)
- [getSubnet](#getsubnet) - Get subnet information

---

## Blockchain Operations

### getBlockchains

Get all the blockchains that exist (excluding the P-Chain).

**Function Signature:**

```typescript
function getBlockchains(): Promise<GetBlockchainsReturnType>;

interface GetBlockchainsReturnType {
  blockchains: {
    id: string;
    name: string;
    subnetID: string;
    vmID: string;
  }[];
}
```

**Parameters:**

No parameters required.
**Returns:**

| Type | Description |
| --- | --- |
| `GetBlockchainsReturnType` | Blockchains list object |

**Return Object:**

| Property | Type | Description |
| --- | --- | --- |
| `blockchains` | `array` | Array of blockchain information objects |

**Example:**

```typescript
const blockchains = await client.pChain.getBlockchains();

console.log("Number of blockchains:", blockchains.blockchains.length);
blockchains.blockchains.forEach((blockchain) => {
  console.log("Blockchain:", blockchain.name);
  console.log("  ID:", blockchain.id);
  console.log("  Subnet ID:", blockchain.subnetID);
  console.log("  VM ID:", blockchain.vmID);
});
```

**Related:**

- [API Reference](https://build.avax.network/docs/rpcs/p-chain#platformgetblockchains)
- [getBlockchainStatus](#getblockchainstatus) - Get blockchain status

---

### getBlockchainStatus

Get the status of a blockchain.

**Function Signature:**

```typescript
function getBlockchainStatus(
  params: GetBlockchainStatusParameters
): Promise<GetBlockchainStatusReturnType>;

interface GetBlockchainStatusParameters {
  blockchainId: string;
}

interface GetBlockchainStatusReturnType {
  status: "Validating" | "Created" | "Preferred" | "Syncing" | "Unknown";
}
```

**Parameters:**

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `blockchainId` | `string` | Yes | The ID of the blockchain |

**Returns:**

| Type | Description |
| --- | --- |
| `GetBlockchainStatusReturnType` | Blockchain status object |

**Return Object:**

| Property | Type | Description |
| --- | --- | --- |
| `status` | `string` | Blockchain status: "Validating", "Created", "Preferred", "Syncing", or "Unknown" |

**Status Values:**

- **Validating**: The blockchain is being validated by this node
- **Created**: The blockchain exists but isn't being validated by this node
- **Preferred**: The blockchain was proposed to be created and is likely to be created, but the transaction isn't yet accepted
- **Syncing**: This node is participating in the blockchain as a non-validating node
- **Unknown**: The blockchain either wasn't proposed or the proposal isn't preferred

**Example:**

```typescript
const status = await client.pChain.getBlockchainStatus({
  blockchainId: "11111111111111111111111111111111LpoYY",
});

console.log("Blockchain status:", status.status);
```

**Related:**

- [API Reference](https://build.avax.network/docs/rpcs/p-chain#platformgetblockchainstatus)
- [getBlockchains](#getblockchains) - Get all blockchains

---

## Transaction Operations

### getTx

Get a transaction by its ID.

**Function Signature:**

```typescript
function getTx(params: GetTxParameters): Promise<GetTxReturnType>;

interface GetTxParameters {
  txID: string;
  encoding?: "hex" | "json";
}

interface GetTxReturnType {
  encoding: "hex" | "json";
  tx: string | object;
}
```

**Parameters:**

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `txID` | `string` | Yes | Transaction ID in CB58 format |
| `encoding` | `"hex" \| "json"` | No | Encoding format (defaults to "hex") |

**Returns:**

| Type | Description |
| --- | --- |
| `GetTxReturnType` | Transaction data object |

**Return Object:**

| Property | Type | Description |
| --- | --- | --- |
| `encoding` | `"hex" \| "json"` | Encoding format used |
| `tx` | `string \| object` | Transaction data in the specified encoding format |

**Example:**

```typescript
const tx = await client.pChain.getTx({
  txID: "11111111111111111111111111111111LpoYY",
  encoding: "hex",
});

console.log("Transaction:", tx);
```

**Related:**

- [API Reference](https://build.avax.network/docs/rpcs/p-chain#platformgettx)
- [getTxStatus](#gettxstatus) - Get transaction status
- [issueTx](#issuetx) - Issue a transaction

---

### getTxStatus

Get the status of a transaction.

**Function Signature:**

```typescript
function getTxStatus(
  params: GetTxStatusParameters
): Promise<GetTxStatusReturnType>;

interface GetTxStatusParameters {
  txID: string;
}

interface GetTxStatusReturnType {
  status: "Committed" | "Pending" | "Dropped" | "Unknown";
  reason?: string;
}
```

**Parameters:**

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `txID` | `string` | Yes | Transaction ID in CB58 format |

**Returns:**

| Type | Description |
| --- | --- |
| `GetTxStatusReturnType` | Transaction status object |

**Return Object:**

| Property | Type | Description |
| --- | --- | --- |
| `status` | `string` | Transaction status: "Committed", "Pending", "Dropped", or "Unknown" |
| `reason` | `string` | Optional reason for the status (if dropped) |

**Status Values:**

- **Committed**: The transaction is (or will be) accepted by every node
- **Pending**: The transaction is being voted on by this node
- **Dropped**: The transaction will never be accepted by any node in the network
- **Unknown**: The transaction hasn't been seen by this node

**Example:**

```typescript
const txStatus = await client.pChain.getTxStatus({
  txID: "11111111111111111111111111111111LpoYY",
});

console.log("Transaction status:", txStatus.status);
if (txStatus.reason) {
  console.log("Reason:", txStatus.reason);
}
```

**Related:**

- [API Reference](https://build.avax.network/docs/rpcs/p-chain#platformgettxstatus)
- [getTx](#gettx) - Get transaction details
- [issueTx](#issuetx) - Issue a transaction

---

### issueTx

Issue a transaction to the Platform Chain.
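A common pattern after issuing a transaction is to poll `getTxStatus` until it settles. The helper below sketches that loop with the status-fetching call injected so the retry logic is self-contained; the option names and defaults are illustrative, not part of the SDK:

```typescript
type TxStatus = "Committed" | "Pending" | "Dropped" | "Unknown";

interface PollOptions {
  intervalMs?: number;  // delay between polls
  maxAttempts?: number; // give up after this many polls
}

// Polls `getStatus` until the transaction leaves the "Pending"/"Unknown"
// states or the attempt budget is exhausted.
async function waitForSettled(
  getStatus: () => Promise<{ status: TxStatus; reason?: string }>,
  { intervalMs = 1000, maxAttempts = 30 }: PollOptions = {}
): Promise<TxStatus> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const { status } = await getStatus();
    if (status === "Committed" || status === "Dropped") return status;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  return "Unknown";
}
```

In practice the injected function would wrap `client.pChain.getTxStatus({ txID })` from the section above, with `txID` taken from the `issueTx` result.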
**Function Signature:**

```typescript
function issueTx(params: IssueTxParameters): Promise<IssueTxReturnType>;

interface IssueTxParameters {
  tx: string;
  encoding: "hex";
}

interface IssueTxReturnType {
  txID: string;
}
```

**Parameters:**

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `tx` | `string` | Yes | Transaction bytes in hex format |
| `encoding` | `"hex"` | Yes | Encoding format (must be "hex") |

**Returns:**

| Type | Description |
| --- | --- |
| `IssueTxReturnType` | Transaction ID object |

**Return Object:**

| Property | Type | Description |
| --- | --- | --- |
| `txID` | `string` | Transaction ID in CB58 format |

**Example:**

```typescript
const txID = await client.pChain.issueTx({
  tx: "0x00000009de31b4d8b22991d51aa6aa1fc733f23a851a8c9400000000000186a0...",
  encoding: "hex",
});

console.log("Transaction issued:", txID.txID);
```

**Related:**

- [API Reference](https://build.avax.network/docs/rpcs/p-chain#platformissuetx)
- [getTx](#gettx) - Get transaction details
- [getTxStatus](#gettxstatus) - Get transaction status

---

## Block Operations

### getProposedHeight

Get the proposed height of the P-Chain.

**Function Signature:**

```typescript
function getProposedHeight(): Promise<GetProposedHeightReturnType>;

interface GetProposedHeightReturnType {
  height: number;
}
```

**Parameters:**

No parameters required.
**Returns:**

| Type | Description |
| --- | --- |
| `GetProposedHeightReturnType` | Proposed height object |

**Return Object:**

| Property | Type | Description |
| --- | --- | --- |
| `height` | `number` | Proposed P-Chain block height |

**Example:**

```typescript
const proposedHeight = await client.pChain.getProposedHeight();

console.log("Proposed height:", proposedHeight.height);
```

**Related:**

- [API Reference](https://build.avax.network/docs/rpcs/p-chain#platformgetproposedheight)
- [getHeight](#getheight) - Get current accepted height

---

### getTimestamp

Get the current timestamp of the P-Chain.

**Function Signature:**

```typescript
function getTimestamp(): Promise<GetTimestampReturnType>;

interface GetTimestampReturnType {
  timestamp: string;
}
```

**Parameters:**

No parameters required.

**Returns:**

| Type | Description |
| --- | --- |
| `GetTimestampReturnType` | Timestamp object |

**Return Object:**

| Property | Type | Description |
| --- | --- | --- |
| `timestamp` | `string` | Current timestamp in ISO 8601 format |

**Example:**

```typescript
const timestamp = await client.pChain.getTimestamp();

console.log("Current timestamp:", timestamp.timestamp);
```

**Related:**

- [API Reference](https://build.avax.network/docs/rpcs/p-chain#platformgettimestamp)
- [getHeight](#getheight) - Get current height

---

## Fee Operations

### getFeeConfig

Get the fee configuration for the P-Chain.

**Function Signature:**

```typescript
function getFeeConfig(): Promise<GetFeeConfigReturnType>;

interface GetFeeConfigReturnType {
  weights: [
    bandwidth: number,
    dbRead: number,
    dbWrite: number,
    compute: number,
  ];
  maxCapacity: bigint;
  maxPerSecond: bigint;
  targetPerSecond: bigint;
  minPrice: bigint;
  excessConversionConstant: bigint;
}
```

**Parameters:**

No parameters required.
**Returns:**

| Type | Description |
| --- | --- |
| `GetFeeConfigReturnType` | Fee configuration object |

**Return Object:**

| Property | Type | Description |
| --- | --- | --- |
| `weights` | `array` | Fee weights: [bandwidth, dbRead, dbWrite, compute] |
| `maxCapacity` | `bigint` | Maximum capacity |
| `maxPerSecond` | `bigint` | Maximum per second |
| `targetPerSecond` | `bigint` | Target per second |
| `minPrice` | `bigint` | Minimum price |
| `excessConversionConstant` | `bigint` | Excess conversion constant |

**Example:**

```typescript
const feeConfig = await client.pChain.getFeeConfig();

console.log("Fee weights:", feeConfig.weights);
console.log("Max capacity:", feeConfig.maxCapacity);
console.log("Target per second:", feeConfig.targetPerSecond);
console.log("Min price:", feeConfig.minPrice);
```

**Related:**

- [API Reference](https://build.avax.network/docs/rpcs/p-chain#platformgetfeeconfig)
- [getFeeState](#getfeestate) - Get current fee state

---

### getFeeState

Get the current fee state of the P-Chain.

**Function Signature:**

```typescript
function getFeeState(): Promise<GetFeeStateReturnType>;

interface GetFeeStateReturnType {
  capacity: bigint;
  excess: bigint;
  price: bigint;
  timestamp: string;
}
```

**Parameters:**

No parameters required.
**Returns:**

| Type | Description |
| --- | --- |
| `GetFeeStateReturnType` | Fee state object |

**Return Object:**

| Property | Type | Description |
| --- | --- | --- |
| `capacity` | `bigint` | Current fee capacity |
| `excess` | `bigint` | Current fee excess |
| `price` | `bigint` | Current fee price |
| `timestamp` | `string` | Timestamp of the fee state |

**Example:**

```typescript
const feeState = await client.pChain.getFeeState();

console.log("Fee capacity:", feeState.capacity);
console.log("Fee excess:", feeState.excess);
console.log("Fee price:", feeState.price);
console.log("Timestamp:", feeState.timestamp);
```

**Related:**

- [API Reference](https://build.avax.network/docs/rpcs/p-chain#platformgetfeestate)
- [getFeeConfig](#getfeeconfig) - Get fee configuration

---

## Supply Operations

### getCurrentSupply

Get the current supply of AVAX tokens.

**Function Signature:**

```typescript
function getCurrentSupply(
  params?: GetCurrentSupplyParameters
): Promise<GetCurrentSupplyReturnType>;

interface GetCurrentSupplyParameters {
  subnetId?: string;
}

interface GetCurrentSupplyReturnType {
  supply: bigint;
}
```

**Parameters:**

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `subnetId` | `string` | No | Subnet ID (defaults to Primary Network if omitted) |

**Returns:**

| Type | Description |
| --- | --- |
| `GetCurrentSupplyReturnType` | Supply object |

**Return Object:**

| Property | Type | Description |
| --- | --- | --- |
| `supply` | `bigint` | Upper bound on the number of tokens that exist |

**Example:**

```typescript
// Get Primary Network supply
const supply = await client.pChain.getCurrentSupply();
console.log("Primary Network supply:", supply.supply);

// Get subnet-specific supply
const subnetSupply = await client.pChain.getCurrentSupply({
  subnetId: "11111111111111111111111111111111LpoYY",
});
console.log("Subnet supply:", subnetSupply.supply);
```

**Related:**

- [API Reference](https://build.avax.network/docs/rpcs/p-chain#platformgetcurrentsupply)
- [getBalance](#getbalance) - Get address balance

---

## Reward Operations

### getRewardUTXOs

Get the reward UTXOs for a transaction.

**Function Signature:**

```typescript
function getRewardUTXOs(
  params: GetRewardUTXOsParameters
): Promise<GetRewardUTXOsReturnType>;

interface GetRewardUTXOsParameters {
  txID: string;
  encoding?: "hex";
}

interface GetRewardUTXOsReturnType {
  numFetched: number;
  utxos: string[];
  encoding: "hex";
}
```

**Parameters:**

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `txID` | `string` | Yes | Transaction ID in CB58 format |
| `encoding` | `"hex"` | No | Encoding format (defaults to "hex") |

**Returns:**

| Type | Description |
| --- | --- |
| `GetRewardUTXOsReturnType` | Reward UTXOs object |

**Return Object:**

| Property | Type | Description |
| --- | --- | --- |
| `numFetched` | `number` | Number of reward UTXOs fetched |
| `utxos` | `string[]` | Array of reward UTXO bytes (hex encoded) |
| `encoding` | `"hex"` | Encoding format used |

**Example:**

```typescript
const rewardUTXOs = await client.pChain.getRewardUTXOs({
  txID: "11111111111111111111111111111111LpoYY",
  encoding: "hex",
});

console.log("Reward UTXOs fetched:", rewardUTXOs.numFetched);
console.log("UTXOs:", rewardUTXOs.utxos);
```

**Related:**

- [API Reference](https://build.avax.network/docs/rpcs/p-chain#platformgetrewardutxos)
- [getUTXOs](#getutxos) - Get UTXOs for addresses

---

## L1 Validator Operations

### getL1Validator

Get information about an L1 validator.
**Function Signature:**

```typescript
function getL1Validator(
  params: GetL1ValidatorParameters
): Promise<GetL1ValidatorReturnType>;

interface GetL1ValidatorParameters {
  validationID: string;
}

interface GetL1ValidatorReturnType {
  subnetID: string;
  nodeID: string;
  publicKey: string;
  remainingBalanceOwner: {
    addresses: string[];
    locktime: string;
    threshold: string;
  };
  deactivationOwner: {
    addresses: string[];
    locktime: string;
    threshold: string;
  };
  startTime: bigint;
  weight: bigint;
  minNonce?: bigint;
  balance?: bigint;
  height?: bigint;
}
```

**Parameters:**

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `validationID` | `string` | Yes | The ID for L1 subnet validator registration transaction |

**Returns:**

| Type | Description |
| --- | --- |
| `GetL1ValidatorReturnType` | L1 validator information object |

**Return Object:**

| Property | Type | Description |
| --- | --- | --- |
| `subnetID` | `string` | L1 subnet ID this validator is validating |
| `nodeID` | `string` | Node ID of the validator |
| `publicKey` | `string` | Compressed BLS public key of the validator |
| `remainingBalanceOwner` | `object` | Owner that will receive any withdrawn balance |
| `deactivationOwner` | `object` | Owner that can deactivate the validator |
| `startTime` | `bigint` | Unix timestamp when validator was added |
| `weight` | `bigint` | Weight used for consensus voting and ICM |
| `minNonce` | `bigint` | Minimum nonce for SetL1ValidatorWeightTx |
| `balance` | `bigint` | Current remaining balance for continuous fee |
| `height` | `bigint` | Height of the last accepted block |

**Example:**

```typescript
const validator = await client.pChain.getL1Validator({
  validationID: "11111111111111111111111111111111LpoYY",
});

console.log("Subnet ID:", validator.subnetID);
console.log("Node ID:", validator.nodeID);
console.log("Weight:", validator.weight);
console.log("Start time:", validator.startTime);
```

**Related:**

- [API Reference](https://build.avax.network/docs/rpcs/p-chain#platformgetl1validator)
- [getCurrentValidators](#getcurrentvalidators) - Get current validators

---

## Chain Validation

### validatedBy

Get the subnet that validates a given blockchain.

**Function Signature:**

```typescript
function validatedBy(
  params: ValidatedByParameters
): Promise<ValidatedByReturnType>;

interface ValidatedByParameters {
  blockchainID: string;
}

interface ValidatedByReturnType {
  subnetID: string;
}
```

**Parameters:**

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `blockchainID` | `string` | Yes | The blockchain's ID |

**Returns:**

| Type | Description |
| --- | --- |
| `ValidatedByReturnType` | Subnet ID object |

**Return Object:**

| Property | Type | Description |
| --- | --- | --- |
| `subnetID` | `string` | ID of the subnet that validates the blockchain |

**Example:**

```typescript
const validatedBy = await client.pChain.validatedBy({
  blockchainID: "11111111111111111111111111111111LpoYY",
});

console.log("Validated by subnet:", validatedBy.subnetID);
```

**Related:**

- [API Reference](https://build.avax.network/docs/rpcs/p-chain#platformvalidatedby)
- [validates](#validates) - Get blockchains validated by subnet

---

### validates

Get the IDs of the blockchains a subnet validates.
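`validatedBy` and `validates` are inverses: the first maps a blockchain to its subnet, the second a subnet to its blockchains. When many lookups are needed, the `validates` results can be folded into a local index to avoid repeated `validatedBy` round-trips. A hedged sketch (the helper name is illustrative, not part of the SDK):

```typescript
// Builds a blockchainID -> subnetID lookup from one or more validates()
// results, so each blockchain's validating subnet can be resolved locally.
function buildValidationIndex(
  entries: { subnetID: string; blockchainIDs: string[] }[]
): Map<string, string> {
  const index = new Map<string, string>();
  for (const { subnetID, blockchainIDs } of entries) {
    for (const blockchainID of blockchainIDs) {
      index.set(blockchainID, subnetID);
    }
  }
  return index;
}
```

Each entry would pair a queried `subnetID` with the `blockchainIDs` array resolved from `client.pChain.validates({ subnetID })`.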
**Function Signature:**

```typescript
function validates(params: ValidatesParameters): Promise<ValidatesReturnType>;

interface ValidatesParameters {
  subnetID: string;
}

interface ValidatesReturnType {
  blockchainIDs: string[];
}
```

**Parameters:**

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `subnetID` | `string` | Yes | The subnet's ID |

**Returns:**

| Type | Description |
| --- | --- |
| `ValidatesReturnType` | Blockchain IDs object |

**Return Object:**

| Property | Type | Description |
| --- | --- | --- |
| `blockchainIDs` | `string[]` | Array of blockchain IDs validated by the subnet |

**Example:**

```typescript
const validates = await client.pChain.validates({
  subnetID: "11111111111111111111111111111111LpoYY",
});

console.log("Number of blockchains:", validates.blockchainIDs.length);
validates.blockchainIDs.forEach((blockchainID) => {
  console.log("Blockchain ID:", blockchainID);
});
```

**Related:**

- [API Reference](https://build.avax.network/docs/rpcs/p-chain#platformvalidates)
- [validatedBy](#validatedby) - Get subnet validating blockchain

## Next Steps

- **[P-Chain Wallet Methods](../wallet-methods/p-chain-wallet)** - Transaction preparation and signing
- **[Wallet Client](../../clients/wallet-client)** - Complete wallet operations
- **[Account Management](../../accounts)** - Account types and management

# Public Methods (/docs/tooling/avalanche-sdk/client/methods/public-methods/public)

---
title: Public Methods
description: Complete reference for Avalanche-specific public client methods
---

## Overview

The Avalanche Client extends viem's Public Client with Avalanche-specific methods for querying fee information, chain configuration, and active rules. These methods are available on both the main Avalanche Client and the C-Chain client.

## Fee Operations

### baseFee

Get the base fee for the next block on the C-Chain.
**Function Signature:**

```typescript
function baseFee(): Promise<string>;
```

**Parameters:**

No parameters required.

**Returns:**

| Type | Description |
| --- | --- |
| `string` | Base fee for the next block as hex string (e.g., "0x3b9aca00") |

**Example:**

```typescript
import { createAvalancheClient } from "@avalanche-sdk/client";
import { avalanche } from "@avalanche-sdk/client/chains";

const client = createAvalancheClient({
  chain: avalanche,
  transport: { type: "http" },
});

const baseFee = await client.baseFee();
console.log("Base fee:", baseFee); // "0x3b9aca00"
```

**Related:**

- [API Reference](https://build.avax.network/docs/rpcs/c-chain#eth_basefee)
- [maxPriorityFeePerGas](#maxpriorityfeepergas) - Get max priority fee per gas

---

### maxPriorityFeePerGas

Get the maximum priority fee per gas for the next block.

**Function Signature:**

```typescript
function maxPriorityFeePerGas(): Promise<string>;
```

**Parameters:**

No parameters required.

**Returns:**

| Type | Description |
| --- | --- |
| `string` | Maximum priority fee per gas as hex string (e.g., "0x3b9aca00") |

**Example:**

```typescript
const maxPriorityFee = await client.maxPriorityFeePerGas();
console.log("Max priority fee per gas:", maxPriorityFee);

// Use in EIP-1559 transaction (convert the hex strings to bigint first)
const txHash = await walletClient.sendTransaction({
  to: "0x...",
  value: avaxToWei(1),
  maxFeePerGas: BigInt(baseFee) + BigInt(maxPriorityFee),
  maxPriorityFeePerGas: BigInt(maxPriorityFee),
});
```

**Related:**

- [API Reference](https://build.avax.network/docs/rpcs/c-chain#eth_maxpriorityfeepergas)
- [baseFee](#basefee) - Get base fee

---

### feeConfig

Get the fee configuration for a specific block. Returns fee settings and when they were last changed.
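Because `baseFee` and `maxPriorityFeePerGas` return hex strings, fee arithmetic needs an explicit conversion to `bigint` before building an EIP-1559 transaction. A minimal sketch (the helper names and the double-the-base-fee headroom heuristic are illustrative, not part of the SDK):

```typescript
// BigInt() accepts "0x"-prefixed hex strings directly.
function hexToWei(hex: string): bigint {
  return BigInt(hex);
}

// A common heuristic: budget for the base fee doubling before the
// transaction would be priced out, plus the priority tip.
function computeMaxFeePerGas(baseFeeHex: string, tipHex: string): bigint {
  return hexToWei(baseFeeHex) * 2n + hexToWei(tipHex);
}
```

The resulting `bigint` values can be passed as `maxFeePerGas` / `maxPriorityFeePerGas` to a viem wallet client's `sendTransaction`.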
**Function Signature:**

```typescript
function feeConfig(params: FeeConfigParameters): Promise<FeeConfigReturnType>;

interface FeeConfigParameters {
  blk?: string; // Block number or hash, defaults to "latest"
}

interface FeeConfigReturnType {
  feeConfig: {
    [key: string]: string;
  };
  lastChangedAt: string;
}
```

**Parameters:**

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `blk` | `string` | No | Block number or hash (hex string), defaults to "latest" |

**Returns:**

| Type | Description |
| --- | --- |
| `FeeConfigReturnType` | Fee configuration object |

**Return Object:**

| Property | Type | Description |
| --- | --- | --- |
| `feeConfig` | `object` | Fee configuration key-value pairs |
| `lastChangedAt` | `string` | Timestamp when fee config was last changed |

**Example:**

```typescript
// Get fee config for latest block
const feeConfig = await client.feeConfig({});
console.log("Fee config:", feeConfig.feeConfig);
console.log("Last changed:", feeConfig.lastChangedAt);

// Get fee config for specific block
const blockFeeConfig = await client.feeConfig({ blk: "0x123456" });
```

**Related:**

- [API Reference](https://build.avax.network/docs/rpcs/subnet-evm#eth_feeconfig)
- [baseFee](#basefee) - Get base fee

---

## Chain Configuration

### getChainConfig

Get the chain configuration for the C-Chain, including fork blocks and Avalanche-specific upgrade timestamps.
**Function Signature:**

```typescript
function getChainConfig(): Promise<GetChainConfigReturnType>;

interface GetChainConfigReturnType {
  chainId: number;
  homesteadBlock: number;
  daoForkBlock: number;
  daoForkSupport: boolean;
  eip150Block: number;
  eip150Hash: string;
  eip155Block: number;
  eip158Block: number;
  byzantiumBlock: number;
  constantinopleBlock: number;
  petersburgBlock: number;
  istanbulBlock: number;
  muirGlacierBlock: number;
  apricotPhase1BlockTimestamp: number;
  apricotPhase2BlockTimestamp: number;
  apricotPhase3BlockTimestamp: number;
  apricotPhase4BlockTimestamp: number;
  apricotPhase5BlockTimestamp: number;
}
```

**Parameters:**

No parameters required.

**Returns:**

| Type | Description |
| --- | --- |
| `GetChainConfigReturnType` | Chain configuration object |

**Return Object:**

| Property | Type | Description |
| --- | --- | --- |
| `chainId` | `number` | Chain ID |
| `homesteadBlock` | `number` | Homestead fork block |
| `daoForkBlock` | `number` | DAO fork block |
| `daoForkSupport` | `boolean` | DAO fork support flag |
| `eip150Block` | `number` | EIP-150 fork block |
| `eip150Hash` | `string` | EIP-150 fork hash |
| `eip155Block` | `number` | EIP-155 fork block |
| `eip158Block` | `number` | EIP-158 fork block |
| `byzantiumBlock` | `number` | Byzantium fork block |
| `constantinopleBlock` | `number` | Constantinople fork block |
| `petersburgBlock` | `number` | Petersburg fork block |
| `istanbulBlock` | `number` | Istanbul fork block |
| `muirGlacierBlock` | `number` | Muir Glacier fork block |
| `apricotPhase1BlockTimestamp` | `number` | Apricot Phase 1 upgrade timestamp |
| `apricotPhase2BlockTimestamp` | `number` | Apricot Phase 2 upgrade timestamp |
| `apricotPhase3BlockTimestamp` | `number` | Apricot Phase 3 upgrade timestamp |
| `apricotPhase4BlockTimestamp` | `number` | Apricot Phase 4 upgrade timestamp |
| `apricotPhase5BlockTimestamp` | `number` | Apricot Phase 5 upgrade timestamp |

**Example:**

```typescript
const chainConfig = await client.getChainConfig();

console.log("Chain ID:", chainConfig.chainId);
console.log("Apricot Phase 1:", chainConfig.apricotPhase1BlockTimestamp);
```

**Related:**

- [API Reference](https://build.avax.network/docs/rpcs/c-chain#eth_getchainconfig)

---

## Active Rules

### getActiveRulesAt

Get the active rules (EIPs, precompiles) at a specific timestamp. Useful for determining which features are enabled at a given time.

**Function Signature:**

```typescript
function getActiveRulesAt(
  params: GetActiveRulesAtParameters
): Promise<GetActiveRulesAtReturnType>;

interface GetActiveRulesAtParameters {
  timestamp: string; // Unix timestamp as hex string or "latest"
}

interface GetActiveRulesAtReturnType {
  ethRules: Map<string, boolean>;
  avalancheRules: Map<string, boolean>;
  precompiles: Map<string, object>;
}
```

**Parameters:**

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `timestamp` | `string` | Yes | Unix timestamp as hex string (e.g., "0x1234567890") or "latest" |

**Returns:**

| Type | Description |
| --- | --- |
| `GetActiveRulesAtReturnType` | Active rules object |

**Return Object:**

| Property | Type | Description |
| --- | --- | --- |
| `ethRules` | `Map<string, boolean>` | Active Ethereum rules (EIPs) |
| `avalancheRules` | `Map<string, boolean>` | Active Avalanche-specific rules |
| `precompiles` | `Map<string, object>` | Active precompiles with their configurations |

**Example:**

```typescript
// Get active rules at current time
const activeRules = await client.getActiveRulesAt({
  timestamp: "latest",
});

console.log("Ethereum rules:", Array.from(activeRules.ethRules.keys()));
console.log("Avalanche rules:", Array.from(activeRules.avalancheRules.keys()));
console.log("Precompiles:", Array.from(activeRules.precompiles.keys()));

// Get active rules at specific timestamp
const historicalRules = await client.getActiveRulesAt({
  timestamp: "0x1234567890",
});
```

**Related:**

- [API Reference](https://build.avax.network/docs/rpcs/subnet-evm#eth_getactiverulesat)

---

## Viem Integration

The Avalanche Client extends viem's Public Client, providing access to all standard Ethereum RPC methods. For complete method reference, see:

- **[viem Documentation](https://viem.sh/docs)** - Complete EVM method reference
- **[viem Actions](https://viem.sh/docs/actions/public)** - Public client actions
- **[viem Utilities](https://viem.sh/docs/utilities)** - Utility functions

## Next Steps

- **[C-Chain Methods](c-chain)** - C-Chain-specific methods
- **[Wallet Client](../clients/wallet-client)** - Transaction operations
- **[Account Management](../../accounts)** - Account types and management

# X-Chain Methods (/docs/tooling/avalanche-sdk/client/methods/public-methods/x-chain)

---
title: X-Chain Methods
description: Complete reference for X-Chain (Exchange Chain) methods
---

## Overview

The X-Chain (Exchange Chain) is Avalanche's DAG-based chain designed for creating and trading digital smart assets. It handles asset creation, transfers, UTXO management, and provides the foundation for the Avalanche ecosystem.

This reference covers all read-only X-Chain operations available through the Avalanche Client SDK.

## Balance Operations

### getBalance

Get the balance of a specific asset controlled by a given address.
**Function Signature:**

```typescript
function getBalance(
  params: GetBalanceParameters
): Promise<GetBalanceReturnType>;

interface GetBalanceParameters {
  address: string;
  assetID: string;
}

interface GetBalanceReturnType {
  balance: bigint;
  utxoIDs: {
    txID: string;
    outputIndex: number;
  }[];
}
```

**Parameters:**

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `address` | `string` | Yes | X-Chain address to query |
| `assetID` | `string` | Yes | Asset ID to query |

**Returns:**

| Type | Description |
| --- | --- |
| `GetBalanceReturnType` | Balance information object |

**Return Object:**

| Property | Type | Description |
| --- | --- | --- |
| `balance` | `bigint` | Balance amount for the specified asset |
| `utxoIDs` | `array` | Array of UTXO IDs referencing the address |

**Example:**

```typescript
import { createAvalancheClient } from "@avalanche-sdk/client";
import { avalanche } from "@avalanche-sdk/client/chains";

const client = createAvalancheClient({
  chain: avalanche,
  transport: { type: "http" },
});

const balance = await client.xChain.getBalance({
  address: "X-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5",
  assetID: "FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z", // AVAX
});

console.log("Balance:", balance.balance);
console.log("UTXO IDs:", balance.utxoIDs);
```

**Related:**

- [API Reference](https://build.avax.network/docs/rpcs/x-chain#avmgetbalance)
- [getAllBalances](#getallbalances) - Get balances for all assets
- [getUTXOs](#getutxos) - Get UTXOs for addresses

---

### getAllBalances

Get the balances of all assets controlled by given addresses.
**Function Signature:** ```typescript function getAllBalances( params: GetAllBalancesParameters ): Promise<GetAllBalancesReturnType>; interface GetAllBalancesParameters { addresses: string[]; } interface GetAllBalancesReturnType { balances: Array<{ assetID: string; balance: bigint; }>; } ``` **Parameters:** | Name | Type | Required | Description | | ----------- | ---------- | -------- | ----------------------------------- | | `addresses` | `string[]` | Yes | Array of X-Chain addresses to query | **Returns:** | Type | Description | | -------------------------- | --------------------- | | `GetAllBalancesReturnType` | Balances array object | **Return Object:** | Property | Type | Description | | ---------- | ------- | ------------------------------------------------- | | `balances` | `array` | Array of balance objects with assetID and balance | **Example:** ```typescript const allBalances = await client.xChain.getAllBalances({ addresses: ["X-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5"], }); // Iterate over all assets allBalances.balances.forEach(({ assetID, balance }) => { console.log(`Asset ${assetID}: ${balance}`); }); ``` **Related:** - [API Reference](https://build.avax.network/docs/rpcs/x-chain#avmgetallbalances) - [getBalance](#getbalance) - Get balance for specific asset - [getAssetDescription](#getassetdescription) - Get asset information --- ## Asset Operations ### getAssetDescription Get information about an asset.
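Raw balances are integers in the asset's smallest unit; the `denomination` returned here gives the number of decimal places needed to display them. A minimal formatting sketch using only built-in bigint math (`formatAssetAmount` is a hypothetical helper for illustration, not an SDK export):

```typescript
// Format a raw integer asset amount using its denomination.
// Hypothetical helper for illustration; not part of the SDK.
function formatAssetAmount(raw: bigint, denomination: number): string {
  if (denomination === 0) return raw.toString();
  const base = 10n ** BigInt(denomination);
  const whole = raw / base;
  const frac = (raw % base).toString().padStart(denomination, "0");
  // Trim trailing zeros from the fractional part for readability
  const trimmed = frac.replace(/0+$/, "");
  return trimmed.length > 0 ? `${whole}.${trimmed}` : whole.toString();
}

// AVAX has denomination 9, so 1_500_000_000 base units display as 1.5
console.log(formatAssetAmount(1_500_000_000n, 9)); // "1.5"
```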
**Function Signature:** ```typescript function getAssetDescription( params: GetAssetDescriptionParameters ): Promise<GetAssetDescriptionReturnType>; interface GetAssetDescriptionParameters { assetID: string; } interface GetAssetDescriptionReturnType { assetID: string; name: string; symbol: string; denomination: number; } ``` **Parameters:** | Name | Type | Required | Description | | --------- | -------- | -------- | ----------- | | `assetID` | `string` | Yes | Asset ID | **Returns:** | Type | Description | | ------------------------------- | ------------------------ | | `GetAssetDescriptionReturnType` | Asset information object | **Return Object:** | Property | Type | Description | | -------------- | -------- | --------------------------------------------- | | `assetID` | `string` | Asset ID | | `name` | `string` | Asset name | | `symbol` | `string` | Asset symbol | | `denomination` | `number` | Asset denomination (number of decimal places) | **Example:** ```typescript const asset = await client.xChain.getAssetDescription({ assetID: "FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z", }); console.log("Asset name:", asset.name); console.log("Asset symbol:", asset.symbol); console.log("Denomination:", asset.denomination); ``` **Related:** - [API Reference](https://build.avax.network/docs/rpcs/x-chain#avmgetassetdescription) - [getBalance](#getbalance) - Get balance for this asset --- ## Block Operations ### getHeight Get the current block height of the X-Chain. **Function Signature:** ```typescript function getHeight(): Promise<GetHeightReturnType>; interface GetHeightReturnType { height: number; } ``` **Parameters:** No parameters required.
**Returns:** | Type | Description | | --------------------- | ------------- | | `GetHeightReturnType` | Height object | **Return Object:** | Property | Type | Description | | -------- | -------- | ---------------------------- | | `height` | `number` | Current X-Chain block height | **Example:** ```typescript const height = await client.xChain.getHeight(); console.log("Current X-Chain height:", height.height); ``` **Related:** - [API Reference](https://build.avax.network/docs/rpcs/x-chain#avmgetheight) - [getBlockByHeight](#getblockbyheight) - Get block at height --- ### getBlockByHeight Get a block by its height. **Function Signature:** ```typescript function getBlockByHeight( params: GetBlockByHeightParameters ): Promise<GetBlockByHeightReturnType>; interface GetBlockByHeightParameters { height: number; encoding?: "hex" | "json"; } interface GetBlockByHeightReturnType { encoding: "hex" | "json"; block: string | object; } ``` **Parameters:** | Name | Type | Required | Description | | ---------- | ----------------- | -------- | ------------------------------------ | | `height` | `number` | Yes | Block height | | `encoding` | `"hex" \| "json"` | No | Encoding format (defaults to "json") | **Returns:** | Type | Description | | ---------------------------- | ----------------- | | `GetBlockByHeightReturnType` | Block data object | **Return Object:** | Property | Type | Description | | ---------- | ------------------ | ------------------------------------------- | | `encoding` | `"hex" \| "json"` | Encoding format used | | `block` | `string \| object` | Block data in the specified encoding format | **Example:** ```typescript const block = await client.xChain.getBlockByHeight({ height: 12345, encoding: "hex", }); console.log("Block data:", block.block); ``` **Related:** - [API Reference](https://build.avax.network/docs/rpcs/x-chain#avmgetblockbyheight) - [getBlock](#getblock) - Get block by ID - [getHeight](#getheight) - Get current height --- ### getBlock Get a block by its ID.
**Function Signature:** ```typescript function getBlock(params: GetBlockParameters): Promise<GetBlockReturnType>; interface GetBlockParameters { blockId: string; encoding?: "hex" | "json"; } interface GetBlockReturnType { encoding: "hex" | "json"; block: string | object; } ``` **Parameters:** | Name | Type | Required | Description | | ---------- | ----------------- | -------- | ------------------------------------ | | `blockId` | `string` | Yes | Block ID in CB58 format | | `encoding` | `"hex" \| "json"` | No | Encoding format (defaults to "json") | **Returns:** | Type | Description | | -------------------- | ----------------- | | `GetBlockReturnType` | Block data object | **Return Object:** | Property | Type | Description | | ---------- | ------------------ | ------------------------------------------- | | `encoding` | `"hex" \| "json"` | Encoding format used | | `block` | `string \| object` | Block data in the specified encoding format | **Example:** ```typescript const block = await client.xChain.getBlock({ blockId: "block-id-in-cb58-format", encoding: "hex", }); console.log("Block:", block.block); ``` **Related:** - [API Reference](https://build.avax.network/docs/rpcs/x-chain#avmgetblock) - [getBlockByHeight](#getblockbyheight) - Get block by height --- ## Transaction Operations ### getTx Get a transaction by its ID.
**Function Signature:** ```typescript function getTx(params: GetTxParameters): Promise<GetTxReturnType>; interface GetTxParameters { txID: string; encoding?: "hex" | "json"; } interface GetTxReturnType { encoding: "hex" | "json"; tx: string | object; } ``` **Parameters:** | Name | Type | Required | Description | | ---------- | ----------------- | -------- | ------------------------------------ | | `txID` | `string` | Yes | Transaction ID in CB58 format | | `encoding` | `"hex" \| "json"` | No | Encoding format (defaults to "json") | **Returns:** | Type | Description | | ----------------- | ----------------------- | | `GetTxReturnType` | Transaction data object | **Return Object:** | Property | Type | Description | | ---------- | ------------------ | ------------------------------------------------- | | `encoding` | `"hex" \| "json"` | Encoding format used | | `tx` | `string \| object` | Transaction data in the specified encoding format | **Example:** ```typescript const tx = await client.xChain.getTx({ txID: "transaction-id", encoding: "hex", }); console.log("Transaction:", tx.tx); ``` **Related:** - [API Reference](https://build.avax.network/docs/rpcs/x-chain#avmgettx) - [getTxStatus](#gettxstatus) - Get transaction status - [issueTx](#issuetx) - Submit transaction --- ### getTxStatus Get the status of a transaction.
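A transaction typically reports "Processing" until it is accepted or rejected, so callers often poll the status until a terminal state is reached. A hedged sketch with the status fetcher injected as a callback so the loop is self-contained (in practice `fetchStatus` would wrap `client.xChain.getTxStatus({ txID })`; the retry budget and delay values are illustrative):

```typescript
type TxStatus = "Accepted" | "Processing" | "Rejected" | "Unknown";

// Poll an injected status fetcher until the transaction reaches a
// terminal state or the attempt budget is exhausted.
// Illustrative pattern; fetchStatus would wrap getTxStatus in real code.
async function waitForTerminalStatus(
  fetchStatus: () => Promise<TxStatus>,
  maxAttempts = 10,
  delayMs = 1000
): Promise<TxStatus> {
  for (let i = 0; i < maxAttempts; i++) {
    const status = await fetchStatus();
    if (status === "Accepted" || status === "Rejected") return status;
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  return "Unknown";
}
```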
**Function Signature:** ```typescript function getTxStatus( params: GetTxStatusParameters ): Promise<GetTxStatusReturnType>; interface GetTxStatusParameters { txID: string; includeReason?: boolean; } interface GetTxStatusReturnType { status: "Accepted" | "Processing" | "Rejected" | "Unknown"; reason?: string; } ``` **Parameters:** | Name | Type | Required | Description | | --------------- | --------- | -------- | ------------------------------------------------------------------ | | `txID` | `string` | Yes | Transaction ID in CB58 format | | `includeReason` | `boolean` | No | Whether to include the reason for the status (defaults to `true`) | **Returns:** | Type | Description | | ----------------------- | ------------------------- | | `GetTxStatusReturnType` | Transaction status object | **Return Object:** | Property | Type | Description | | -------- | -------- | ---------------------------------------------------------------------- | | `status` | `string` | Transaction status: "Accepted", "Processing", "Rejected", or "Unknown" | | `reason` | `string` | Optional reason for the status (if rejected) | **Status Values:** - **Accepted**: Transaction has been accepted and included in a block - **Processing**: Transaction is being processed - **Rejected**: Transaction was rejected - **Unknown**: Transaction status cannot be determined **Example:** ```typescript const status = await client.xChain.getTxStatus({ txID: "transaction-id", }); if (status.status === "Accepted") { console.log("Transaction accepted!"); } else if (status.status === "Rejected") { console.log("Transaction rejected"); if (status.reason) { console.log("Reason:", status.reason); } } else { console.log("Transaction status:", status.status); } ``` **Related:** - [API Reference](https://build.avax.network/docs/rpcs/x-chain#avmgettxstatus) - [getTx](#gettx) - Get transaction details --- ### getTxFee Get the transaction fee for the X-Chain.
**Function Signature:** ```typescript function getTxFee(): Promise<GetTxFeeReturnType>; interface GetTxFeeReturnType { txFee: number; createAssetTxFee: number; } ``` **Parameters:** No parameters required. **Returns:** | Type | Description | | -------------------- | ---------------------- | | `GetTxFeeReturnType` | Transaction fee object | **Return Object:** | Property | Type | Description | | ------------------ | -------- | ---------------------------- | | `txFee` | `number` | Standard transaction fee | | `createAssetTxFee` | `number` | Fee for creating a new asset | **Example:** ```typescript const fees = await client.xChain.getTxFee(); console.log("Standard transaction fee:", fees.txFee); console.log("Create asset fee:", fees.createAssetTxFee); ``` **Related:** - [API Reference](https://build.avax.network/docs/rpcs/x-chain#avmgettxfee) --- ## UTXO Operations ### getUTXOs Get the UTXOs controlled by a set of addresses. **Function Signature:** ```typescript function getUTXOs(params: GetUTXOsParameters): Promise<GetUTXOsReturnType>; interface GetUTXOsParameters { addresses: string[]; sourceChain?: string; limit?: number; startIndex?: { address: string; utxo: string; }; encoding?: "hex"; } interface GetUTXOsReturnType { numFetched: number; utxos: string[]; endIndex: { address: string; utxo: string; }; sourceChain?: string; encoding: "hex"; } ``` **Parameters:** | Name | Type | Required | Description | | ------------- | ----------------------------------- | -------- | ----------------------------------------------- | | `addresses` | `string[]` | Yes | Array of X-Chain addresses | | `sourceChain` | `string` | No | Source chain ID (e.g., "P" for P-Chain) | | `limit` | `number` | No | Maximum number of UTXOs to return | | `startIndex` | `{ address: string; utxo: string }` | No | Pagination cursor for next page | | `encoding` | `"hex"` | No | Encoding format (can only be "hex" if provided) | **Returns:** | Type | Description | | -------------------- | ------------------------- | | `GetUTXOsReturnType` | UTXO
data with pagination | **Return Object:** | Property | Type | Description | | ------------- | ----------------------------------- | ---------------------------------------- | | `numFetched` | `number` | Number of UTXOs fetched in this response | | `utxos` | `string[]` | Array of UTXO bytes (hex encoded) | | `endIndex` | `{ address: string; utxo: string }` | Pagination cursor for fetching next page | | `sourceChain` | `string` | Source chain ID (if specified) | | `encoding` | `"hex"` | Encoding format used | **Example:** ```typescript // Get first page const utxos = await client.xChain.getUTXOs({ addresses: ["X-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5"], limit: 100, }); console.log("Number of UTXOs:", utxos.numFetched); // Get next page if needed if (utxos.endIndex) { const moreUTXOs = await client.xChain.getUTXOs({ addresses: ["X-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5"], startIndex: utxos.endIndex, limit: 100, }); } ``` **Related:** - [API Reference](https://build.avax.network/docs/rpcs/x-chain#avmgetutxos) - [getBalance](#getbalance) - Get balance summary --- ## Transaction Submission ### issueTx Issue a transaction to the X-Chain. 
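`issueTx` expects the signed transaction bytes as a 0x-prefixed hex string. A quick local sanity check before broadcasting can catch truncated payloads early (`isHexTxBytes` is a hypothetical pre-flight helper, not an SDK export; the node still performs full validation):

```typescript
// Check that a payload looks like 0x-prefixed hex encoding a whole
// number of bytes before handing it to issueTx.
// Hypothetical pre-flight helper, not part of the SDK.
function isHexTxBytes(tx: string): boolean {
  return /^0x(?:[0-9a-fA-F]{2})+$/.test(tx);
}

console.log(isHexTxBytes("0x00000009de31")); // true  (six full bytes)
console.log(isHexTxBytes("0x123"));          // false (odd digit count)
```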
**Function Signature:** ```typescript function issueTx(params: IssueTxParameters): Promise<IssueTxReturnType>; interface IssueTxParameters { tx: string; encoding: "hex"; } interface IssueTxReturnType { txID: string; } ``` **Parameters:** | Name | Type | Required | Description | | ---------- | -------- | -------- | ------------------------------- | | `tx` | `string` | Yes | Transaction bytes in hex format | | `encoding` | `"hex"` | Yes | Encoding format (must be "hex") | **Returns:** | Type | Description | | ------------------- | --------------------- | | `IssueTxReturnType` | Transaction ID object | **Return Object:** | Property | Type | Description | | -------- | -------- | ----------------------------- | | `txID` | `string` | Transaction ID in CB58 format | **Example:** ```typescript const txID = await client.xChain.issueTx({ tx: "0x00000009de31b4d8b22991d51aa6aa1fc733f23a851a8c9400000000000186a0...", encoding: "hex", }); console.log("Transaction ID:", txID.txID); ``` **Related:** - [API Reference](https://build.avax.network/docs/rpcs/x-chain#avmissuetx) - [getTxStatus](#gettxstatus) - Check transaction status --- ## Genesis Operations ### buildGenesis Build a genesis block for a custom blockchain.
**Function Signature:** ```typescript function buildGenesis( params: BuildGenesisParameters ): Promise<BuildGenesisReturnType>; interface BuildGenesisParameters { networkID: number; genesisData: { name: string; symbol: string; denomination: number; initialState: { fixedCap: { amount: number; addresses: string[]; }; }; }; encoding: "hex"; } interface BuildGenesisReturnType { bytes: string; encoding: "hex"; } ``` **Parameters:** | Name | Type | Required | Description | | ------------- | -------- | -------- | ------------------------------------------- | | `networkID` | `number` | Yes | Network ID | | `genesisData` | `object` | Yes | Genesis block data with asset configuration | | `encoding` | `"hex"` | Yes | Encoding format (must be "hex") | **Returns:** | Type | Description | | ------------------------ | -------------------------- | | `BuildGenesisReturnType` | Genesis block bytes object | **Return Object:** | Property | Type | Description | | ---------- | -------- | --------------------------------- | | `bytes` | `string` | Genesis block bytes in hex format | | `encoding` | `"hex"` | Encoding format used | **Example:** ```typescript const genesis = await client.xChain.buildGenesis({ networkID: 16, genesisData: { name: "myFixedCapAsset", symbol: "MFCA", denomination: 0, initialState: { fixedCap: { amount: 100000, addresses: ["X-custom18jma8ppw3nhx5r4ap8clazz0dps7rv5u9xde7p"], }, }, }, encoding: "hex", }); console.log("Genesis bytes:", genesis.bytes); ``` **Related:** - [API Reference](https://build.avax.network/docs/rpcs/x-chain#avmbuildgenesis) --- ## Next Steps - **[X-Chain Wallet Methods](../wallet-methods/x-chain-wallet)** - Transaction preparation and signing - **[Wallet Client](../../clients/wallet-client)** - Complete wallet operations - **[Account Management](../../accounts)** - Account types and management # C-Chain Wallet Methods (/docs/tooling/avalanche-sdk/client/methods/wallet-methods/c-chain-wallet) --- title: C-Chain Wallet Methods description: Complete reference for C-Chain
atomic transaction methods --- ## Overview The C-Chain Wallet Methods provide transaction preparation capabilities for atomic cross-chain transfers between the C-Chain and other Avalanche chains (P-Chain and X-Chain). These methods handle the export and import of native AVAX via atomic transactions. **Access:** `walletClient.cChain` ## prepareExportTxn Prepare a transaction to export AVAX from C-Chain to another chain (P-Chain or X-Chain). **Function Signature:** ```typescript function prepareExportTxn( params: PrepareExportTxnParameters ): Promise<PrepareExportTxnReturnType>; interface PrepareExportTxnParameters { destinationChain: "P" | "X"; fromAddress: string; exportedOutput: { addresses: string[]; amount: bigint; locktime?: bigint; threshold?: number; }; context?: Context; } interface PrepareExportTxnReturnType { tx: UnsignedTx; exportTx: ExportTx; chainAlias: "C"; } ``` **Parameters:** | Name | Type | Required | Description | | ------------------ | ------------ | -------- | --------------------------------------------------- | | `destinationChain` | `"P" \| "X"` | Yes | Chain alias to export funds to (P-Chain or X-Chain) | | `fromAddress` | `string` | Yes | EVM address to export funds from | | `exportedOutput` | `object` | Yes | Consolidated exported output (UTXO) | | `context` | `Context` | No | Optional context for the transaction | **Exported Output Object:** | Name | Type | Required | Description | | ----------- | ---------- | -------- | ------------------------------------------------------------------ | | `addresses` | `string[]` | Yes | Addresses who can sign the consuming of this UTXO | | `amount` | `bigint` | Yes | Amount in nano AVAX held by this exported output | | `locktime` | `bigint` | No | Timestamp in seconds after which this UTXO can be consumed | | `threshold` | `number` | No | Threshold of `addresses`' signatures required to consume this UTXO | **Returns:** | Type | Description | | ---------------------------- | ------------------------- | |
`PrepareExportTxnReturnType` | Export transaction object | **Return Object:** | Property | Type | Description | | ------------ | ------------ | ------------------------------- | | `tx` | `UnsignedTx` | The unsigned transaction | | `exportTx` | `ExportTx` | The export transaction instance | | `chainAlias` | `"C"` | The chain alias | **Example:** ```typescript import { createAvalancheWalletClient } from "@avalanche-sdk/client"; import { privateKeyToAvalancheAccount } from "@avalanche-sdk/client/accounts"; import { avalanche } from "@avalanche-sdk/client/chains"; import { avaxToNanoAvax } from "@avalanche-sdk/client/utils"; const account = privateKeyToAvalancheAccount("0x..."); const walletClient = createAvalancheWalletClient({ account, chain: avalanche, transport: { type: "http" }, }); // Export from C-Chain to P-Chain const exportTx = await walletClient.cChain.prepareExportTxn({ destinationChain: "P", fromAddress: account.getEVMAddress(), exportedOutput: { addresses: [account.getXPAddress("P")], amount: avaxToNanoAvax(0.001), }, }); // Sign and send const { txHash } = await walletClient.sendXPTransaction({ txOrTxHex: exportTx, chainAlias: "C", }); console.log("Export transaction hash:", txHash); ``` **Related:** - [prepareImportTxn](#prepareimporttxn) - Import to C-Chain - [Wallet send method](./wallet#send) - Simplified cross-chain transfers - [getAtomicTxStatus](../public-methods/c-chain#getatomictxstatus) - Check transaction status --- ## prepareImportTxn Prepare a transaction to import AVAX from another chain (P-Chain or X-Chain) to C-Chain. 
**Function Signature:** ```typescript function prepareImportTxn( params: PrepareImportTxnParameters ): Promise<PrepareImportTxnReturnType>; interface PrepareImportTxnParameters { account?: AvalancheAccount | Address | undefined; sourceChain: "P" | "X"; toAddress: string; fromAddresses?: string[]; utxos?: Utxo[]; context?: Context; } interface PrepareImportTxnReturnType { tx: UnsignedTx; importTx: ImportTx; chainAlias: "C"; } ``` **Parameters:** | Name | Type | Required | Description | | --------------- | ----------------------------- | -------- | ------------------------------------------------------------------------------- | | `account` | `AvalancheAccount \| Address` | No | Account to use for the transaction | | `sourceChain` | `"P" \| "X"` | Yes | Chain alias to import funds from (P-Chain or X-Chain) | | `toAddress` | `string` | Yes | EVM address to import funds to | | `fromAddresses` | `string[]` | No | Addresses to import funds from (auto-fetched if not provided) | | `utxos` | `Utxo[]` | No | UTXOs to use as inputs (must be in atomic memory, auto-fetched if not provided) | | `context` | `Context` | No | Optional context for the transaction | **Returns:** | Type | Description | | ---------------------------- | ------------------------- | | `PrepareImportTxnReturnType` | Import transaction object | **Return Object:** | Property | Type | Description | | ------------ | ------------ | ------------------------------- | | `tx` | `UnsignedTx` | The unsigned transaction | | `importTx` | `ImportTx` | The import transaction instance | | `chainAlias` | `"C"` | The chain alias | **Example:** ```typescript const importTx = await walletClient.cChain.prepareImportTxn({ sourceChain: "P", toAddress: account.getEVMAddress(), fromAddresses: [account.getXPAddress("P")], }); // Sign and send const { txHash } = await walletClient.sendXPTransaction({ txOrTxHex: importTx, chainAlias: "C", }); console.log("Import transaction hash:", txHash); ``` **Related:** - [prepareExportTxn](#prepareexporttxn) - Export from
C-Chain - [getAtomicTxStatus](../public-methods/c-chain#getatomictxstatus) - Check transaction status --- ## Complete Cross-Chain Transfer Workflow ### Export from C-Chain to P-Chain ```typescript import { createAvalancheWalletClient } from "@avalanche-sdk/client"; import { privateKeyToAvalancheAccount } from "@avalanche-sdk/client/accounts"; import { avalanche } from "@avalanche-sdk/client/chains"; import { avaxToNanoAvax } from "@avalanche-sdk/client/utils"; const account = privateKeyToAvalancheAccount("0x..."); const walletClient = createAvalancheWalletClient({ account, chain: avalanche, transport: { type: "http" }, }); // 1. Export from C-Chain const exportTx = await walletClient.cChain.prepareExportTxn({ destinationChain: "P", fromAddress: account.getEVMAddress(), exportedOutput: { addresses: [account.getXPAddress("P")], amount: avaxToNanoAvax(1.0), }, }); // 2. Sign and send export transaction const { txHash: exportTxHash } = await walletClient.sendXPTransaction({ txOrTxHex: exportTx, chainAlias: "C", }); // 3. Wait for export to be committed await walletClient.waitForTxn({ txHash: exportTxHash, chainAlias: "C", }); console.log("Export completed:", exportTxHash); // 4. 
Import to P-Chain const importTx = await walletClient.pChain.prepareImportTxn({ sourceChain: "C", toAddresses: [account.getXPAddress("P")], }); const { txHash: importTxHash } = await walletClient.sendXPTransaction({ txOrTxHex: importTx, chainAlias: "P", }); console.log("Import completed:", importTxHash); ``` --- ## Next Steps - **[Wallet Methods](./wallet)** - General wallet operations - **[P-Chain Wallet Methods](./p-chain-wallet)** - P-Chain transaction preparation - **[X-Chain Wallet Methods](./x-chain-wallet)** - X-Chain transaction preparation - **[Account Management](../accounts)** - Account types and management # P-Chain Wallet Methods (/docs/tooling/avalanche-sdk/client/methods/wallet-methods/p-chain-wallet) --- title: P-Chain Wallet Methods description: Complete reference for P-Chain transaction preparation methods --- ## Overview The P-Chain Wallet Methods provide transaction preparation capabilities for the Platform Chain. These methods allow you to create unsigned transactions for various operations including base transfers, validator operations, delegator operations, subnet management, and cross-chain transfers. **Access:** `walletClient.pChain` ## prepareBaseTxn Prepare a base P-Chain transaction for transferring AVAX. 
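All P-Chain amounts in these methods are denominated in nano AVAX (1 AVAX = 10^9 nAVAX), which is why the examples use `avaxToNanoAvax` from `@avalanche-sdk/client/utils`. A local sketch of that scaling, assuming simple human-entered decimal amounts (prefer the SDK utility in real code):

```typescript
// Convert a decimal AVAX amount to nano AVAX (1 AVAX = 1e9 nAVAX).
// Sketch of the scaling only -- use the SDK's avaxToNanoAvax in real code.
function toNanoAvax(avax: number): bigint {
  // Round to absorb floating-point drift for typical human-entered amounts
  return BigInt(Math.round(avax * 1e9));
}

console.log(toNanoAvax(1));     // 1000000000n
console.log(toNanoAvax(0.001)); // 1000000n
```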
**Function Signature:** ```typescript function prepareBaseTxn( params: PrepareBaseTxnParameters ): Promise<PrepareBaseTxnReturnType>; interface PrepareBaseTxnParameters { outputs?: Output[]; fromAddresses?: string[]; changeAddresses?: string[]; utxos?: Utxo[]; memo?: string; minIssuanceTime?: bigint; context?: Context; } interface Output { addresses: string[]; amount: bigint; assetId?: string; locktime?: bigint; threshold?: number; } interface PrepareBaseTxnReturnType { tx: UnsignedTx; baseTx: BaseTx; chainAlias: "P"; } ``` **Parameters:** | Name | Type | Required | Description | | ----------------- | ---------- | -------- | --------------------------------------------- | | `outputs` | `Output[]` | No | Array of outputs to send funds to | | `fromAddresses` | `string[]` | No | Addresses to send funds from | | `changeAddresses` | `string[]` | No | Addresses to receive change | | `utxos` | `Utxo[]` | No | UTXOs to use as inputs | | `memo` | `string` | No | Transaction memo | | `minIssuanceTime` | `bigint` | No | Earliest time this transaction can be issued | | `context` | `Context` | No | Transaction context (auto-fetched if omitted) | **Output Object:** | Name | Type | Required | Description | | ----------- | ---------- | -------- | ------------------------------------------------------------------ | | `addresses` | `string[]` | Yes | Addresses who can sign the consuming of this UTXO | | `amount` | `bigint` | Yes | Amount in nano AVAX | | `assetId` | `string` | No | Asset ID of the UTXO | | `locktime` | `bigint` | No | Timestamp in seconds after which this UTXO can be consumed | | `threshold` | `number` | No | Threshold of `addresses`' signatures required to consume this UTXO | **Returns:** | Type | Description | | -------------------------- | ----------------------- | | `PrepareBaseTxnReturnType` | Base transaction object | **Return Object:** | Property | Type | Description | | ------------ | ------------ | ----------------------------- | | `tx` | `UnsignedTx` | The unsigned transaction | |
`baseTx` | `BaseTx` | The base transaction instance | | `chainAlias` | `"P"` | The chain alias | **Example:** ```typescript import { createAvalancheWalletClient } from "@avalanche-sdk/client"; import { privateKeyToAvalancheAccount } from "@avalanche-sdk/client/accounts"; import { avalanche } from "@avalanche-sdk/client/chains"; import { avaxToNanoAvax } from "@avalanche-sdk/client/utils"; const account = privateKeyToAvalancheAccount("0x..."); const walletClient = createAvalancheWalletClient({ account, chain: avalanche, transport: { type: "http" }, }); const unsignedTx = await walletClient.pChain.prepareBaseTxn({ outputs: [ { addresses: ["P-fuji19fc97zn3mzmwr827j4d3n45refkksgms4y2yzz"], amount: avaxToNanoAvax(1), }, ], }); // Sign and send const signedTx = await walletClient.signXPTransaction({ tx: unsignedTx.tx, chainAlias: "P", }); const { txHash } = await walletClient.sendXPTransaction({ txOrTxHex: signedTx.signedTxHex, chainAlias: "P", }); console.log("Transaction hash:", txHash); ``` **Related:** - [prepareExportTxn](#prepareexporttxn) - Cross-chain exports - [prepareImportTxn](#prepareimporttxn) - Cross-chain imports --- ## prepareAddPermissionlessValidatorTxn Prepare a transaction to add a permissionless validator to the Primary Network. 
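Per the parameter table, `delegatorRewardPercentage` must fall between 2 and 100 with at most three decimal places. A hedged client-side pre-check (`isValidDelegationFee` is illustrative only; the network enforces the actual rule):

```typescript
// Validate a delegation fee percentage: 2-100 inclusive, with at most
// 3 decimal places. Hypothetical pre-flight check; the network
// enforces the real constraint.
function isValidDelegationFee(pct: number): boolean {
  if (!Number.isFinite(pct) || pct < 2 || pct > 100) return false;
  // Scale to thousandths; anything finer than 3 decimals leaves a remainder
  const scaled = pct * 1000;
  return Math.abs(scaled - Math.round(scaled)) < 1e-6;
}

console.log(isValidDelegationFee(2.5));    // true
console.log(isValidDelegationFee(1.5));    // false (below minimum of 2)
console.log(isValidDelegationFee(2.0005)); // false (4 decimal places)
```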
**Function Signature:** ```typescript function prepareAddPermissionlessValidatorTxn( params: PrepareAddPermissionlessValidatorTxnParameters ): Promise<PrepareAddPermissionlessValidatorTxnReturnType>; interface PrepareAddPermissionlessValidatorTxnParameters { nodeId: string; stakeInAvax: bigint; end: bigint; rewardAddresses: string[]; delegatorRewardAddresses: string[]; delegatorRewardPercentage: number; publicKey?: string; signature?: string; threshold?: number; locktime?: bigint; fromAddresses?: string[]; changeAddresses?: string[]; utxos?: Utxo[]; memo?: string; minIssuanceTime?: bigint; context?: Context; } interface PrepareAddPermissionlessValidatorTxnReturnType { tx: UnsignedTx; addPermissionlessValidatorTx: AddPermissionlessValidatorTx; chainAlias: "P"; } ``` **Parameters:** | Name | Type | Required | Description | | --------------------------- | ---------- | -------- | --------------------------------------------------------------------------------- | | `nodeId` | `string` | Yes | Node ID of the validator being added | | `stakeInAvax` | `bigint` | Yes | Amount of AVAX to stake (in nano AVAX) | | `end` | `bigint` | Yes | Unix time in seconds when validator will be removed | | `rewardAddresses` | `string[]` | Yes | Addresses which will receive validator rewards | | `delegatorRewardAddresses` | `string[]` | Yes | Addresses which will receive delegator fee rewards | | `delegatorRewardPercentage` | `number` | Yes | Percentage of delegator rewards as delegation fee (2-100, up to 3 decimal places) | | `publicKey` | `string` | No | BLS public key (in hex format) | | `signature` | `string` | No | BLS signature (in hex format) | | `threshold` | `number` | No | Number of signatures required to spend reward UTXO (default: 1) | | `locktime` | `bigint` | No | Unix timestamp after which reward UTXO can be spent (default: 0) | | `fromAddresses` | `string[]` | No | Addresses to send funds from | | `changeAddresses` | `string[]` | No | Addresses to receive change | | `utxos` | `Utxo[]` | No | UTXOs to use as inputs | |
`memo` | `string` | No | Transaction memo | | `minIssuanceTime` | `bigint` | No | Earliest time this transaction can be issued | | `context` | `Context` | No | Transaction context (auto-fetched if omitted) | **Returns:** | Type | Description | | ------------------------------------------------ | -------------------------------- | | `PrepareAddPermissionlessValidatorTxnReturnType` | Add validator transaction object | **Return Object:** | Property | Type | Description | | ------------------------------ | ------------------------------ | -------------------------------------- | | `tx` | `UnsignedTx` | The unsigned transaction | | `addPermissionlessValidatorTx` | `AddPermissionlessValidatorTx` | The add validator transaction instance | | `chainAlias` | `"P"` | The chain alias | **Example:** ```typescript import { avaxToNanoAvax } from "@avalanche-sdk/client/utils"; const validatorTx = await walletClient.pChain.prepareAddPermissionlessValidatorTxn({ nodeId: "NodeID-7Xhw2mDxuDS44j42TCB6U5579esbSt3Lg", stakeInAvax: avaxToNanoAvax(2000), end: BigInt(1716441600), rewardAddresses: ["P-fuji19fc97zn3mzmwr827j4d3n45refkksgms4y2yzz"], delegatorRewardAddresses: ["P-fuji19fc97zn3mzmwr827j4d3n45refkksgms4y2yzz"], delegatorRewardPercentage: 2.5, threshold: 1, }); // Sign and send const signedTx = await walletClient.signXPTransaction({ tx: validatorTx.tx, chainAlias: "P", }); const { txHash } = await walletClient.sendXPTransaction({ txOrTxHex: signedTx.signedTxHex, chainAlias: "P", }); ``` **Related:** - [prepareAddSubnetValidatorTxn](#prepareaddsubnetvalidatortxn) - Add to subnet - [prepareAddPermissionlessDelegatorTxn](#prepareaddpermissionlessdelegatortxn) - Add delegator --- ## prepareAddPermissionlessDelegatorTxn Prepare a transaction to add a permissionless delegator to a validator. 
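`end` is a Unix timestamp in seconds, passed as a `bigint`. A small sketch for computing an end time a given number of days ahead (illustrative only; minimum and maximum staking durations are enforced by the network):

```typescript
// Compute a staking end time `days` from now as Unix seconds (bigint).
// Illustrative helper; check the network's min/max staking durations.
function stakingEndTime(days: number, nowMs: number = Date.now()): bigint {
  const seconds = Math.floor(nowMs / 1000) + days * 24 * 60 * 60;
  return BigInt(seconds);
}

// e.g. delegate for two weeks:
// end: stakingEndTime(14)
```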
**Function Signature:** ```typescript function prepareAddPermissionlessDelegatorTxn( params: PrepareAddPermissionlessDelegatorTxnParameters ): Promise<PrepareAddPermissionlessDelegatorTxnReturnType>; interface PrepareAddPermissionlessDelegatorTxnParameters { nodeId: string; stakeInAvax: bigint; end: bigint; rewardAddresses: string[]; threshold?: number; locktime?: bigint; fromAddresses?: string[]; changeAddresses?: string[]; utxos?: Utxo[]; memo?: string; minIssuanceTime?: bigint; context?: Context; } interface PrepareAddPermissionlessDelegatorTxnReturnType { tx: UnsignedTx; addPermissionlessDelegatorTx: AddPermissionlessDelegatorTx; chainAlias: "P"; } ``` **Parameters:** | Name | Type | Required | Description | | ----------------- | ---------- | -------- | ---------------------------------------------------------------- | | `nodeId` | `string` | Yes | Node ID of the validator to delegate to | | `stakeInAvax` | `bigint` | Yes | Amount of AVAX to stake (in nano AVAX) | | `end` | `bigint` | Yes | Unix time in seconds when delegation stops | | `rewardAddresses` | `string[]` | Yes | Addresses which will receive rewards | | `threshold` | `number` | No | Number of signatures required to spend reward UTXO (default: 1) | | `locktime` | `bigint` | No | Unix timestamp after which reward UTXO can be spent (default: 0) | | `fromAddresses` | `string[]` | No | Addresses to send funds from | | `changeAddresses` | `string[]` | No | Addresses to receive change | | `utxos` | `Utxo[]` | No | UTXOs to use as inputs | | `memo` | `string` | No | Transaction memo | | `minIssuanceTime` | `bigint` | No | Earliest time this transaction can be issued | | `context` | `Context` | No | Transaction context (auto-fetched if omitted) | **Returns:** | Type | Description | | ------------------------------------------------ | -------------------------------- | | `PrepareAddPermissionlessDelegatorTxnReturnType` | Add delegator transaction object | **Return Object:** | Property | Type | Description | | ------------------------------ |
------------------------------ | -------------------------------------- | | `tx` | `UnsignedTx` | The unsigned transaction | | `addPermissionlessDelegatorTx` | `AddPermissionlessDelegatorTx` | The add delegator transaction instance | | `chainAlias` | `"P"` | The chain alias | **Example:** ```typescript const delegatorTx = await walletClient.pChain.prepareAddPermissionlessDelegatorTxn({ nodeId: "NodeID-7Xhw2mDxuDS44j42TCB6U5579esbSt3Lg", stakeInAvax: avaxToNanoAvax(25), end: BigInt(1716441600), rewardAddresses: ["P-fuji19fc97zn3mzmwr827j4d3n45refkksgms4y2yzz"], threshold: 1, }); ``` **Related:** - [prepareAddPermissionlessValidatorTxn](#prepareaddpermissionlessvalidatortxn) - Add validator --- ## prepareExportTxn Prepare a transaction to export AVAX from P-Chain to another chain. **Function Signature:** ```typescript function prepareExportTxn( params: PrepareExportTxnParameters ): Promise; interface PrepareExportTxnParameters { destinationChain: "X" | "C"; exportedOutputs: Output[]; fromAddresses?: string[]; changeAddresses?: string[]; utxos?: Utxo[]; memo?: string; minIssuanceTime?: bigint; context?: Context; } interface PrepareExportTxnReturnType { tx: UnsignedTx; exportTx: ExportTx; chainAlias: "P"; } ``` **Parameters:** | Name | Type | Required | Description | | ------------------ | ------------ | -------- | --------------------------------------------- | | `destinationChain` | `"X" \| "C"` | Yes | Chain alias to export funds to | | `exportedOutputs` | `Output[]` | Yes | Outputs to export | | `fromAddresses` | `string[]` | No | Addresses to send funds from | | `changeAddresses` | `string[]` | No | Addresses to receive change | | `utxos` | `Utxo[]` | No | UTXOs to use as inputs | | `memo` | `string` | No | Transaction memo | | `minIssuanceTime` | `bigint` | No | Earliest time this transaction can be issued | | `context` | `Context` | No | Transaction context (auto-fetched if omitted) | **Returns:** | Type | Description | | ---------------------------- | 
------------------------- | | `PrepareExportTxnReturnType` | Export transaction object | **Return Object:** | Property | Type | Description | | ------------ | ------------ | ------------------------------- | | `tx` | `UnsignedTx` | The unsigned transaction | | `exportTx` | `ExportTx` | The export transaction instance | | `chainAlias` | `"P"` | The chain alias | **Example:** ```typescript const exportTx = await walletClient.pChain.prepareExportTxn({ destinationChain: "C", exportedOutputs: [ { addresses: [account.getEVMAddress()], amount: avaxToNanoAvax(0.001), }, ], }); // Sign and send const signedTx = await walletClient.signXPTransaction({ tx: exportTx.tx, chainAlias: "P", }); const { txHash } = await walletClient.sendXPTransaction({ txOrTxHex: signedTx.signedTxHex, chainAlias: "P", }); ``` **Related:** - [prepareImportTxn](#prepareimporttxn) - Import to P-Chain - [Wallet send method](./wallet#send) - Simplified cross-chain transfers --- ## prepareImportTxn Prepare a transaction to import AVAX from another chain to P-Chain. 
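An import consumes the UTXOs that a prior export left in shared (atomic) memory and consolidates them into the single `importedOutput` described below. A hypothetical sketch of that consolidation — `AtomicUtxo` is an illustrative shape, not an SDK type, and the actual transaction also deducts the import fee from the total:

```typescript
// Hypothetical shape for a UTXO waiting in shared (atomic) memory.
interface AtomicUtxo {
  amount: bigint; // nano AVAX
}

// Sum every pending exported amount into one consolidated output value.
function consolidatedAmount(utxos: AtomicUtxo[]): bigint {
  return utxos.reduce((sum, u) => sum + u.amount, 0n);
}
```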
**Function Signature:** ```typescript function prepareImportTxn( params: PrepareImportTxnParameters ): Promise; interface PrepareImportTxnParameters { sourceChain: "X" | "C"; importedOutput: ImportedOutput; fromAddresses?: string[]; utxos?: Utxo[]; memo?: string; minIssuanceTime?: bigint; context?: Context; } interface ImportedOutput { addresses: string[]; locktime?: bigint; threshold?: number; } interface PrepareImportTxnReturnType { tx: UnsignedTx; importTx: ImportTx; chainAlias: "P"; } ``` **Parameters:** | Name | Type | Required | Description | | ----------------- | ---------------- | -------- | ----------------------------------------------- | | `sourceChain` | `"X" \| "C"` | Yes | Chain alias to import funds from | | `importedOutput` | `ImportedOutput` | Yes | Consolidated imported output from atomic memory | | `fromAddresses` | `string[]` | No | Addresses to send funds from | | `utxos` | `Utxo[]` | No | UTXOs to use as inputs | | `memo` | `string` | No | Transaction memo | | `minIssuanceTime` | `bigint` | No | Earliest time this transaction can be issued | | `context` | `Context` | No | Transaction context (auto-fetched if omitted) | **Imported Output Object:** | Name | Type | Required | Description | | ----------- | ---------- | -------- | ----------------------------------------------------------------------------------- | | `addresses` | `string[]` | Yes | Addresses who can sign the consuming of this UTXO | | `locktime` | `bigint` | No | Timestamp in seconds after which this UTXO can be consumed | | `threshold` | `number` | No | Number of signatures required out of total `addresses` to spend the imported output | **Returns:** | Type | Description | | ---------------------------- | ------------------------- | | `PrepareImportTxnReturnType` | Import transaction object | **Return Object:** | Property | Type | Description | | ------------ | ------------ | ------------------------------- | | `tx` | `UnsignedTx` | The unsigned transaction | | `importTx` | 
`ImportTx` | The import transaction instance | | `chainAlias` | `"P"` | The chain alias | **Example:** ```typescript const importTx = await walletClient.pChain.prepareImportTxn({ sourceChain: "C", importedOutput: { addresses: ["P-fuji19fc97zn3mzmwr827j4d3n45refkksgms4y2yzz"], threshold: 1, }, }); // Sign and send const signedTx = await walletClient.signXPTransaction({ tx: importTx.tx, chainAlias: "P", }); const { txHash } = await walletClient.sendXPTransaction({ txOrTxHex: signedTx.signedTxHex, chainAlias: "P", }); ``` **Related:** - [prepareExportTxn](#prepareexporttxn) - Export from P-Chain --- ## prepareAddSubnetValidatorTxn Prepare a transaction to add a validator to a subnet. **Function Signature:** ```typescript function prepareAddSubnetValidatorTxn( params: PrepareAddSubnetValidatorTxnParameters ): Promise; interface PrepareAddSubnetValidatorTxnParameters { subnetId: string; nodeId: string; weight: bigint; end: bigint; subnetAuth: readonly number[]; fromAddresses?: string[]; changeAddresses?: string[]; utxos?: Utxo[]; memo?: string; minIssuanceTime?: bigint; context?: Context; } interface PrepareAddSubnetValidatorTxnReturnType { tx: UnsignedTx; addSubnetValidatorTx: AddSubnetValidatorTx; subnetOwners: PChainOwner; subnetAuth: number[]; chainAlias: "P"; } ``` **Parameters:** | Name | Type | Required | Description | | ----------------- | ------------------- | -------- | -------------------------------------------------------------------------- | | `subnetId` | `string` | Yes | Subnet ID to add the validator to | | `nodeId` | `string` | Yes | Node ID of the validator being added | | `weight` | `bigint` | Yes | Weight of the validator used during consensus | | `end` | `bigint` | Yes | End timestamp in seconds after which validator will be removed | | `subnetAuth` | `readonly number[]` | Yes | Array of indices from subnet's owners array who will sign this transaction | | `fromAddresses` | `string[]` | No | Addresses to send funds from | | `changeAddresses` | 
`string[]` | No | Addresses to receive change | | `utxos` | `Utxo[]` | No | UTXOs to use as inputs | | `memo` | `string` | No | Transaction memo | | `minIssuanceTime` | `bigint` | No | Earliest time this transaction can be issued | | `context` | `Context` | No | Transaction context (auto-fetched if omitted) | **Returns:** | Type | Description | | ---------------------------------------- | --------------------------------------- | | `PrepareAddSubnetValidatorTxnReturnType` | Add subnet validator transaction object | **Return Object:** | Property | Type | Description | | ---------------------- | ---------------------- | --------------------------------------------- | | `tx` | `UnsignedTx` | The unsigned transaction | | `addSubnetValidatorTx` | `AddSubnetValidatorTx` | The add subnet validator transaction instance | | `subnetOwners` | `PChainOwner` | The subnet owners | | `subnetAuth` | `number[]` | Array of indices from subnet's owners array | | `chainAlias` | `"P"` | The chain alias | **Example:** ```typescript const addSubnetValidatorTx = await walletClient.pChain.prepareAddSubnetValidatorTxn({ subnetId: "2b175hLJhGdj3CzgXNUHXDPVY3wQo3y3VWqPjKpF5vK", nodeId: "NodeID-7Xhw2mDxuDS44j42TCB6U5579esbSt3Lg", weight: BigInt(1000000), end: BigInt(1716441600), subnetAuth: [0, 1], }); ``` **Related:** - [prepareRemoveSubnetValidatorTxn](#prepareremovesubnetvalidatortxn) - Remove subnet validator --- ## prepareRemoveSubnetValidatorTxn Prepare a transaction to remove a validator from a subnet. 
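As in `prepareAddSubnetValidatorTxn`, `subnetAuth` lists positions in the subnet owners' address array, not the addresses themselves. A hypothetical helper for deriving those indices from the owner keys you control:

```typescript
// Hypothetical helper: map the owner addresses you can sign for to their
// indices in the subnet owners array, as `subnetAuth` expects.
function subnetAuthIndices(owners: string[], signers: string[]): number[] {
  const signerSet = new Set(signers);
  return owners
    .map((addr, i) => (signerSet.has(addr) ? i : -1))
    .filter((i) => i >= 0);
}
```

Note the result is ordered by the owners array, regardless of the order the signers were listed in.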
**Function Signature:** ```typescript function prepareRemoveSubnetValidatorTxn( params: PrepareRemoveSubnetValidatorTxnParameters ): Promise; interface PrepareRemoveSubnetValidatorTxnParameters { subnetId: string; nodeId: string; subnetAuth: readonly number[]; fromAddresses?: string[]; changeAddresses?: string[]; utxos?: Utxo[]; memo?: string; minIssuanceTime?: bigint; context?: Context; } interface PrepareRemoveSubnetValidatorTxnReturnType { tx: UnsignedTx; removeSubnetValidatorTx: RemoveSubnetValidatorTx; subnetOwners: PChainOwner; subnetAuth: number[]; chainAlias: "P"; } ``` **Parameters:** | Name | Type | Required | Description | | ----------------- | ------------------- | -------- | -------------------------------------------------------------------------- | | `subnetId` | `string` | Yes | Subnet ID to remove the validator from | | `nodeId` | `string` | Yes | Node ID of the validator being removed | | `subnetAuth` | `readonly number[]` | Yes | Array of indices from subnet's owners array who will sign this transaction | | `fromAddresses` | `string[]` | No | Addresses to send funds from | | `changeAddresses` | `string[]` | No | Addresses to receive change | | `utxos` | `Utxo[]` | No | UTXOs to use as inputs | | `memo` | `string` | No | Transaction memo | | `minIssuanceTime` | `bigint` | No | Earliest time this transaction can be issued | | `context` | `Context` | No | Transaction context (auto-fetched if omitted) | **Returns:** | Type | Description | | ------------------------------------------- | ------------------------------------------ | | `PrepareRemoveSubnetValidatorTxnReturnType` | Remove subnet validator transaction object | **Return Object:** | Property | Type | Description | | ------------------------- | ------------------------- | ------------------------------------------------ | | `tx` | `UnsignedTx` | The unsigned transaction | | `removeSubnetValidatorTx` | `RemoveSubnetValidatorTx` | The remove subnet validator transaction instance | | 
`subnetOwners` | `PChainOwner` | The subnet owners | | `subnetAuth` | `number[]` | Array of indices from subnet's owners array | | `chainAlias` | `"P"` | The chain alias | **Example:** ```typescript const removeSubnetValidatorTx = await walletClient.pChain.prepareRemoveSubnetValidatorTxn({ subnetId: "2b175hLJhGdj3CzgXNUHXDPVY3wQo3y3VWqPjKpF5vK", nodeId: "NodeID-7Xhw2mDxuDS44j42TCB6U5579esbSt3Lg", subnetAuth: [0, 1], }); ``` **Related:** - [prepareAddSubnetValidatorTxn](#prepareaddsubnetvalidatortxn) - Add subnet validator --- ## prepareCreateSubnetTxn Prepare a transaction to create a new subnet. **Function Signature:** ```typescript function prepareCreateSubnetTxn( params: PrepareCreateSubnetTxnParameters ): Promise; interface PrepareCreateSubnetTxnParameters { subnetOwners: SubnetOwners; fromAddresses?: string[]; changeAddresses?: string[]; utxos?: Utxo[]; memo?: string; minIssuanceTime?: bigint; context?: Context; } interface SubnetOwners { addresses: string[]; threshold?: number; locktime?: bigint; } interface PrepareCreateSubnetTxnReturnType { tx: UnsignedTx; createSubnetTx: CreateSubnetTx; chainAlias: "P"; } ``` **Parameters:** | Name | Type | Required | Description | | ----------------- | -------------- | -------- | --------------------------------------------- | | `subnetOwners` | `SubnetOwners` | Yes | Subnet owners configuration | | `fromAddresses` | `string[]` | No | Addresses to send funds from | | `changeAddresses` | `string[]` | No | Addresses to receive change | | `utxos` | `Utxo[]` | No | UTXOs to use as inputs | | `memo` | `string` | No | Transaction memo | | `minIssuanceTime` | `bigint` | No | Earliest time this transaction can be issued | | `context` | `Context` | No | Transaction context (auto-fetched if omitted) | **Subnet Owners Object:** | Name | Type | Required | Description | | ----------- | ---------- | -------- | ---------------------------------------------------------------------------------------- | | `addresses` | `string[]` | Yes | 
List of unique addresses (must be sorted lexicographically) | | `threshold` | `number` | No | Number of unique signatures required to spend the output (must be ≤ length of addresses) | | `locktime` | `bigint` | No | Unix timestamp after which the output can be spent | **Returns:** | Type | Description | | ---------------------------------- | -------------------------------- | | `PrepareCreateSubnetTxnReturnType` | Create subnet transaction object | **Return Object:** | Property | Type | Description | | ---------------- | ---------------- | -------------------------------------- | | `tx` | `UnsignedTx` | The unsigned transaction | | `createSubnetTx` | `CreateSubnetTx` | The create subnet transaction instance | | `chainAlias` | `"P"` | The chain alias | **Example:** ```typescript const createSubnetTx = await walletClient.pChain.prepareCreateSubnetTxn({ subnetOwners: { addresses: [ "P-fuji19fc97zn3mzmwr827j4d3n45refkksgms4y2yzz", "P-fuji1y8zrxh9cvdny0e8n8n4n7h4q4h4q4h4q4h4q4h", ], threshold: 2, }, }); ``` **Related:** - [prepareCreateChainTxn](#preparecreatechaintxn) - Create chain on subnet --- ## prepareCreateChainTxn Prepare a transaction to create a new blockchain on a subnet. 
**Function Signature:** ```typescript function prepareCreateChainTxn( params: PrepareCreateChainTxnParameters ): Promise<PrepareCreateChainTxnReturnType>; interface PrepareCreateChainTxnParameters { subnetId: string; vmId: string; chainName: string; genesisData: Record<string, unknown>; subnetAuth: readonly number[]; fxIds?: readonly string[]; fromAddresses?: string[]; changeAddresses?: string[]; utxos?: Utxo[]; memo?: string; minIssuanceTime?: bigint; context?: Context; } interface PrepareCreateChainTxnReturnType { tx: UnsignedTx; createChainTx: CreateChainTx; subnetOwners: PChainOwner; subnetAuth: number[]; chainAlias: "P"; } ``` **Parameters:** | Name | Type | Required | Description | | ----------------- | ------------------------- | -------- | -------------------------------------------------------------------------- | | `subnetId` | `string` | Yes | Subnet ID to create the chain on | | `vmId` | `string` | Yes | VM ID of the chain being created | | `chainName` | `string` | Yes | Name of the chain being created | | `genesisData` | `Record<string, unknown>` | Yes | Genesis JSON data of the chain being created | | `subnetAuth` | `readonly number[]` | Yes | Array of indices from subnet's owners array who will sign this transaction | | `fxIds` | `readonly string[]` | No | Array of FX IDs to be added to the chain | | `fromAddresses` | `string[]` | No | Addresses to send funds from | | `changeAddresses` | `string[]` | No | Addresses to receive change | | `utxos` | `Utxo[]` | No | UTXOs to use as inputs | | `memo` | `string` | No | Transaction memo | | `minIssuanceTime` | `bigint` | No | Earliest time this transaction can be issued | | `context` | `Context` | No | Transaction context (auto-fetched if omitted) | **Returns:** | Type | Description | | --------------------------------- | ------------------------------- | | `PrepareCreateChainTxnReturnType` | Create chain transaction object | **Return Object:** | Property | Type | Description | | --------------- | --------------- | ------------------------------------------- | | `tx` |
`UnsignedTx` | The unsigned transaction | | `createChainTx` | `CreateChainTx` | The create chain transaction instance | | `subnetOwners` | `PChainOwner` | The subnet owners | | `subnetAuth` | `number[]` | Array of indices from subnet's owners array | | `chainAlias` | `"P"` | The chain alias | **Example:** ```typescript const createChainTx = await walletClient.pChain.prepareCreateChainTxn({ subnetId: "2b175hLJhGdj3CzgXNUHXDPVY3wQo3y3VWqPjKpF5vK", vmId: "avm", chainName: "MyCustomChain", genesisData: { // Genesis configuration }, subnetAuth: [0, 1], }); ``` **Related:** - [prepareCreateSubnetTxn](#preparecreatesubnettxn) - Create subnet --- ## prepareConvertSubnetToL1Txn Prepare a transaction to convert a subnet to an L1 (Layer 1) blockchain. **Function Signature:** ```typescript function prepareConvertSubnetToL1Txn( params: PrepareConvertSubnetToL1TxnParameters ): Promise; interface PrepareConvertSubnetToL1TxnParameters { subnetId: string; blockchainId: string; managerContractAddress: string; validators: L1Validator[]; subnetAuth: readonly number[]; fromAddresses?: string[]; changeAddresses?: string[]; utxos?: Utxo[]; memo?: string; minIssuanceTime?: bigint; context?: Context; } interface L1Validator { nodeId: string; nodePoP: { publicKey: string; proofOfPossession: string; }; weight: bigint; initialBalanceInAvax: bigint; remainingBalanceOwner: PChainOwnerJSON; deactivationOwner: PChainOwnerJSON; } interface PrepareConvertSubnetToL1TxnReturnType { tx: UnsignedTx; convertSubnetToL1Tx: ConvertSubnetToL1Tx; subnetOwners: PChainOwner; subnetAuth: number[]; chainAlias: "P"; } ``` **Parameters:** | Name | Type | Required | Description | | ------------------------ | ------------------- | -------- | -------------------------------------------------------------------------- | | `subnetId` | `string` | Yes | Subnet ID of the subnet to convert | | `blockchainId` | `string` | Yes | Blockchain ID of the L1 where validator manager contract is deployed | | `managerContractAddress` 
| `string` | Yes | Address of the validator manager contract | | `validators` | `L1Validator[]` | Yes | Initial set of L1 validators after conversion | | `subnetAuth` | `readonly number[]` | Yes | Array of indices from subnet's owners array who will sign this transaction | | `fromAddresses` | `string[]` | No | Addresses to send funds from | | `changeAddresses` | `string[]` | No | Addresses to receive change | | `utxos` | `Utxo[]` | No | UTXOs to use as inputs | | `memo` | `string` | No | Transaction memo | | `minIssuanceTime` | `bigint` | No | Earliest time this transaction can be issued | | `context` | `Context` | No | Transaction context (auto-fetched if omitted) | **L1 Validator Object:** | Name | Type | Description | | --------------------------- | ----------------- | ------------------------------------------------------------------------ | | `nodeId` | `string` | Node ID of the validator | | `nodePoP.publicKey` | `string` | Public key of the validator | | `nodePoP.proofOfPossession` | `string` | Proof of possession of the public key | | `weight` | `bigint` | Weight of the validator on the L1 used during consensus participation | | `initialBalanceInAvax` | `bigint` | Initial balance in nano AVAX required for paying the continuous fee | | `remainingBalanceOwner` | `PChainOwnerJSON` | Owner information for remaining balance if validator is removed/disabled | | `deactivationOwner` | `PChainOwnerJSON` | Owner information which can remove or disable the validator | **Returns:** | Type | Description | | --------------------------------------- | --------------------------------------- | | `PrepareConvertSubnetToL1TxnReturnType` | Convert subnet to L1 transaction object | **Return Object:** | Property | Type | Description | | --------------------- | --------------------- | --------------------------------------------- | | `tx` | `UnsignedTx` | The unsigned transaction | | `convertSubnetToL1Tx` | `ConvertSubnetToL1Tx` | The convert subnet to L1 transaction instance | |
`subnetOwners` | `PChainOwner` | The subnet owners | | `subnetAuth` | `number[]` | Array of indices from subnet's owners array | | `chainAlias` | `"P"` | The chain alias | **Example:** ```typescript const convertSubnetToL1Tx = await walletClient.pChain.prepareConvertSubnetToL1Txn({ subnetId: "2b175hLJhGdj3CzgXNUHXDPVY3wQo3y3VWqPjKpF5vK", blockchainId: "2oYMBNV4eNHyqk2fjjV5nVwDzxvbmovtDAOwPJCTc9wqg8k9t", managerContractAddress: "0x1234567890123456789012345678901234567890", validators: [ { nodeId: "NodeID-7Xhw2mDxuDS44j42TCB6U5579esbSt3Lg", nodePoP: { publicKey: "0x...", proofOfPossession: "0x...", }, weight: BigInt(1000000), initialBalanceInAvax: avaxToNanoAvax(1000), remainingBalanceOwner: { addresses: ["P-fuji19fc97zn3mzmwr827j4d3n45refkksgms4y2yzz"], threshold: 1, }, deactivationOwner: { addresses: ["P-fuji19fc97zn3mzmwr827j4d3n45refkksgms4y2yzz"], threshold: 1, }, }, ], subnetAuth: [0, 1], }); ``` --- ## prepareRegisterL1ValidatorTxn Prepare a transaction to register an L1 validator. 
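The `initialBalanceInAvax` funds the validator's continuous fee, which the network draws down over time. A rough, illustrative estimate of the balance needed for a target duration — the fee rate used below is a placeholder assumption, not a protocol constant (the real rate is dynamic):

```typescript
// Illustrative only: balance (in nano AVAX) needed to keep an L1
// validator funded for a duration, assuming a flat per-second fee rate.
// The actual continuous fee rate is dynamic and set by the network.
function estimateInitialBalance(
  feePerSecondNanoAvax: bigint,
  seconds: bigint
): bigint {
  return feePerSecondNanoAvax * seconds;
}
```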
**Function Signature:** ```typescript function prepareRegisterL1ValidatorTxn( params: PrepareRegisterL1ValidatorTxnParameters ): Promise<PrepareRegisterL1ValidatorTxnReturnType>; interface PrepareRegisterL1ValidatorTxnParameters { initialBalanceInAvax: bigint; blsSignature: string; message: string; fromAddresses?: string[]; changeAddresses?: string[]; utxos?: Utxo[]; memo?: string; minIssuanceTime?: bigint; context?: Context; } interface PrepareRegisterL1ValidatorTxnReturnType { tx: UnsignedTx; registerL1ValidatorTx: RegisterL1ValidatorTx; chainAlias: "P"; } ``` **Parameters:** | Name | Type | Required | Description | | ---------------------- | ---------- | -------- | -------------------------------------------------------------------------------------------- | | `initialBalanceInAvax` | `bigint` | Yes | Initial balance in nano AVAX required for paying the continuous fee | | `blsSignature` | `string` | Yes | BLS signature of the validator | | `message` | `string` | Yes | Signed warp message hex with `AddressedCall` payload containing `RegisterL1ValidatorMessage` | | `fromAddresses` | `string[]` | No | Addresses to send funds from | | `changeAddresses` | `string[]` | No | Addresses to receive change | | `utxos` | `Utxo[]` | No | UTXOs to use as inputs | | `memo` | `string` | No | Transaction memo | | `minIssuanceTime` | `bigint` | No | Earliest time this transaction can be issued | | `context` | `Context` | No | Transaction context (auto-fetched if omitted) | **Returns:** | Type | Description | | ----------------------------------------- | ---------------------------------------- | | `PrepareRegisterL1ValidatorTxnReturnType` | Register L1 validator transaction object | **Return Object:** | Property | Type | Description | | ----------------------- | ----------------------- | ---------------------------------------------- | | `tx` | `UnsignedTx` | The unsigned transaction | | `registerL1ValidatorTx` | `RegisterL1ValidatorTx` | The register L1 validator transaction instance | | `chainAlias` | `"P"` | The chain
alias | **Example:** ```typescript const registerL1ValidatorTx = await walletClient.pChain.prepareRegisterL1ValidatorTxn({ initialBalanceInAvax: avaxToNanoAvax(1000), blsSignature: "0x...", message: "0x...", }); ``` --- ## prepareDisableL1ValidatorTxn Prepare a transaction to disable an L1 validator. **Function Signature:** ```typescript function prepareDisableL1ValidatorTxn( params: PrepareDisableL1ValidatorTxnParameters ): Promise; interface PrepareDisableL1ValidatorTxnParameters { validationId: string; disableAuth: number[]; fromAddresses?: string[]; changeAddresses?: string[]; utxos?: Utxo[]; memo?: string; minIssuanceTime?: bigint; context?: Context; } interface PrepareDisableL1ValidatorTxnReturnType { tx: UnsignedTx; disableL1ValidatorTx: DisableL1ValidatorTx; disableOwners: PChainOwner; disableAuth: number[]; chainAlias: "P"; } ``` **Parameters:** | Name | Type | Required | Description | | ----------------- | ---------- | -------- | ---------------------------------------------------------------------------------------- | | `validationId` | `string` | Yes | Validation ID of the L1 validator | | `disableAuth` | `number[]` | Yes | Array of indices from L1 validator's disable owners array who will sign this transaction | | `fromAddresses` | `string[]` | No | Addresses to send funds from | | `changeAddresses` | `string[]` | No | Addresses to receive change | | `utxos` | `Utxo[]` | No | UTXOs to use as inputs | | `memo` | `string` | No | Transaction memo | | `minIssuanceTime` | `bigint` | No | Earliest time this transaction can be issued | | `context` | `Context` | No | Transaction context (auto-fetched if omitted) | **Returns:** | Type | Description | | ---------------------------------------- | --------------------------------------- | | `PrepareDisableL1ValidatorTxnReturnType` | Disable L1 validator transaction object | **Return Object:** | Property | Type | Description | | ---------------------- | ---------------------- | 
--------------------------------------------- | | `tx` | `UnsignedTx` | The unsigned transaction | | `disableL1ValidatorTx` | `DisableL1ValidatorTx` | The disable L1 validator transaction instance | | `disableOwners` | `PChainOwner` | The disable owners | | `disableAuth` | `number[]` | Array of indices from disable owners array | | `chainAlias` | `"P"` | The chain alias | **Example:** ```typescript const disableL1ValidatorTx = await walletClient.pChain.prepareDisableL1ValidatorTxn({ validationId: "0x...", disableAuth: [0, 1], }); ``` --- ## prepareSetL1ValidatorWeightTxn Prepare a transaction to set the weight of an L1 validator. **Function Signature:** ```typescript function prepareSetL1ValidatorWeightTxn( params: PrepareSetL1ValidatorWeightTxnParameters ): Promise; interface PrepareSetL1ValidatorWeightTxnParameters { message: string; fromAddresses?: string[]; changeAddresses?: string[]; utxos?: Utxo[]; memo?: string; minIssuanceTime?: bigint; context?: Context; } interface PrepareSetL1ValidatorWeightTxnReturnType { tx: UnsignedTx; setL1ValidatorWeightTx: SetL1ValidatorWeightTx; chainAlias: "P"; } ``` **Parameters:** | Name | Type | Required | Description | | ----------------- | ---------- | -------- | --------------------------------------------------------------------------------------------- | | `message` | `string` | Yes | Signed warp message hex with `AddressedCall` payload containing `SetL1ValidatorWeightMessage` | | `fromAddresses` | `string[]` | No | Addresses to send funds from | | `changeAddresses` | `string[]` | No | Addresses to receive change | | `utxos` | `Utxo[]` | No | UTXOs to use as inputs | | `memo` | `string` | No | Transaction memo | | `minIssuanceTime` | `bigint` | No | Earliest time this transaction can be issued | | `context` | `Context` | No | Transaction context (auto-fetched if omitted) | **Returns:** | Type | Description | | ------------------------------------------ | ------------------------------------------ | | 
`PrepareSetL1ValidatorWeightTxnReturnType` | Set L1 validator weight transaction object | **Return Object:** | Property | Type | Description | | ------------------------ | ------------------------ | ------------------------------------------------ | | `tx` | `UnsignedTx` | The unsigned transaction | | `setL1ValidatorWeightTx` | `SetL1ValidatorWeightTx` | The set L1 validator weight transaction instance | | `chainAlias` | `"P"` | The chain alias | **Example:** ```typescript const setL1ValidatorWeightTx = await walletClient.pChain.prepareSetL1ValidatorWeightTxn({ message: "0x...", }); ``` --- ## prepareIncreaseL1ValidatorBalanceTxn Prepare a transaction to increase the balance of an L1 validator. **Function Signature:** ```typescript function prepareIncreaseL1ValidatorBalanceTxn( params: PrepareIncreaseL1ValidatorBalanceTxnParameters ): Promise; interface PrepareIncreaseL1ValidatorBalanceTxnParameters { balanceInAvax: bigint; validationId: string; fromAddresses?: string[]; changeAddresses?: string[]; utxos?: Utxo[]; memo?: string; minIssuanceTime?: bigint; context?: Context; } interface PrepareIncreaseL1ValidatorBalanceTxnReturnType { tx: UnsignedTx; increaseL1ValidatorBalanceTx: IncreaseL1ValidatorBalanceTx; chainAlias: "P"; } ``` **Parameters:** | Name | Type | Required | Description | | ----------------- | ---------- | -------- | -------------------------------------------------------- | | `balanceInAvax` | `bigint` | Yes | Amount of AVAX to increase the balance by (in nano AVAX) | | `validationId` | `string` | Yes | Validation ID of the L1 validator | | `fromAddresses` | `string[]` | No | Addresses to send funds from | | `changeAddresses` | `string[]` | No | Addresses to receive change | | `utxos` | `Utxo[]` | No | UTXOs to use as inputs | | `memo` | `string` | No | Transaction memo | | `minIssuanceTime` | `bigint` | No | Earliest time this transaction can be issued | | `context` | `Context` | No | Transaction context (auto-fetched if omitted) | **Returns:** | 
Type | Description | | ------------------------------------------------ | ------------------------------------------------ | | `PrepareIncreaseL1ValidatorBalanceTxnReturnType` | Increase L1 validator balance transaction object | **Return Object:** | Property | Type | Description | | ------------------------------ | ------------------------------ | ------------------------------------------------------ | | `tx` | `UnsignedTx` | The unsigned transaction | | `increaseL1ValidatorBalanceTx` | `IncreaseL1ValidatorBalanceTx` | The increase L1 validator balance transaction instance | | `chainAlias` | `"P"` | The chain alias | **Example:** ```typescript const increaseL1ValidatorBalanceTx = await walletClient.pChain.prepareIncreaseL1ValidatorBalanceTxn({ balanceInAvax: avaxToNanoAvax(500), validationId: "0x...", }); ``` --- ## Next Steps - **[Wallet Methods](./wallet)** - General wallet operations - **[X-Chain Wallet Methods](./x-chain-wallet)** - X-Chain transaction preparation - **[C-Chain Wallet Methods](./c-chain-wallet)** - C-Chain atomic transactions - **[Account Management](../accounts)** - Account types and management # Wallet Methods (/docs/tooling/avalanche-sdk/client/methods/wallet-methods/wallet) --- title: Wallet Methods description: Complete reference for Avalanche wallet operations --- ## Overview The Avalanche Wallet Client provides methods for sending transactions, signing messages and transactions, and managing accounts across all Avalanche chains (P-Chain, X-Chain, and C-Chain). This reference covers all wallet-specific methods available through the Avalanche Wallet Client SDK. **Access:** `walletClient` ## send Send tokens from the source chain to the destination chain. Automatically handles cross-chain transfers when needed. 
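`amount` here is denominated in wei — the C-Chain convention — while the P-Chain methods earlier use nano AVAX. The relationship is fixed (1 AVAX = 10^18 wei = 10^9 nano AVAX), which is what the SDK's `avaxToWei` and `avaxToNanoAvax` helpers encode; the conversion function below is an illustrative sketch:

```typescript
// Fixed denominations across chains:
//   1 AVAX = 10^18 wei        (C-Chain)
//   1 AVAX = 10^9  nano AVAX  (P-Chain / X-Chain)
const WEI_PER_AVAX = 10n ** 18n;
const NANO_PER_AVAX = 10n ** 9n;

// Illustrative: re-denominate a nano AVAX value as wei (scale by 10^9).
function nanoAvaxToWei(nano: bigint): bigint {
  return nano * (WEI_PER_AVAX / NANO_PER_AVAX);
}
```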
**Function Signature:** ```typescript function send(params: SendParameters): Promise<SendReturnType>; interface SendParameters { account?: AvalancheAccount; amount: bigint; to: Address | XPAddress; from?: Address | XPAddress; sourceChain?: "P" | "C"; destinationChain?: "P" | "C"; token?: "AVAX"; context?: Context; } interface SendReturnType { txHashes: TransactionDetails[]; } interface TransactionDetails { txHash: string; chainAlias: "P" | "C"; } ``` **Parameters:** | Name | Type | Required | Description | | ------------------ | ---------------------- | -------- | ----------------------------------------------------- | | `account` | `AvalancheAccount` | No | Account to send from (uses client account if omitted) | | `amount` | `bigint` | Yes | Amount to send in wei | | `to` | `Address \| XPAddress` | Yes | Destination address | | `from` | `Address \| XPAddress` | No | Source address (defaults to account address) | | `sourceChain` | `"P" \| "C"` | No | Source chain (default: "C") | | `destinationChain` | `"P" \| "C"` | No | Destination chain (default: "C") | | `token` | `"AVAX"` | No | Token to send (only AVAX supported) | | `context` | `Context` | No | Transaction context (auto-fetched if omitted) | **Returns:** | Type | Description | | ---------------- | ------------------------- | | `SendReturnType` | Transaction hashes object | **Return Object:** | Property | Type | Description | | ---------- | ---------------------- | ---------------------------- | | `txHashes` | `TransactionDetails[]` | Array of transaction details | **Transaction Details Object:** | Property | Type | Description | | ------------ | ------------ | ---------------------------------- | | `txHash` | `string` | The hash of the transaction | | `chainAlias` | `"P" \| "C"` | The chain alias of the transaction | **Example:** ```typescript import { createAvalancheWalletClient } from "@avalanche-sdk/client"; import { privateKeyToAvalancheAccount } from "@avalanche-sdk/client/accounts"; import { avalanche } from
"@avalanche-sdk/client/chains"; import { avaxToWei, avaxToNanoAvax } from "@avalanche-sdk/client/utils"; const account = privateKeyToAvalancheAccount("0x..."); const walletClient = createAvalancheWalletClient({ account, chain: avalanche, transport: { type: "http" }, }); // Send AVAX on C-Chain const result = await walletClient.send({ to: "0x742d35Cc6634C0532925a3b8D4C9db96C4b4d8b6", amount: avaxToWei(0.001), destinationChain: "C", }); console.log("Transaction hash:", result.txHashes[0].txHash); // Send AVAX from C-Chain to P-Chain const crossChainResult = await walletClient.send({ to: "P-avax1example...", amount: avaxToWei(1), sourceChain: "C", destinationChain: "P", }); console.log("Transfer transactions:", crossChainResult.txHashes); ``` **Related:** - [waitForTxn](#waitfortxn) - Wait for transaction confirmation - [signXPMessage](#signxpmessage) - Sign messages --- ## getAccountPubKey Get the public key associated with the wallet account in both EVM and XP formats. **Function Signature:** ```typescript function getAccountPubKey(): Promise<GetAccountPubKeyReturnType>; interface GetAccountPubKeyReturnType { evm: string; xp: string; } ``` **Returns:** | Type | Description | | ---------------------------- | ------------------ | | `GetAccountPubKeyReturnType` | Public keys object | **Return Object:** | Property | Type | Description | | -------- | -------- | ------------------------ | | `evm` | `string` | Public key in EVM format | | `xp` | `string` | Public key in XP format | **Example:** ```typescript const pubKeys = await walletClient.getAccountPubKey(); console.log("EVM public key:", pubKeys.evm); console.log("XP public key:", pubKeys.xp); ``` **Related:** - [API Reference](https://build.avax.network/docs/rpcs/x-chain#avmgetaccountpubkey) - [Account Management](../accounts) - Account types and management --- ## waitForTxn Wait for a transaction to be confirmed on the network. 
**Function Signature:** ```typescript function waitForTxn(params: WaitForTxnParameters): Promise<void>; interface WaitForTxnParameters { txHash: string; chainAlias: "X" | "P" | "C"; sleepTime?: number; maxRetries?: number; } ``` **Parameters:** | Name | Type | Required | Description | | ------------ | ------------------- | -------- | ------------------------------------------------------------ | | `txHash` | `string` | Yes | Transaction hash | | `chainAlias` | `"X" \| "P" \| "C"` | Yes | Chain where transaction was submitted | | `sleepTime` | `number` | No | Time to sleep between retries in milliseconds (default: 300) | | `maxRetries` | `number` | No | Maximum number of retries (default: 10) | **Returns:** | Type | Description | | --------------- | ----------------------------------------------------------------------------------- | | `Promise<void>` | Promise that resolves when the transaction is confirmed or rejects if it fails | **Example:** ```typescript const txHash = "0x..."; try { await walletClient.waitForTxn({ txHash, chainAlias: "C", sleepTime: 500, // Wait 500ms between checks maxRetries: 20, // Check up to 20 times }); console.log("Transaction confirmed!"); } catch (error) { console.error("Transaction failed:", error); } ``` **Related:** - [send](#send) - Send transactions - [Client tx status methods](../public-methods) - Check transaction status manually --- ## sendXPTransaction Send a signed XP transaction to the network (X-Chain, P-Chain, or C-Chain). 
**Function Signature:** ```typescript function sendXPTransaction( params: SendXPTransactionParameters ): Promise<SendXPTransactionReturnType>; interface SendXPTransactionParameters { account?: AvalancheAccount | Address; tx: string | UnsignedTx; chainAlias: "X" | "P" | "C"; externalIndices?: number[]; internalIndices?: number[]; utxoIds?: string[]; feeTolerance?: number; subnetAuth?: number[]; subnetOwners?: PChainOwner; disableOwners?: PChainOwner; disableAuth?: number[]; } interface SendXPTransactionReturnType { txHash: string; chainAlias: "X" | "P" | "C"; } ``` **Parameters:** | Name | Type | Required | Description | | ----------------- | ----------------------------- | -------- | ----------------------------------------------------- | | `account` | `AvalancheAccount \| Address` | No | Account to use for the transaction | | `tx` | `string \| UnsignedTx` | Yes | Transaction to send (hex string or UnsignedTx object) | | `chainAlias` | `"X" \| "P" \| "C"` | Yes | Target chain | | `externalIndices` | `number[]` | No | External indices to use for the transaction | | `internalIndices` | `number[]` | No | Internal indices to use for the transaction | | `utxoIds` | `string[]` | No | UTXO IDs to use for the transaction | | `feeTolerance` | `number` | No | Fee tolerance to use for the transaction | | `subnetAuth` | `number[]` | No | Subnet auth to use for the transaction | | `subnetOwners` | `PChainOwner` | No | Subnet owners to use for the transaction | | `disableOwners` | `PChainOwner` | No | Disable owners to use for the transaction | | `disableAuth` | `number[]` | No | Disable auth to use for the transaction | **Returns:** | Type | Description | | ----------------------------- | ----------------------- | | `SendXPTransactionReturnType` | Transaction hash object | **Return Object:** | Property | Type | Description | | ------------ | ------------------- | --------------------------- | | `txHash` | `string` | The hash of the transaction | | `chainAlias` | `"X" \| "P" \| "C"` | The chain alias | 
**Example:** ```typescript // This is typically used with prepare methods from chain-specific wallets const unsignedTx = await walletClient.pChain.prepareBaseTxn({ outputs: [ { addresses: ["P-fuji19fc97zn3mzmwr827j4d3n45refkksgms4y2yzz"], amount: avaxToNanoAvax(1), }, ], }); const signedTx = await walletClient.signXPTransaction({ tx: unsignedTx.tx, chainAlias: "P", }); const { txHash } = await walletClient.sendXPTransaction({ tx: signedTx.signedTxHex, chainAlias: "P", }); console.log("Transaction hash:", txHash); ``` **Related:** - [signXPTransaction](#signxptransaction) - Sign transactions - [P-Chain Wallet Methods](./p-chain-wallet) - P-Chain transaction preparation - [X-Chain Wallet Methods](./x-chain-wallet) - X-Chain transaction preparation --- ## signXPTransaction Sign an XP transaction (X-Chain, P-Chain, or C-Chain). **Function Signature:** ```typescript function signXPTransaction( params: SignXPTransactionParameters ): Promise<SignXPTransactionReturnType>; interface SignXPTransactionParameters { account?: AvalancheAccount | Address; tx?: string | UnsignedTx; signedTxHex?: string; chainAlias: "X" | "P" | "C"; utxoIds?: string[]; subnetAuth?: number[]; subnetOwners?: PChainOwner; disableOwners?: PChainOwner; disableAuth?: number[]; context?: Context; } interface Signatures { signature: string; sigIndices: number[]; } interface SignXPTransactionReturnType { signedTxHex: string; signatures: Signatures[]; chainAlias: "X" | "P" | "C"; subnetAuth?: number[]; subnetOwners?: PChainOwner; disableOwners?: PChainOwner; disableAuth?: number[]; } ``` **Parameters:** | Name | Type | Required | Description | | --------------- | ----------------------------- | -------- | ---------------------------------------------------------------------------- | | `account` | `AvalancheAccount \| Address` | No | Account to use for the transaction | | `tx` | `string \| UnsignedTx` | No | Unsigned transaction (either `tx` or `signedTxHex` must be provided) | | `signedTxHex` | `string` | No | Pre-signed transaction 
bytes (either `tx` or `signedTxHex` must be provided) | | `chainAlias` | `"X" \| "P" \| "C"` | Yes | Target chain | | `utxoIds` | `string[]` | No | UTXO IDs to use for the transaction | | `subnetAuth` | `number[]` | No | Subnet auth to use for the transaction | | `subnetOwners` | `PChainOwner` | No | Subnet owners to use for the transaction | | `disableOwners` | `PChainOwner` | No | Disable owners to use for the transaction | | `disableAuth` | `number[]` | No | Disable auth to use for the transaction | | `context` | `Context` | No | Transaction context (auto-fetched if omitted) | **Returns:** | Type | Description | | ----------------------------- | ------------------------- | | `SignXPTransactionReturnType` | Signed transaction object | **Return Object:** | Property | Type | Description | | --------------- | ------------------- | --------------------------------------- | | `signedTxHex` | `string` | The signed transaction in hex format | | `signatures` | `Signatures[]` | Array of signatures for the transaction | | `chainAlias` | `"X" \| "P" \| "C"` | The chain alias | | `subnetAuth` | `number[]?` | Subnet auth used for the transaction | | `subnetOwners` | `PChainOwner?` | Subnet owners used for the transaction | | `disableOwners` | `PChainOwner?` | Disable owners used for the transaction | | `disableAuth` | `number[]?` | Disable auth used for the transaction | **Signatures Object:** | Property | Type | Description | | ------------ | ---------- | ------------------------------------------------------------------------- | | `signature` | `string` | The signature of the transaction with the current account | | `sigIndices` | `number[]` | The indices of the signatures. 
Contains [inputIndex, signatureIndex] pair | **Example:** ```typescript const unsignedTx = await walletClient.pChain.prepareBaseTxn({ outputs: [ { addresses: ["P-fuji19fc97zn3mzmwr827j4d3n45refkksgms4y2yzz"], amount: avaxToNanoAvax(0.1), }, ], }); const signedTx = await walletClient.signXPTransaction({ tx: unsignedTx.tx, chainAlias: "P", }); // Now send the signed transaction const { txHash } = await walletClient.sendXPTransaction({ tx: signedTx.signedTxHex, chainAlias: "P", }); console.log("Transaction hash:", txHash); ``` **Related:** - [sendXPTransaction](#sendxptransaction) - Send signed transaction - [signXPMessage](#signxpmessage) - Sign messages --- ## signXPMessage Sign a message with an XP account (P-Chain or X-Chain addresses). **Function Signature:** ```typescript function signXPMessage( params: SignXPMessageParameters ): Promise<SignXPMessageReturnType>; interface SignXPMessageParameters { account?: AvalancheAccount | Address; message: string; accountIndex?: number; } interface SignXPMessageReturnType { signature: string; } ``` **Parameters:** | Name | Type | Required | Description | | -------------- | ----------------------------- | -------- | --------------------------------------------------------------------------------- | | `account` | `AvalancheAccount \| Address` | No | Account to use for the message | | `message` | `string` | Yes | Message to sign | | `accountIndex` | `number` | No | Account index to use when signing via a custom transport (e.g., Core extension) | **Returns:** | Type | Description | | ------------------------- | ------------------------ | | `SignXPMessageReturnType` | Message signature object | **Return Object:** | Property | Type | Description | | ----------- | -------- | ------------------------------------ | | `signature` | `string` | Hex-encoded signature of the message | **Example:** ```typescript const { signature } = await walletClient.signXPMessage({ message: "Hello Avalanche", }); console.log("Signature:", signature); ``` **Related:** - [API 
Reference](https://build.avax.network/docs/rpcs/x-chain#avmsignmessage) - [signXPTransaction](#signxptransaction) - Sign transactions --- ## EVM Transaction Methods The Avalanche Wallet Client extends viem's Wallet Client, providing access to all standard Ethereum transaction methods. ### Standard viem Methods ```typescript // Send transaction const hash = await walletClient.sendTransaction({ to: "0x742d35Cc6634C0532925a3b8D4C9db96C4b4d8b6", value: parseEther("0.001"), }); // Sign message const signature = await walletClient.signMessage({ message: "Hello World", }); // Sign typed data (EIP-712) const typedSignature = await walletClient.signTypedData({ domain: { ... }, types: { ... }, primaryType: "Message", message: { ... }, }); ``` **For complete EVM wallet method reference, see:** [Viem Documentation](https://viem.sh/docs) --- ## Chain-Specific Wallet Operations ### P-Chain Wallet Operations Access through `walletClient.pChain`: ```typescript // Prepare base transaction const baseTx = await walletClient.pChain.prepareBaseTxn({ outputs: [ { addresses: ["P-fuji19fc97zn3mzmwr827j4d3n45refkksgms4y2yzz"], amount: avaxToNanoAvax(1), }, ], }); // Prepare export transaction const exportTx = await walletClient.pChain.prepareExportTxn({ destinationChain: "C", exportedOutputs: [ { addresses: ["0x742d35Cc6634C0532925a3b8D4C9db96C4b4d8b6"], amount: avaxToNanoAvax(1), }, ], }); // Prepare import transaction const importTx = await walletClient.pChain.prepareImportTxn({ sourceChain: "C", importedOutput: { addresses: ["P-fuji19fc97zn3mzmwr827j4d3n45refkksgms4y2yzz"], }, }); ``` See [P-Chain Wallet Methods](./p-chain-wallet) for complete reference. 
### X-Chain Wallet Operations Access through `walletClient.xChain`: ```typescript // Prepare base transaction const baseTx = await walletClient.xChain.prepareBaseTxn({ outputs: [ { addresses: ["X-fuji19fc97zn3mzmwr827j4d3n45refkksgms4y2yzz"], amount: avaxToNanoAvax(1), }, ], }); ``` See [X-Chain Wallet Methods](./x-chain-wallet) for complete reference. ### C-Chain Wallet Operations Access through `walletClient.cChain`: ```typescript // Prepare export transaction const exportTx = await walletClient.cChain.prepareExportTxn({ destinationChain: "P", fromAddress: account.getEVMAddress(), exportedOutput: { addresses: [account.getXPAddress("P")], amount: avaxToNanoAvax(1), }, }); // Prepare import transaction const importTx = await walletClient.cChain.prepareImportTxn({ sourceChain: "P", toAddress: account.getEVMAddress(), }); ``` See [C-Chain Wallet Methods](./c-chain-wallet) for complete reference. --- ## Next Steps - **[P-Chain Wallet Methods](./p-chain-wallet)** - P-Chain transaction preparation - **[X-Chain Wallet Methods](./x-chain-wallet)** - X-Chain transaction preparation - **[C-Chain Wallet Methods](./c-chain-wallet)** - C-Chain atomic transactions - **[Account Management](../accounts)** - Account types and creation - **[Viem Documentation](https://viem.sh/docs)** - Complete EVM wallet reference # X-Chain Wallet Methods (/docs/tooling/avalanche-sdk/client/methods/wallet-methods/x-chain-wallet) --- title: X-Chain Wallet Methods description: Complete reference for X-Chain transaction preparation methods --- ## Overview The X-Chain Wallet Methods provide transaction preparation capabilities for the Exchange Chain. These methods allow you to create unsigned transactions for various operations including base transfers and cross-chain transfers. **Access:** `walletClient.xChain` ## prepareBaseTxn Prepare a base X-Chain transaction for transferring AVAX or other assets. 
**Function Signature:** ```typescript function prepareBaseTxn( params: PrepareBaseTxnParameters ): Promise<PrepareBaseTxnReturnType>; interface PrepareBaseTxnParameters { outputs?: Output[]; fromAddresses?: string[]; changeAddresses?: string[]; utxos?: Utxo[]; memo?: string; minIssuanceTime?: bigint; context?: Context; } interface Output { addresses: string[]; amount: bigint; assetId?: string; locktime?: bigint; threshold?: number; } interface PrepareBaseTxnReturnType { tx: UnsignedTx; baseTx: BaseTx; chainAlias: "X"; } ``` **Parameters:** | Name | Type | Required | Description | | ----------------- | ---------- | -------- | --------------------------------------------- | | `outputs` | `Output[]` | No | Array of outputs to send funds to | | `fromAddresses` | `string[]` | No | Addresses to send funds from | | `changeAddresses` | `string[]` | No | Addresses to receive change | | `utxos` | `Utxo[]` | No | UTXOs to use as inputs | | `memo` | `string` | No | Transaction memo | | `minIssuanceTime` | `bigint` | No | Earliest time this transaction can be issued | | `context` | `Context` | No | Transaction context (auto-fetched if omitted) | **Output Object:** | Name | Type | Required | Description | | ----------- | ---------- | -------- | ------------------------------------------------------------------ | | `addresses` | `string[]` | Yes | Addresses that can sign to consume this UTXO | | `amount` | `bigint` | Yes | Amount in nano AVAX | | `assetId` | `string` | No | Asset ID of the UTXO | | `locktime` | `bigint` | No | Timestamp in seconds after which this UTXO can be consumed | | `threshold` | `number` | No | Threshold of `addresses`' signatures required to consume this UTXO | **Returns:** | Type | Description | | -------------------------- | ----------------------- | | `PrepareBaseTxnReturnType` | Base transaction object | **Return Object:** | Property | Type | Description | | ------------ | ------------ | ----------------------------- | | `tx` | `UnsignedTx` | The unsigned transaction | | 
`baseTx` | `BaseTx` | The base transaction instance | | `chainAlias` | `"X"` | The chain alias | **Example:** ```typescript import { createAvalancheWalletClient } from "@avalanche-sdk/client"; import { privateKeyToAvalancheAccount } from "@avalanche-sdk/client/accounts"; import { avalanche } from "@avalanche-sdk/client/chains"; import { avaxToNanoAvax } from "@avalanche-sdk/client/utils"; const account = privateKeyToAvalancheAccount("0x..."); const walletClient = createAvalancheWalletClient({ account, chain: avalanche, transport: { type: "http" }, }); const unsignedTx = await walletClient.xChain.prepareBaseTxn({ outputs: [ { addresses: ["X-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5"], amount: avaxToNanoAvax(1), }, ], }); // Sign and send const signedTx = await walletClient.signXPTransaction({ tx: unsignedTx.tx, chainAlias: "X", }); const { txHash } = await walletClient.sendXPTransaction({ tx: signedTx.signedTxHex, chainAlias: "X", }); console.log("Transaction hash:", txHash); ``` **Related:** - [prepareExportTxn](#prepareexporttxn) - Cross-chain exports - [prepareImportTxn](#prepareimporttxn) - Cross-chain imports --- ## prepareExportTxn Prepare a transaction to export AVAX or other assets from X-Chain to another chain. 
**Function Signature:** ```typescript function prepareExportTxn( params: PrepareExportTxnParameters ): Promise<PrepareExportTxnReturnType>; interface PrepareExportTxnParameters { destinationChain: "P" | "C"; exportedOutputs: Output[]; fromAddresses?: string[]; changeAddresses?: string[]; utxos?: Utxo[]; memo?: string; minIssuanceTime?: bigint; context?: Context; } interface PrepareExportTxnReturnType { tx: UnsignedTx; exportTx: ExportTx; chainAlias: "X"; } ``` **Parameters:** | Name | Type | Required | Description | | ------------------ | ------------ | -------- | --------------------------------------------- | | `destinationChain` | `"P" \| "C"` | Yes | Chain alias to export funds to | | `exportedOutputs` | `Output[]` | Yes | Outputs to export | | `fromAddresses` | `string[]` | No | Addresses to send funds from | | `changeAddresses` | `string[]` | No | Addresses to receive change | | `utxos` | `Utxo[]` | No | UTXOs to use as inputs | | `memo` | `string` | No | Transaction memo | | `minIssuanceTime` | `bigint` | No | Earliest time this transaction can be issued | | `context` | `Context` | No | Transaction context (auto-fetched if omitted) | **Returns:** | Type | Description | | ---------------------------- | ------------------------- | | `PrepareExportTxnReturnType` | Export transaction object | **Return Object:** | Property | Type | Description | | ------------ | ------------ | ------------------------------- | | `tx` | `UnsignedTx` | The unsigned transaction | | `exportTx` | `ExportTx` | The export transaction instance | | `chainAlias` | `"X"` | The chain alias | **Example:** ```typescript const exportTx = await walletClient.xChain.prepareExportTxn({ destinationChain: "C", exportedOutputs: [ { addresses: ["0x742d35Cc6634C0532925a3b8D4C9db96C4b4d8b6"], amount: avaxToNanoAvax(1), }, ], }); // Sign and send const signedTx = await walletClient.signXPTransaction({ tx: exportTx.tx, chainAlias: "X", }); const { txHash } = await walletClient.sendXPTransaction({ tx: signedTx.signedTxHex, chainAlias: 
"X", }); ``` **Related:** - [prepareImportTxn](#prepareimporttxn) - Import to X-Chain - [Wallet send method](./wallet#send) - Simplified cross-chain transfers --- ## prepareImportTxn Prepare a transaction to import AVAX or other assets from another chain to X-Chain. **Function Signature:** ```typescript function prepareImportTxn( params: PrepareImportTxnParameters ): Promise<PrepareImportTxnReturnType>; interface PrepareImportTxnParameters { sourceChain: "P" | "C"; importedOutput: ImportedOutput; fromAddresses?: string[]; utxos?: Utxo[]; memo?: string; minIssuanceTime?: bigint; context?: Context; } interface ImportedOutput { addresses: string[]; locktime?: bigint; threshold?: number; } interface PrepareImportTxnReturnType { tx: UnsignedTx; importTx: ImportTx; chainAlias: "X"; } ``` **Parameters:** | Name | Type | Required | Description | | ----------------- | ---------------- | -------- | ----------------------------------------------- | | `sourceChain` | `"P" \| "C"` | Yes | Chain alias to import funds from | | `importedOutput` | `ImportedOutput` | Yes | Consolidated imported output from atomic memory | | `fromAddresses` | `string[]` | No | Addresses to send funds from | | `utxos` | `Utxo[]` | No | UTXOs to use as inputs | | `memo` | `string` | No | Transaction memo | | `minIssuanceTime` | `bigint` | No | Earliest time this transaction can be issued | | `context` | `Context` | No | Transaction context (auto-fetched if omitted) | **Imported Output Object:** | Name | Type | Required | Description | | ----------- | ---------- | -------- | ----------------------------------------------------------------------------------- | | `addresses` | `string[]` | Yes | Addresses that can sign to consume this UTXO | | `locktime` | `bigint` | No | Timestamp in seconds after which this UTXO can be consumed | | `threshold` | `number` | No | Number of signatures required out of total `addresses` to spend the imported output | **Returns:** | Type | Description | | ---------------------------- | 
------------------------- | | `PrepareImportTxnReturnType` | Import transaction object | **Return Object:** | Property | Type | Description | | ------------ | ------------ | ------------------------------- | | `tx` | `UnsignedTx` | The unsigned transaction | | `importTx` | `ImportTx` | The import transaction instance | | `chainAlias` | `"X"` | The chain alias | **Example:** ```typescript const importTx = await walletClient.xChain.prepareImportTxn({ sourceChain: "C", importedOutput: { addresses: ["X-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5"], threshold: 1, }, }); // Sign and send const signedTx = await walletClient.signXPTransaction({ tx: importTx.tx, chainAlias: "X", }); const { txHash } = await walletClient.sendXPTransaction({ tx: signedTx.signedTxHex, chainAlias: "X", }); ``` **Related:** - [prepareExportTxn](#prepareexporttxn) - Export from X-Chain --- ## Next Steps - **[Wallet Methods](./wallet)** - General wallet operations - **[P-Chain Wallet Methods](./p-chain-wallet)** - P-Chain transaction preparation - **[C-Chain Wallet Methods](./c-chain-wallet)** - C-Chain atomic transactions - **[Account Management](../accounts)** - Account types and management # Advanced Topics Certificate (/academy/avalanche-l1/access-restriction/certificate-advanced) # Fundamentals Certificate (/academy/avalanche-l1/access-restriction/certificate-fundamentals) # Welcome to the Course (/academy/avalanche-l1/access-restriction) # Course Completion Certificate (/academy/avalanche-l1/avalanche-fundamentals/get-certificate) # Welcome to the Course (/academy/avalanche-l1/avalanche-fundamentals) # Course Completion Certificate (/academy/avalanche-l1/erc20-bridge/certificate) # Welcome to the Course (/academy/avalanche-l1/erc20-bridge) # Course Completion Certificate (/academy/avalanche-l1/customizing-evm/certificate) # Welcome to the Course (/academy/avalanche-l1/customizing-evm) # Course Completion Certificate (/academy/avalanche-l1/interchain-messaging/certificate) # Welcome to the 
course (/academy/avalanche-l1/interchain-messaging) # Course Completion Certificate (/academy/avalanche-l1/native-token-bridge/certificate) # Welcome to the Course (/academy/avalanche-l1/native-token-bridge) # Course Completion Certificate (/academy/avalanche-l1/permissionless-l1s/certificate) # Welcome to the Course (/academy/avalanche-l1/permissionless-l1s) # Course Completion Certificate (/academy/avalanche-l1/l1-native-tokenomics/certificate) # Dapp's and L1's (/academy/avalanche-l1/l1-native-tokenomics/dappVsL1) # Welcome to the Course (/academy/avalanche-l1/l1-native-tokenomics) # Token Ownership (/academy/avalanche-l1/l1-native-tokenomics/token-ownership) # Course Completion Certificate (/academy/avalanche-l1/permissioned-l1s/certificate) # Welcome to the Course (/academy/avalanche-l1/permissioned-l1s) # Course Completion Certificate (/academy/blockchain/blockchain-fundamentals/get-certificate) # Welcome to the Course (/academy/blockchain/blockchain-fundamentals) # Understanding ERC-721 Tokens (/academy/blockchain/nft-deployment/01-erc721-standard) # Prepare NFT Files (/academy/blockchain/nft-deployment/02-prepare-nft-files) # Create Your NFT Smart Contract (/academy/blockchain/nft-deployment/03-create-smart-contract) # Deploy and Mint Your NFT (/academy/blockchain/nft-deployment/04-deploy-and-mint) # Understanding Token URIs (/academy/blockchain/nft-deployment/05-token-uris) # Course Completion Certificate (/academy/blockchain/nft-deployment/certificate) # Welcome to NFT Deployment (/academy/blockchain/nft-deployment) # Course Completion Certificate (/academy/blockchain/solidity-foundry/certificate) # Welcome to the course (/academy/blockchain/solidity-foundry) # Course Completion Certificate (/academy/blockchain/encrypted-erc/certificate) # Welcome to the course (/academy/blockchain/encrypted-erc) # Course Completion Certificate (/academy/blockchain/x402-payment-infrastructure/get-certificate) # Welcome to the Course 
(/academy/blockchain/x402-payment-infrastructure) # Course Completion Certificate (/academy/entrepreneur/foundations-web3-venture/certificate) # Entrepreneur Academy (/academy/entrepreneur/foundations-web3-venture) # Course Completion Certificate (/academy/entrepreneur/fundraising-finance/certificate) # Welcome to the Course (/academy/entrepreneur/fundraising-finance) # Course Completion Certificate (/academy/entrepreneur/go-to-market/certificate) # Welcome to Go-to-Market Strategies (/academy/entrepreneur/go-to-market) # Course Completion Certificate (/academy/entrepreneur/web3-community-architect/certificate) # Welcome to Web3 Community Architecture (/academy/entrepreneur/web3-community-architect) # Introduction to Precompiles (/academy/avalanche-l1/access-restriction/01-introduction/01-introduction-to-precompiles) # Allowlist Interface (/academy/avalanche-l1/access-restriction/01-introduction/02-allowlist-breakdown-admin-manager-enabled) # Access Control Use-cases (/academy/avalanche-l1/access-restriction/01-introduction/03-access-control-use-cases) # Permissioning Precompiles (/academy/avalanche-l1/access-restriction/02-genesis-activation/01-permissioning-precompiles) # Create Your L1 (/academy/avalanche-l1/access-restriction/02-genesis-activation/02-create-l1) # Test Transaction Allowlist (/academy/avalanche-l1/access-restriction/02-genesis-activation/03-test-transaction-deployer-allowlist) # Test Contract Deployer Allowlist (/academy/avalanche-l1/access-restriction/02-genesis-activation/04-test-contract-deployer-allowlist) # Direct Precompile Calls (/academy/avalanche-l1/access-restriction/03-precompile-flow/01-direct-precompile-calls) # Automatic Enforcement (/academy/avalanche-l1/access-restriction/03-precompile-flow/02-automatic-enforcement) # Remove Users Admin Wallet (/academy/avalanche-l1/access-restriction/04-user-error/01-remove-users-admin-wallet-from-precompiles) # Test for Error 
(/academy/avalanche-l1/access-restriction/04-user-error/02-test-for-error) # Introduction (/academy/avalanche-l1/access-restriction/05-network-upgrade-de-activation/01-introduction) # Set Up Docker Validator (/academy/avalanche-l1/access-restriction/05-network-upgrade-de-activation/02-setup-docker-validator) # Upgrade Rules (/academy/avalanche-l1/access-restriction/05-network-upgrade-de-activation/03-upgrade-rules) # Reading Validator Config (/academy/avalanche-l1/access-restriction/05-network-upgrade-de-activation/04-reading-validator-config) # Modify upgrade.json (/academy/avalanche-l1/access-restriction/05-network-upgrade-de-activation/05-modify-upgrade-json) # Restarting Node (/academy/avalanche-l1/access-restriction/05-network-upgrade-de-activation/06-restarting-node) # Testing in console precompiles no longer activated (/academy/avalanche-l1/access-restriction/05-network-upgrade-de-activation/07-testing-deactivation) # Introduction (/academy/avalanche-l1/access-restriction/06-network-upgrade-activation/01-introduction) # Creating or modifying upgrade.json (/academy/avalanche-l1/access-restriction/06-network-upgrade-activation/02-creating-or-modifying-upgrade-json) # Restarting Node (/academy/avalanche-l1/access-restriction/06-network-upgrade-activation/03-restarting-node) # Testing precompile activation (/academy/avalanche-l1/access-restriction/06-network-upgrade-activation/04-testing-precompile-activation) # Multi-Chain Architecture (/academy/avalanche-l1/avalanche-fundamentals/03-multi-chain-architecture-intro/01-multi-chain-architecture) # The Primary Network (/academy/avalanche-l1/avalanche-fundamentals/03-multi-chain-architecture-intro/02-primary-network) # Avalanche L1s (/academy/avalanche-l1/avalanche-fundamentals/03-multi-chain-architecture-intro/03-L1) # Features & Benefits of Avalanche L1s (/academy/avalanche-l1/avalanche-fundamentals/03-multi-chain-architecture-intro/04-benefits) # Avalanche L1s vs Layer 2 
(/academy/avalanche-l1/avalanche-fundamentals/03-multi-chain-architecture-intro/05-custom-blockchains-vs-layer-2) # Set Up Core Wallet (/academy/avalanche-l1/avalanche-fundamentals/03-multi-chain-architecture-intro/06-setup-core) # Use Dexalot L1 (/academy/avalanche-l1/avalanche-fundamentals/03-multi-chain-architecture-intro/07-use-dexalot) # Avalanche Consensus (/academy/avalanche-l1/avalanche-fundamentals/02-avalanche-consensus-intro/01-avalanche-consensus-intro) # Consensus Mechanisms (/academy/avalanche-l1/avalanche-fundamentals/02-avalanche-consensus-intro/02-consensus-mechanisms) # Snowman Consensus (/academy/avalanche-l1/avalanche-fundamentals/02-avalanche-consensus-intro/03-snowman-consensus) # Throughput vs. Time to Finality (/academy/avalanche-l1/avalanche-fundamentals/02-avalanche-consensus-intro/04-tps-vs-ttf) # Creating an L1 (/academy/avalanche-l1/avalanche-fundamentals/04-creating-an-l1/01-creating-an-l1) # Create Builder Account (/academy/avalanche-l1/avalanche-fundamentals/04-creating-an-l1/01a-create-builder-account) # Install Core Wallet (/academy/avalanche-l1/avalanche-fundamentals/04-creating-an-l1/02-connect-core) # Claim Testnet Tokens (/academy/avalanche-l1/avalanche-fundamentals/04-creating-an-l1/02a-claim-testnet-tokens) # Network Architecture (/academy/avalanche-l1/avalanche-fundamentals/04-creating-an-l1/03-network-architecture) # Create a Blockchain (/academy/avalanche-l1/avalanche-fundamentals/04-creating-an-l1/05-create-blockchain) # Set up Validator Nodes (/academy/avalanche-l1/avalanche-fundamentals/04-creating-an-l1/06-run-a-node) # Convert a Subnet to an L1 (/academy/avalanche-l1/avalanche-fundamentals/04-creating-an-l1/07-convert-subnet-l1) # Test your L1 (/academy/avalanche-l1/avalanche-fundamentals/04-creating-an-l1/08-test-l1) # Remove Node (/academy/avalanche-l1/avalanche-fundamentals/04-creating-an-l1/09-remove-node) # Introduction (/academy/avalanche-l1/avalanche-fundamentals/05-interoperability/01-introduction) # Source, 
Message and Destination (/academy/avalanche-l1/avalanche-fundamentals/05-interoperability/02-source-message-destination) # ICM, ICM Contracts & ICTT (/academy/avalanche-l1/avalanche-fundamentals/05-interoperability/03-icm-icmContracts-and-ictt) # Signature Schemes (/academy/avalanche-l1/avalanche-fundamentals/05-interoperability/04-signature-schemes) # Use a Signature Scheme (/academy/avalanche-l1/avalanche-fundamentals/05-interoperability/05-signature-demo) # Multi-Signature Schemes (/academy/avalanche-l1/avalanche-fundamentals/05-interoperability/06-multi-signatures) # Use Multi-Signature Schemes (/academy/avalanche-l1/avalanche-fundamentals/05-interoperability/07-multi-signature-demo) # BLS Signature Aggregation (/academy/avalanche-l1/avalanche-fundamentals/05-interoperability/08-signature-aggregation) # Use Cases (/academy/avalanche-l1/avalanche-fundamentals/05-interoperability/09-use-cases) # Token Bridging (/academy/avalanche-l1/erc20-bridge/01-token-bridging/01-token-bridging) # Bridge Architecture (/academy/avalanche-l1/erc20-bridge/01-token-bridging/02-bridge-architecture) # Use a Demo Bridge (/academy/avalanche-l1/erc20-bridge/01-token-bridging/03-use-a-demo-bridge) # Bridge Hacks (/academy/avalanche-l1/erc20-bridge/01-token-bridging/04-bridge-hacks) # Avalanche Interchain Token Transfer (/academy/avalanche-l1/erc20-bridge/02-avalanche-interchain-token-transfer/01-avalanche-interchain-token-transfer) # Interchain Token Transfer Design (/academy/avalanche-l1/erc20-bridge/02-avalanche-interchain-token-transfer/02-bridge-design) # File Structure (/academy/avalanche-l1/erc20-bridge/02-avalanche-interchain-token-transfer/03-file-structure) # Token Home (/academy/avalanche-l1/erc20-bridge/02-avalanche-interchain-token-transfer/04-token-home) # Token Remote (/academy/avalanche-l1/erc20-bridge/02-avalanche-interchain-token-transfer/05-token-remote) # ERC-20 to ERC-20 Bridge (/academy/avalanche-l1/erc20-bridge/03-erc-20-to-erc-20-bridge/01-erc-20-to-erc-20-bridge) 
# Deploy an ERC-20 (/academy/avalanche-l1/erc20-bridge/03-erc-20-to-erc-20-bridge/02-deploy-erc-20-token)
# Deploy a Home Contract (/academy/avalanche-l1/erc20-bridge/03-erc-20-to-erc-20-bridge/03-deploy-home)
# Deploy a Remote Contract (/academy/avalanche-l1/erc20-bridge/03-erc-20-to-erc-20-bridge/04-deploy-remote)
# Register Remote Bridge (/academy/avalanche-l1/erc20-bridge/03-erc-20-to-erc-20-bridge/05-register-remote)
# Transfer Tokens (/academy/avalanche-l1/erc20-bridge/03-erc-20-to-erc-20-bridge/06-transfer-tokens)
# Integrate ICTT with Core (/academy/avalanche-l1/erc20-bridge/03-erc-20-to-erc-20-bridge/07-avacloud-and-core-bridge)
# Deploy Your Own ICTT Frontend (/academy/avalanche-l1/erc20-bridge/03-erc-20-to-erc-20-bridge/08-deploy-your-own-frontend)
# Overview of Multi-hop Transfers (/academy/avalanche-l1/erc20-bridge/04-tokens-on-multiple-chains/01-tokens-on-multiple-chain)
# Deploy Token Remote for Multi-hop (/academy/avalanche-l1/erc20-bridge/04-tokens-on-multiple-chains/02-deploy-token-remote)
# Register Remote Bridge (/academy/avalanche-l1/erc20-bridge/04-tokens-on-multiple-chains/03-register-remote)
# Multi-hop Transfer (/academy/avalanche-l1/erc20-bridge/04-tokens-on-multiple-chains/04-multihop)
# Origin of the EVM (/academy/avalanche-l1/customizing-evm/02-intro-to-evm/01-origin-of-evm)
# Accounts, Keys, and Addresses (/academy/avalanche-l1/customizing-evm/02-intro-to-evm/02-accounts-keys-address)
# Transactions and Blocks (/academy/avalanche-l1/customizing-evm/02-intro-to-evm/03-transactons-and-blocks)
# Different Versions of EVM (/academy/avalanche-l1/customizing-evm/02-intro-to-evm/05-different-evm-versions)
# Set Up Development Environment (/academy/avalanche-l1/customizing-evm/03-development-env-setup/00-intro)
# Create Codespaces (/academy/avalanche-l1/customizing-evm/03-development-env-setup/02-create-codespaces)
# Codespace in VS Code (/academy/avalanche-l1/customizing-evm/03-development-env-setup/03-codespace-in-vscode)
# Your Own EVM Blockchain (/academy/avalanche-l1/customizing-evm/04-your-evm-blockchain/00-intro)
# Avalanche CLI (/academy/avalanche-l1/customizing-evm/04-your-evm-blockchain/01-avalanche-cli)
# Create Your Blockchain (/academy/avalanche-l1/customizing-evm/04-your-evm-blockchain/02-create-your-blockchain)
# Sending Tokens (/academy/avalanche-l1/customizing-evm/04-your-evm-blockchain/03-sending-tokens)
# EVM Configuration (/academy/avalanche-l1/customizing-evm/05-genesis-configuration/00-vm-configuration)
# Genesis Block (/academy/avalanche-l1/customizing-evm/05-genesis-configuration/01-genesis-block)
# Create Your Genesis File (/academy/avalanche-l1/customizing-evm/05-genesis-configuration/02-create-your-genesis)
# Setup Your ChainID (/academy/avalanche-l1/customizing-evm/05-genesis-configuration/03-setup-chainid)
# Gas Fees and Gas Limit (/academy/avalanche-l1/customizing-evm/05-genesis-configuration/04-gas-fees-and-limit)
# Gas Fees Configuration (/academy/avalanche-l1/customizing-evm/05-genesis-configuration/05-gas-fee-configuration)
# Configure Gas Fees (/academy/avalanche-l1/customizing-evm/05-genesis-configuration/06-configuring-gas-fees)
# Initial Token Allocation (/academy/avalanche-l1/customizing-evm/05-genesis-configuration/07-initial-token-allocation)
# Build and Run Custom Genesis EVM (/academy/avalanche-l1/customizing-evm/05-genesis-configuration/08-build-and-run-custom-genesis-blockchain)
# What are Precompiles? (/academy/avalanche-l1/customizing-evm/06-precompiles/01-what-are-precompiles)
# Why Precompiles? (/academy/avalanche-l1/customizing-evm/06-precompiles/02-why-precompiles)
# Interact with a Precompile (/academy/avalanche-l1/customizing-evm/06-precompiles/03-interact-wtih-precompile)
# Overview (/academy/avalanche-l1/customizing-evm/07-hash-function-precompile/00-intro)
# Create an MD5 Solidity Interface (/academy/avalanche-l1/customizing-evm/07-hash-function-precompile/01-create-solidity-interface)
# Generate the Precompile (/academy/avalanche-l1/customizing-evm/07-hash-function-precompile/02-generate-the-precompile)
# Packing and Unpacking (/academy/avalanche-l1/customizing-evm/07-hash-function-precompile/03-unpack-input-pack-output)
# Implement the Precompile (/academy/avalanche-l1/customizing-evm/07-hash-function-precompile/04-implementing-precompile)
# ConfigKey, ContractAddress, and Genesis (/academy/avalanche-l1/customizing-evm/07-hash-function-precompile/05-configkey-and-contractaddr)
# Register Your Precompile (/academy/avalanche-l1/customizing-evm/07-hash-function-precompile/06-register-precompile)
# Build and Run (/academy/avalanche-l1/customizing-evm/07-hash-function-precompile/07-build-and-run)
# Interact with Precompile (/academy/avalanche-l1/customizing-evm/07-hash-function-precompile/08-interact-with-md5)
# Overview (/academy/avalanche-l1/customizing-evm/08-calculator-precompile/00-intro)
# Create Solidity Interface (/academy/avalanche-l1/customizing-evm/08-calculator-precompile/01-create-solidity-interface)
# Generating the Precompile (/academy/avalanche-l1/customizing-evm/08-calculator-precompile/02-generating-precompile)
# Unpacking Multiple Inputs & Packing Multiple Outputs (/academy/avalanche-l1/customizing-evm/08-calculator-precompile/03-unpacking-and-packing)
# Implementing Precompile (/academy/avalanche-l1/customizing-evm/08-calculator-precompile/04-implementing-precompile)
# Setting the ConfigKey & ContractAddress (/academy/avalanche-l1/customizing-evm/08-calculator-precompile/05-set-configkey-contractaddr)
# Creating Genesis Block with precompileConfig (/academy/avalanche-l1/customizing-evm/08-calculator-precompile/06-create-genesis-block)
# Testing Precompiles via Go (/academy/avalanche-l1/customizing-evm/08-calculator-precompile/07-testing-precompile)
# Modify Autogenerated Tests (/academy/avalanche-l1/customizing-evm/08-calculator-precompile/08-autogenerated-tests)
# Adding Unit Tests (/academy/avalanche-l1/customizing-evm/08-calculator-precompile/09-unit-tests)
# Adding Fuzz Tests (/academy/avalanche-l1/customizing-evm/08-calculator-precompile/10-fuzz-tests)
# Testing CalculatorPlus (/academy/avalanche-l1/customizing-evm/08-calculator-precompile/11-test-calculatorplus)
# What are Stateful Precompiles? (/academy/avalanche-l1/customizing-evm/09-stateful-precompiles/00-intro)
# Interacting with StringStore Precompile (/academy/avalanche-l1/customizing-evm/09-stateful-precompiles/01-interacting-with-precompile)
# Creating Counter Precompile (/academy/avalanche-l1/customizing-evm/10-stateful-counter-precompile/00-intro)
# Create Solidity Interface (/academy/avalanche-l1/customizing-evm/10-stateful-counter-precompile/01-create-solidity-interface)
# Store Data in EVM State (/academy/avalanche-l1/customizing-evm/10-stateful-counter-precompile/02-store-data-in-evm)
# Implementing setCounter (/academy/avalanche-l1/customizing-evm/10-stateful-counter-precompile/03-implement-set-counter)
# Read Data From EVM State (/academy/avalanche-l1/customizing-evm/10-stateful-counter-precompile/04-read-date-from-evm)
# Implementing getCounter & increment (/academy/avalanche-l1/customizing-evm/10-stateful-counter-precompile/05-implement-getcounter-increment)
# Setting Base Gas Fees of Your Precompile (/academy/avalanche-l1/customizing-evm/10-stateful-counter-precompile/06-setting-base-gasfees)
# Setting ConfigKey and ContractAddress (/academy/avalanche-l1/customizing-evm/10-stateful-counter-precompile/07-set-configkey-contractaddr)
# Initial State (/academy/avalanche-l1/customizing-evm/10-stateful-counter-precompile/08-initial-state)
# Defining Default Values via Golang (/academy/avalanche-l1/customizing-evm/10-stateful-counter-precompile/09-define-default-values-via-go)
# Defining Default Values via Genesis (/academy/avalanche-l1/customizing-evm/10-stateful-counter-precompile/10-define-default-values-via-genesis)
# Testing Your Precompile (/academy/avalanche-l1/customizing-evm/10-stateful-counter-precompile/11-testing-precompile-hardhat)
# Build Your Precompile (/academy/avalanche-l1/customizing-evm/10-stateful-counter-precompile/12-build-your-precompile)
# Interoperability between Blockchains (/academy/avalanche-l1/interchain-messaging/02-interoperability/01-interopability-between-blockchains)
# Source, Message and Destination (/academy/avalanche-l1/interchain-messaging/02-interoperability/02-source-message-destination)
# Recap of Multi-Chain Networks (/academy/avalanche-l1/interchain-messaging/02-interoperability/03-multi-chain-networks)
# Interoperability in Multi-Chain Systems (/academy/avalanche-l1/interchain-messaging/02-interoperability/04-interoperability-in-multi-chain-systems)
# Finality Importance in Interoperable Systems (/academy/avalanche-l1/interchain-messaging/02-interoperability/05-finality-and-interoperability)
# Trusted Third Parties (/academy/avalanche-l1/interchain-messaging/02-interoperability/06-trusted-third-parties)
# What is Interchain Messaging? (/academy/avalanche-l1/interchain-messaging/03-icm-protocol/01-what-is-icm)
# Recap of Bytes, Encoding and Decoding (/academy/avalanche-l1/interchain-messaging/03-icm-protocol/02-encoding-decoding)
# Sending a Message (/academy/avalanche-l1/interchain-messaging/03-icm-protocol/03-sending-a-message)
# Receiving a Message (/academy/avalanche-l1/interchain-messaging/03-icm-protocol/04-receiving-a-message)
# ICM Registry (/academy/avalanche-l1/interchain-messaging/03-icm-protocol/05-icm-registry)
# ICM Infrastructure Overview (/academy/avalanche-l1/interchain-messaging/04-icm-setup/01-overview)
# Deploy Teleporter Messenger (/academy/avalanche-l1/interchain-messaging/04-icm-setup/02-deploy-teleporter-messenger)
# Deploy Teleporter Registry (/academy/avalanche-l1/interchain-messaging/04-icm-setup/03-deploy-teleporter-registry)
# Relayer Setup (/academy/avalanche-l1/interchain-messaging/04-icm-setup/04-relayer-setup)
# Deploy ICM Demo Contracts (/academy/avalanche-l1/interchain-messaging/05-testing-icm/01-deploy-icm-demo)
# Send Your First Cross-Chain Message (/academy/avalanche-l1/interchain-messaging/05-testing-icm/02-send-a-message)
# Relayer Configuration (/academy/avalanche-l1/interchain-messaging/06-relayer-deep-dive/01-relayer-configuration)
# Restricting Relayers (/academy/avalanche-l1/interchain-messaging/06-relayer-deep-dive/02-restricting-relayers)
# Fee Data Flow (/academy/avalanche-l1/interchain-messaging/06-relayer-deep-dive/03-fee-data-flow)
# Determining the Fee (/academy/avalanche-l1/interchain-messaging/06-relayer-deep-dive/04-determining-fees)
# Avalanche Warp Messaging (/academy/avalanche-l1/interchain-messaging/08-avalanche-warp-messaging/01-avalanche-warp-messaging)
# Recap P-Chain (/academy/avalanche-l1/interchain-messaging/08-avalanche-warp-messaging/02-p-chain)
# Warp Message Format (/academy/avalanche-l1/interchain-messaging/08-avalanche-warp-messaging/03-warp-message-format)
# AWM Relayer (/academy/avalanche-l1/interchain-messaging/08-avalanche-warp-messaging/04-awm-relayer)
# Dataflow (/academy/avalanche-l1/interchain-messaging/08-avalanche-warp-messaging/05-dataflow)
# Message Pickup (/academy/avalanche-l1/interchain-messaging/08-avalanche-warp-messaging/06-message-pickup)
# Message Delivery (/academy/avalanche-l1/interchain-messaging/08-avalanche-warp-messaging/07-message-delivery)
# Load Considerations (/academy/avalanche-l1/interchain-messaging/08-avalanche-warp-messaging/08-load-considerations)
# Trust Assumptions (/academy/avalanche-l1/interchain-messaging/08-avalanche-warp-messaging/09-trust-assumption-of-awm)
# Introduction (/academy/avalanche-l1/native-token-bridge/01-erc20-to-native/01-overview)
# Get USDC & Create L1 (/academy/avalanche-l1/native-token-bridge/01-erc20-to-native/02-create-l1)
# ICM & Relayer Setup (/academy/avalanche-l1/native-token-bridge/01-erc20-to-native/03-relayer-setup)
# Deploy ERC20 Token Home (/academy/avalanche-l1/native-token-bridge/01-erc20-to-native/04-deploy-erc20-token-home)
# Deploy Native Token Remote (/academy/avalanche-l1/native-token-bridge/01-erc20-to-native/05-deploy-native-token-remote)
# Register and Collateralize (/academy/avalanche-l1/native-token-bridge/01-erc20-to-native/06-register-and-collateralize)
# Bridge Tokens (/academy/avalanche-l1/native-token-bridge/01-erc20-to-native/07-bridge-tokens)
# Introduction (/academy/avalanche-l1/native-token-bridge/02-native-to-erc20/01-overview)
# Create L1 with Custom Token (/academy/avalanche-l1/native-token-bridge/02-native-to-erc20/02-create-l1)
# Deploy Wrapped Token (/academy/avalanche-l1/native-token-bridge/02-native-to-erc20/03-deploy-wrapped-token)
# ICM & Relayer Setup (/academy/avalanche-l1/native-token-bridge/02-native-to-erc20/04-relayer-setup)
# Deploy Native Token Home (/academy/avalanche-l1/native-token-bridge/02-native-to-erc20/05-deploy-native-token-home)
# Deploy ERC20 Token Remote (/academy/avalanche-l1/native-token-bridge/02-native-to-erc20/06-deploy-erc20-token-remote)
# Register Remote (/academy/avalanche-l1/native-token-bridge/02-native-to-erc20/07-register-remote)
# Bridge Tokens (/academy/avalanche-l1/native-token-bridge/02-native-to-erc20/08-bridge-tokens)
# Introduction (/academy/avalanche-l1/native-token-bridge/03-native-to-native/01-overview)
# L1 + Native Minter (/academy/avalanche-l1/native-token-bridge/03-native-to-native/02-create-l1-with-native-minter)
# ICM & Relayer Setup (/academy/avalanche-l1/native-token-bridge/03-native-to-native/03-relayer-setup)
# Deploy Native Token Home (/academy/avalanche-l1/native-token-bridge/03-native-to-native/04-deploy-native-token-home)
# Deploy Native Token Remote (/academy/avalanche-l1/native-token-bridge/03-native-to-native/05-deploy-native-token-remote)
# Register and Collateralize (/academy/avalanche-l1/native-token-bridge/03-native-to-native/06-register-and-collateralize)
# Bridge Tokens (/academy/avalanche-l1/native-token-bridge/03-native-to-native/07-bridge-native-tokens)
# P-Chain (/academy/avalanche-l1/permissionless-l1s/01-review/01-pchain-review)
# Multi-Chain Architecture (/academy/avalanche-l1/permissionless-l1s/01-review/02-multi-chain-review)
# Permissioned L1s Review (/academy/avalanche-l1/permissionless-l1s/01-review/03-permissioned-l1s-review)
# Native Tokenomics Review (/academy/avalanche-l1/permissionless-l1s/01-review/04-native-tokenomics-review)
# VMC Deployment Options (/academy/avalanche-l1/permissionless-l1s/01-review/05-vmc-deployment-options)
# Introduction (/academy/avalanche-l1/permissionless-l1s/02-proof-of-stake/01-introduction)
# Staking Token Selection (/academy/avalanche-l1/permissionless-l1s/02-proof-of-stake/02-staking-token)
# Liquid Staking (/academy/avalanche-l1/permissionless-l1s/02-proof-of-stake/03-liquid-staking)
# Introduction (/academy/avalanche-l1/permissionless-l1s/03-transformation-requirements/01-introduction)
# Native Minter with PoS (/academy/avalanche-l1/permissionless-l1s/03-transformation-requirements/02-native-minter)
# Reward Manager (/academy/avalanche-l1/permissionless-l1s/03-transformation-requirements/03-reward-manger)
# Create Your L1 (/academy/avalanche-l1/permissionless-l1s/04-speedrun-base-l1/01-create-l1-speedrun)
# Setup Permissioned L1 (/academy/avalanche-l1/permissionless-l1s/04-speedrun-base-l1/02-permissioned-l1-speedrun)
# Validator Manager Contract (/academy/avalanche-l1/permissionless-l1s/05-staking-manager-setup/01-introduction)
# Deployment Flow Overview (/academy/avalanche-l1/permissionless-l1s/05-staking-manager-setup/02-deployment-overview)
# NativeTokenStakingManager (/academy/avalanche-l1/permissionless-l1s/05-staking-manager-setup/03-deploy-staking-manager)
# Example Reward Calculator (/academy/avalanche-l1/permissionless-l1s/05-staking-manager-setup/04-deploy-reward-calculator)
# Initialize Staking Manager (/academy/avalanche-l1/permissionless-l1s/05-staking-manager-setup/05-initialize-staking-manager)
# StakingManager Minting Rights (/academy/avalanche-l1/permissionless-l1s/05-staking-manager-setup/06-enable-native-minter)
# Transfer Ownership (/academy/avalanche-l1/permissionless-l1s/05-staking-manager-setup/07-transfer-ownership)
# Introduction (/academy/avalanche-l1/permissionless-l1s/06-staking-manager-operations/01-introduction)
# Query PoS Validator Set (/academy/avalanche-l1/permissionless-l1s/06-staking-manager-operations/02-query-pos-validator-set)
# Register Validator (/academy/avalanche-l1/permissionless-l1s/06-staking-manager-operations/03-register-validator)
# Register Validator Demo (/academy/avalanche-l1/permissionless-l1s/06-staking-manager-operations/04-register-validator-demo)
# Delegate to Validator (/academy/avalanche-l1/permissionless-l1s/06-staking-manager-operations/05-delegate-to-validator)
# Delegate to Validator Demo (/academy/avalanche-l1/permissionless-l1s/06-staking-manager-operations/06-delegate-to-validator-demo)
# Remove Validator (/academy/avalanche-l1/permissionless-l1s/06-staking-manager-operations/07-remove-validator)
# Remove Validator Demo (/academy/avalanche-l1/permissionless-l1s/06-staking-manager-operations/08-remove-validator-demo)
# Remove Delegation (/academy/avalanche-l1/permissionless-l1s/06-staking-manager-operations/09-remove-delegation)
# Remove Delegation Demo (/academy/avalanche-l1/permissionless-l1s/06-staking-manager-operations/10-remove-delegation-demo)
# Introduction (/academy/avalanche-l1/l1-native-tokenomics/01-tokens-fundamentals/01-introduction)
# Token Design (/academy/avalanche-l1/l1-native-tokenomics/01-tokens-fundamentals/02-token-design)
# Native Tokens (/academy/avalanche-l1/l1-native-tokenomics/01-tokens-fundamentals/03-native-tokens)
# Transfer Native Tokens (/academy/avalanche-l1/l1-native-tokenomics/01-tokens-fundamentals/04-transfer-native-token)
# ERC-20 Tokens (/academy/avalanche-l1/l1-native-tokenomics/01-tokens-fundamentals/05-erc20)
# Deploy an ERC-20 (/academy/avalanche-l1/l1-native-tokenomics/01-tokens-fundamentals/06-deploy-erc20)
# Key Differences (/academy/avalanche-l1/l1-native-tokenomics/01b-native-vs-erc20/08-native-and-erc20-tokens)
# Wrapped Native Tokens (/academy/avalanche-l1/l1-native-tokenomics/01b-native-vs-erc20/09-wrapped-tokens)
# Deploy a Wrapped Token (/academy/avalanche-l1/l1-native-tokenomics/01b-native-vs-erc20/10-deploy-wrapped-tokens)
# Introduction (/academy/avalanche-l1/l1-native-tokenomics/02-custom-tokens/01-introduction)
# Custom Token vs ERC20 (/academy/avalanche-l1/l1-native-tokenomics/02-custom-tokens/02-custom-native-vs-erc20-native)
# Native and Staking Tokens (/academy/avalanche-l1/l1-native-tokenomics/02-custom-tokens/03-native-vs-staking)
# Token Symbol (/academy/avalanche-l1/l1-native-tokenomics/02-custom-tokens/04-token-symbol)
# Create Custom Token (/academy/avalanche-l1/l1-native-tokenomics/02-custom-tokens/05-create-custom-token)
# Native Token Allocation (/academy/avalanche-l1/l1-native-tokenomics/02-custom-tokens/06-native-token-allocation)
# Configure Native Token Allocation (/academy/avalanche-l1/l1-native-tokenomics/02-custom-tokens/07-configure-token-allocation)
# Introduction (/academy/avalanche-l1/l1-native-tokenomics/03-precompiles/01-introduction)
# Stateful Precompiles (/academy/avalanche-l1/l1-native-tokenomics/03-precompiles/02-precompiles)
# Protocol Integration (/academy/avalanche-l1/l1-native-tokenomics/03-precompiles/03-precompile-architecture)
# Introduction (/academy/avalanche-l1/l1-native-tokenomics/04-native-minter/01-introduction)
# Native Token Minting Rights (/academy/avalanche-l1/l1-native-tokenomics/04-native-minter/02-native-token-minting-rights)
# Activate Native Minter (/academy/avalanche-l1/l1-native-tokenomics/04-native-minter/03-activate-native-minter)
# Introduction (/academy/avalanche-l1/l1-native-tokenomics/05-fee-config/01-introduction)
# Transaction Fees (/academy/avalanche-l1/l1-native-tokenomics/05-fee-config/02-transaction-fees)
# Activate Fee Config (/academy/avalanche-l1/l1-native-tokenomics/05-fee-config/03-activate-fee-config)
# Initial Allocation (/academy/avalanche-l1/l1-native-tokenomics/07-token-distribution/01-initial-allocation)
# Vesting Schedules (/academy/avalanche-l1/l1-native-tokenomics/07-token-distribution/02-vesting-schedules)
# Bonding Curves (/academy/avalanche-l1/l1-native-tokenomics/07-token-distribution/03-bonding-curves)
# Airdrops (/academy/avalanche-l1/l1-native-tokenomics/07-token-distribution/04-airdrop)
# Introduction (/academy/avalanche-l1/l1-native-tokenomics/08-governance/01-introduction)
# Governance Models (/academy/avalanche-l1/l1-native-tokenomics/08-governance/02-governance-models)
# DAOs (/academy/avalanche-l1/l1-native-tokenomics/08-governance/03-daos)
# Quadratic Voting (/academy/avalanche-l1/l1-native-tokenomics/08-governance/04-quadratic-voting)
# Governance 2.0 (/academy/avalanche-l1/l1-native-tokenomics/08-governance/05-governance-20)
# P-Chain (/academy/avalanche-l1/permissioned-l1s/01-introduction/01-pchain-review)
# Multi-Chain Architecture (/academy/avalanche-l1/permissioned-l1s/01-introduction/02-multi-chain-review)
# ICM for Validator Manager (/academy/avalanche-l1/permissioned-l1s/01-introduction/03-interop)
# Proxy Patterns (/academy/avalanche-l1/permissioned-l1s/01-introduction/04-proxy-pattern)
# Blockchain Permissioning (/academy/avalanche-l1/permissioned-l1s/02-proof-of-authority/01-blockchain-permissioning)
# Proof of Authority (/academy/avalanche-l1/permissioned-l1s/02-proof-of-authority/02-proof-of-authority)
# Validator Manager Contract (/academy/avalanche-l1/permissioned-l1s/02-proof-of-authority/03-validator-manager-contract)
# VMC Options (/academy/avalanche-l1/permissioned-l1s/02-proof-of-authority/04-where-should-you-deploy-your-validator-manager-contract)
# Etna Upgrade (optional) (/academy/avalanche-l1/permissioned-l1s/02-proof-of-authority/05-etna-upgrade)
# Subnet Creation (/academy/avalanche-l1/permissioned-l1s/03-create-an-L1/01-create-subnet)
# Transparent Proxy (/academy/avalanche-l1/permissioned-l1s/03-create-an-L1/02-transparent-proxy)
# Genesis Breakdown (/academy/avalanche-l1/permissioned-l1s/03-create-an-L1/03-genesis-breakdown)
# Convert Subnet to L1 (/academy/avalanche-l1/permissioned-l1s/03-create-an-L1/04-convert-subnet-to-l1)
# Query L1 Details (/academy/avalanche-l1/permissioned-l1s/03-create-an-L1/05-query-l1-details)
# Introduction (/academy/avalanche-l1/permissioned-l1s/04-validator-manager-deployment/01-vmc-deployment-intro)
# Deploy Validator Manager (/academy/avalanche-l1/permissioned-l1s/04-validator-manager-deployment/02-deploy-validator-manager)
# Upgrade Proxy (/academy/avalanche-l1/permissioned-l1s/04-validator-manager-deployment/03-upgrade-proxy)
# Set Initial Configuration (/academy/avalanche-l1/permissioned-l1s/04-validator-manager-deployment/04-set-initial-configuration)
# Initialize Validator Set (/academy/avalanche-l1/permissioned-l1s/04-validator-manager-deployment/05-initialize-validator-set)
# Read Deployed VMC (/academy/avalanche-l1/permissioned-l1s/04-validator-manager-deployment/06-check-contract)
# Introduction (/academy/avalanche-l1/permissioned-l1s/05-validator-manager-operations/01-introduction)
# Query the Validator Set (/academy/avalanche-l1/permissioned-l1s/05-validator-manager-operations/02-query-validator-set)
# Add Validator (/academy/avalanche-l1/permissioned-l1s/05-validator-manager-operations/03-add-validator)
# Add Validator Demo (/academy/avalanche-l1/permissioned-l1s/05-validator-manager-operations/04-add-validator-demo)
# Changing Weight (/academy/avalanche-l1/permissioned-l1s/05-validator-manager-operations/05-change-weight)
# Change Weight Demo (/academy/avalanche-l1/permissioned-l1s/05-validator-manager-operations/06-change-weight-demo)
# Removing Validator (/academy/avalanche-l1/permissioned-l1s/05-validator-manager-operations/07-remove-validator)
# Remove Validator Demo (/academy/avalanche-l1/permissioned-l1s/05-validator-manager-operations/08-remove-validator-demo)
# Remove Expired Validator Registration (/academy/avalanche-l1/permissioned-l1s/05-validator-manager-operations/09-expired-validator-registration)
# P-Chain Disable Validator (/academy/avalanche-l1/permissioned-l1s/05-validator-manager-operations/10-pchain-disable-validator)
# PoA Manager Contract (/academy/avalanche-l1/permissioned-l1s/06-multisig-setup/01-poa-manager)
# Create L1 with VMC on C-Chain (/academy/avalanche-l1/permissioned-l1s/06-multisig-setup/02-create-l1-cchain-vmc)
# Creating a Safe/Ash Wallet (/academy/avalanche-l1/permissioned-l1s/06-multisig-setup/03-create-safe-wallet)
# Deploy PoA Manager (/academy/avalanche-l1/permissioned-l1s/06-multisig-setup/04-deploy-poa-manager)
# Transfer VMC Ownership to PoA Manager (/academy/avalanche-l1/permissioned-l1s/06-multisig-setup/05-transfer-vmc-ownership)
# Read PoA Manager Contract (/academy/avalanche-l1/permissioned-l1s/06-multisig-setup/06-read-poa-manager)
# Introduction (/academy/avalanche-l1/permissioned-l1s/07-poa-operations/01-introduction)
# Add Validator via Multi-Sig (/academy/avalanche-l1/permissioned-l1s/07-poa-operations/02-add-validator)
# Change Weight via Multi-Sig (/academy/avalanche-l1/permissioned-l1s/07-poa-operations/03-change-weight)
# Remove Validator via Multi-Sig (/academy/avalanche-l1/permissioned-l1s/07-poa-operations/04-remove-validator)
# What is a Blockchain? (/academy/blockchain/blockchain-fundamentals/02-what-is-a-blockchain/01-what-is-a-blockchain)
# Decentralized Computer (/academy/blockchain/blockchain-fundamentals/02-what-is-a-blockchain/02-decentralized-computer)
# Decentralized Applications (/academy/blockchain/blockchain-fundamentals/02-what-is-a-blockchain/03-decentralized-applications)
# Use Cases (/academy/blockchain/blockchain-fundamentals/02-what-is-a-blockchain/04-use-cases)
# What is a Blockchain? (/academy/blockchain/blockchain-fundamentals/02-what-is-a-blockchain)
# Payments Use Case (/academy/blockchain/blockchain-fundamentals/03-payments-use-case/01-payments-use-case)
# Account Balances & Transfers (/academy/blockchain/blockchain-fundamentals/03-payments-use-case/02-account-balances-transfers)
# Ledger (/academy/blockchain/blockchain-fundamentals/03-payments-use-case/03-ledger)
# Signatures (/academy/blockchain/blockchain-fundamentals/04-signatures/01-signatures)
# Transaction Ordering through Consensus (/academy/blockchain/blockchain-fundamentals/04-tx-ordering-through-consensus/01-tx-ordering-through-consensus)
# Longest Chain Consensus (/academy/blockchain/blockchain-fundamentals/04-tx-ordering-through-consensus/02-longest-chain-consensus)
# Transaction Lifecycle (/academy/blockchain/blockchain-fundamentals/04-tx-ordering-through-consensus/xx-tx-lifecycle)
# Sybil Protection (/academy/blockchain/blockchain-fundamentals/05-sybil-protection/01-sybil-protection)
# Smart Contracts (/academy/blockchain/blockchain-fundamentals/06-smart-contracts/01-smart-contracts)
# Native Tokens (/academy/blockchain/blockchain-fundamentals/07-independent-tokenomics/01-native-tokens)
# ERC-20 Tokens (/academy/blockchain/blockchain-fundamentals/07-independent-tokenomics/02-erc-20-tokens)
# Deploy and Transfer an ERC-20 Token (/academy/blockchain/blockchain-fundamentals/07-independent-tokenomics/03-deploy-and-transfer-erc-20-tokens)
# Wrapped Native Tokens (/academy/blockchain/blockchain-fundamentals/07-independent-tokenomics/04-wrapped-tokens)
# Deploy and Interact with Wrapped Token (/academy/blockchain/blockchain-fundamentals/07-independent-tokenomics/05-deploy-and-interact-wrapped-tokens)
# Token Decimals (/academy/blockchain/blockchain-fundamentals/07-independent-tokenomics/06-token-decimals)
# Virtual Machines and Blockchains (/academy/blockchain/blockchain-fundamentals/08-vms-and-blockchains/01-vms-and-blockchains)
# What is a State Machine? (/academy/blockchain/blockchain-fundamentals/08-vms-and-blockchains/02-state-machine)
# Blockchains (/academy/blockchain/blockchain-fundamentals/08-vms-and-blockchains/03-blockchains)
# Variety of Virtual Machines (/academy/blockchain/blockchain-fundamentals/08-vms-and-blockchains/04-variety-of-vm)
# Why Test Networks Like Fuji Exist, and Who They're For (/academy/blockchain/blockchain-fundamentals/08-vms-and-blockchains/05-testnets)
# Building Programs on Blockchain (/academy/blockchain/solidity-foundry/03-smart-contracts/01-building-programs-on-blockchain)
# What is Solidity (/academy/blockchain/solidity-foundry/03-smart-contracts/02-what-is-solidity)
# Foundry Quickstart (/academy/blockchain/solidity-foundry/03-smart-contracts/03-foundry-quickstart)
# Create a New Smart Contract (/academy/blockchain/solidity-foundry/03-smart-contracts/04-create-new-smart-contract)
# Smart Contracts (/academy/blockchain/solidity-foundry/03-smart-contracts)
# Hello World! Part 1 (/academy/blockchain/solidity-foundry/04-hello-world-part-1/01-intro)
# Primitive Values and Types (/academy/blockchain/solidity-foundry/04-hello-world-part-1/02-primitive-value-and-types)
# Functions (/academy/blockchain/solidity-foundry/04-hello-world-part-1/03-functions)
# Contracts (/academy/blockchain/solidity-foundry/04-hello-world-part-1/04-contracts)
# Solidity File Structure (/academy/blockchain/solidity-foundry/04-hello-world-part-1/05-solidity-file-structure)
# Build Basic Smart Contract (/academy/blockchain/solidity-foundry/04-hello-world-part-1/06-build-basic-smart-contract)
# Hello World! Part 2 (/academy/blockchain/solidity-foundry/05-hello-world-part-2/01-intro)
# Control Flow (/academy/blockchain/solidity-foundry/05-hello-world-part-2/02-control-flow)
# Data Structures (/academy/blockchain/solidity-foundry/05-hello-world-part-2/03-data-structures)
# Contract Constructor (/academy/blockchain/solidity-foundry/05-hello-world-part-2/04-contract-constructor)
# Modifiers (/academy/blockchain/solidity-foundry/05-hello-world-part-2/05-modifiers)
# Events (/academy/blockchain/solidity-foundry/05-hello-world-part-2/06-events)
# Avalanche Starter Kit (/academy/blockchain/solidity-foundry/02-avalanche-starter-kit/01-avalanche-starter-kit)
# Set Up Avalanche Starter Kit (/academy/blockchain/solidity-foundry/02-avalanche-starter-kit/02-set-up)
# Close and Reopen Codespace (/academy/blockchain/solidity-foundry/02-avalanche-starter-kit/03-close-and-reopen-codespace)
# Create a Blockchain (/academy/blockchain/solidity-foundry/02-avalanche-starter-kit/03-create-blockchain)
# Networks (/academy/blockchain/solidity-foundry/02-avalanche-starter-kit/04-networks)
# Pause and Resume (/academy/blockchain/solidity-foundry/02-avalanche-starter-kit/05-pause-and-resume)
# Contract Standardization (/academy/blockchain/solidity-foundry/06-contract-standarization/01-contract-standarization)
# Inheritance (/academy/blockchain/solidity-foundry/06-contract-standarization/02-inheritance)
# Interfaces (/academy/blockchain/solidity-foundry/06-contract-standarization/03-interfaces)
# Abstract Contracts (/academy/blockchain/solidity-foundry/06-contract-standarization/04-abstract)
# Intro to ERC-20 Tokens (/academy/blockchain/solidity-foundry/07-erc20-smart-contracts/01-erc20-intro)
# ERC20 Technical Walkthrough (/academy/blockchain/solidity-foundry/07-erc20-smart-contracts/02-technical-walkthrough)
# Interacting with ERC20 Tokens (/academy/blockchain/solidity-foundry/07-erc20-smart-contracts/03-interacting-with-erc20-tokens)
# Deploying Your Own ERC20 Token (/academy/blockchain/solidity-foundry/07-erc20-smart-contracts/04-deploying-your-erc20-token)
# ERC-20 Smart Contracts (/academy/blockchain/solidity-foundry/07-erc20-smart-contracts)
# Standards (/academy/blockchain/encrypted-erc/01-what-is-a-digital-asset/01-digital-assets-overview)
# Current Uses in Blockchain (/academy/blockchain/encrypted-erc/01-what-is-a-digital-asset/02-current-uses)
# Limitations of Current Standards (/academy/blockchain/encrypted-erc/01-what-is-a-digital-asset/03-limitations)
# How Private is the Blockchain? (/academy/blockchain/encrypted-erc/02-real-privacy/01-how-private-is-the-blockchain)
# Compliance (/academy/blockchain/encrypted-erc/02-real-privacy/02-compliance)
# Necessities Solved with Privacy (/academy/blockchain/encrypted-erc/02-real-privacy/03-necessities-solved-with-privacy)
# What Kind of Privacy Does eERC Provide? (/academy/blockchain/encrypted-erc/03-encrypted-tokens/01-privacity-eerc)
# ERC-20 vs eERC (/academy/blockchain/encrypted-erc/03-encrypted-tokens/02-comparison)
# Technology Behind eERC (/academy/blockchain/encrypted-erc/03-encrypted-tokens/03-technology-behind)
# Deployment Modes of eERC (/academy/blockchain/encrypted-erc/04-usability-eerc/01-deployments-mode)
# Use Cases of eERC (/academy/blockchain/encrypted-erc/04-usability-eerc/02-use-cases)
# eERC Contracts Flow (/academy/blockchain/encrypted-erc/05-eerc-contracts-flow/01-step-by-step)
# What is x402? (/academy/blockchain/x402-payment-infrastructure/02-introduction/01-what-is-x402)
# The Traditional Payment Problem (/academy/blockchain/x402-payment-infrastructure/02-introduction/02-payment-problem)
# The AI Agent Payment Problem (/academy/blockchain/x402-payment-infrastructure/02-introduction/03-ai-agent-problem)
# x402 Use Cases (/academy/blockchain/x402-payment-infrastructure/02-introduction/04-use-cases)
# x402 Payment Flow (/academy/blockchain/x402-payment-infrastructure/03-technical-architecture/01-payment-flow)
# HTTP 402 Payment Required (/academy/blockchain/x402-payment-infrastructure/03-technical-architecture/02-http-payment-required)
# X-PAYMENT (/academy/blockchain/x402-payment-infrastructure/03-technical-architecture/03-x-payment-header)
# X-PAYMENT-RESPONSE (/academy/blockchain/x402-payment-infrastructure/03-technical-architecture/04-x-payment-response-header)
# The Facilitator Role (/academy/blockchain/x402-payment-infrastructure/03-technical-architecture/05-facilitator-role)
# Blockchain Settlement (/academy/blockchain/x402-payment-infrastructure/03-technical-architecture/06-blockchain-settlement)
# Security Considerations (/academy/blockchain/x402-payment-infrastructure/03-technical-architecture/07-security-considerations)
# Why Avalanche for x402? (/academy/blockchain/x402-payment-infrastructure/04-x402-on-avalanche/01-why-avalanche)
# Network Setup (/academy/blockchain/x402-payment-infrastructure/04-x402-on-avalanche/02-network-setup)
# x402 Facilitators on Avalanche (/academy/blockchain/x402-payment-infrastructure/04-x402-on-avalanche/03-facilitators)
# Setting Up Your x402 Development Environment (/academy/blockchain/x402-payment-infrastructure/05-hands-on-implementation/01-environment-setup)
# Making Your First x402 Payment (/academy/blockchain/x402-payment-infrastructure/05-hands-on-implementation/02-first-payment)
# Understanding the Implementation (/academy/blockchain/x402-payment-infrastructure/05-hands-on-implementation/03-understanding-implementation)
# AI Agent Mode Overview (/academy/blockchain/x402-payment-infrastructure/05-hands-on-implementation/04-ai-agent-mode)
# Token-Based AI Chat (/academy/blockchain/x402-payment-infrastructure/05-hands-on-implementation/05-token-based-ai-chat)
# Autonomous AI Agent (/academy/blockchain/x402-payment-infrastructure/05-hands-on-implementation/06-autonomous-agent)
# Module 1A (/academy/entrepreneur/foundations-web3-venture/01-legal-foundations/01-introduction)
# Choosing Your Legal Jurisdiction (/academy/entrepreneur/foundations-web3-venture/01-legal-foundations/02-legal-jurisdiction)
# Token Classification - Utility vs Security (/academy/entrepreneur/foundations-web3-venture/01-legal-foundations/03-token-classification)
# Essential Legal Documentation (/academy/entrepreneur/foundations-web3-venture/01-legal-foundations/04-legal-documentation)
# Intellectual Property (IP) Protection (/academy/entrepreneur/foundations-web3-venture/01-legal-foundations/05-intellectual-property)
# Ongoing Compliance Requirements (/academy/entrepreneur/foundations-web3-venture/01-legal-foundations/06-ongoing-compliance)
# Applying Legal Frameworks to Your Web3 Startup (/academy/entrepreneur/foundations-web3-venture/01-legal-foundations/07-applying-frameworks)
# Key Take-Aways
(/academy/entrepreneur/foundations-web3-venture/01-legal-foundations/08-key-takeaways) # Flashcards (/academy/entrepreneur/foundations-web3-venture/01-legal-foundations/09-flashcards) # Test Your Knowledge (/academy/entrepreneur/foundations-web3-venture/01-legal-foundations/10-quiz) # Module 1B (/academy/entrepreneur/foundations-web3-venture/01b-security-fundamentals/01-introduction) # Assessing Your Project's Security Readiness (/academy/entrepreneur/foundations-web3-venture/01b-security-fundamentals/02-assessing-security) # Building Strong IT Governance (/academy/entrepreneur/foundations-web3-venture/01b-security-fundamentals/03-it-governance) # Secure Management of Cryptographic Assets (/academy/entrepreneur/foundations-web3-venture/01b-security-fundamentals/04-cryptographic-assets) # Data Strategy and Privacy (/academy/entrepreneur/foundations-web3-venture/01b-security-fundamentals/05-data-strategy) # Key Take-Aways (/academy/entrepreneur/foundations-web3-venture/01b-security-fundamentals/06-key-takeaways) # Flashcards (/academy/entrepreneur/foundations-web3-venture/01b-security-fundamentals/07-flashcards) # Test Your Knowledge (/academy/entrepreneur/foundations-web3-venture/01b-security-fundamentals/08-quiz) # Module 2 (/academy/entrepreneur/foundations-web3-venture/02-business-model-canvas/01-introduction) # Understanding the Business Model Canvas for Web3 (/academy/entrepreneur/foundations-web3-venture/02-business-model-canvas/02-understanding-canvas) # Applying the Canvas to Your Web3 Startup (/academy/entrepreneur/foundations-web3-venture/02-business-model-canvas/03-applying-canvas) # Key Take-Aways (/academy/entrepreneur/foundations-web3-venture/02-business-model-canvas/04-key-takeaways) # Flashcards (/academy/entrepreneur/foundations-web3-venture/02-business-model-canvas/05-flashcards) # Test Your Knowledge (/academy/entrepreneur/foundations-web3-venture/02-business-model-canvas/06-quiz) # Additional Learning 
(/academy/entrepreneur/foundations-web3-venture/02-business-model-canvas/07-additional-learning) # Module 3 (/academy/entrepreneur/foundations-web3-venture/03-user-personas/01-introduction) # Finding Your First Web3 Users (/academy/entrepreneur/foundations-web3-venture/03-user-personas/02-finding-first-users) # Asking the Right Questions (/academy/entrepreneur/foundations-web3-venture/03-user-personas/03-asking-right-questions) # Why User Engagement Matters in Web3 (/academy/entrepreneur/foundations-web3-venture/03-user-personas/04-why-engagement-matters) # Implementing a User Engagement Strategy (/academy/entrepreneur/foundations-web3-venture/03-user-personas/05-engagement-strategy) # Key Take-Aways (/academy/entrepreneur/foundations-web3-venture/03-user-personas/06-key-takeaways) # Flashcards (/academy/entrepreneur/foundations-web3-venture/03-user-personas/07-flashcards) # Test Your Knowledge (/academy/entrepreneur/foundations-web3-venture/03-user-personas/08-quiz) # Additional Learning (/academy/entrepreneur/foundations-web3-venture/03-user-personas/09-additional-learning) # Module 9 (/academy/entrepreneur/fundraising-finance/09-fundraising/01-introduction) # The Fundraising Process - A Strategic Approach (/academy/entrepreneur/fundraising-finance/09-fundraising/02-fundraising-process) # Why Effective Fundraising Matters for Web3 Startups (/academy/entrepreneur/fundraising-finance/09-fundraising/03-why-fundraising-matters) # Implementing Your Fundraising Strategy (/academy/entrepreneur/fundraising-finance/09-fundraising/04-implementing-strategy) # Key Take-Aways (/academy/entrepreneur/fundraising-finance/09-fundraising/05-key-takeaways) # Additional Learning (/academy/entrepreneur/fundraising-finance/09-fundraising/06-additional-learning) # Flashcards (/academy/entrepreneur/fundraising-finance/09-fundraising/07-flashcards) # Test Your Knowledge (/academy/entrepreneur/fundraising-finance/09-fundraising/08-quiz) # Prepare for your Mock Pitch 
(/academy/entrepreneur/fundraising-finance/09-fundraising/09-prepare-mock-pitch) # Module 10 (/academy/entrepreneur/fundraising-finance/10-grants/01-introduction) # Successful Grant Applications (/academy/entrepreneur/fundraising-finance/10-grants/02-success-criteria) # Common Mistakes (/academy/entrepreneur/fundraising-finance/10-grants/03-common-mistakes) # Maximizing Support (/academy/entrepreneur/fundraising-finance/10-grants/04-beyond-funding) # Key Take-Aways (/academy/entrepreneur/fundraising-finance/10-grants/05-key-takeaways) # Flashcards (/academy/entrepreneur/fundraising-finance/10-grants/06-flashcards) # Test Your Knowledge (/academy/entrepreneur/fundraising-finance/10-grants/07-quiz) # Module 11 (/academy/entrepreneur/fundraising-finance/11-pitching/01-introduction) # Effective Pitch Storytelling (/academy/entrepreneur/fundraising-finance/11-pitching/02-effective-pitch-storytelling) # Building Credibility (/academy/entrepreneur/fundraising-finance/11-pitching/03-building-credibility) # Advanced Techniques (/academy/entrepreneur/fundraising-finance/11-pitching/04-advanced-techniques) # Prepare for Pitch Success (/academy/entrepreneur/fundraising-finance/11-pitching/05-pitch-success) # Why This Matters (/academy/entrepreneur/fundraising-finance/11-pitching/06-why-it-matters) # Applying Concepts (/academy/entrepreneur/fundraising-finance/11-pitching/07-applying) # Key Take-Aways (/academy/entrepreneur/fundraising-finance/11-pitching/08-key-takeways) # Flashcards (/academy/entrepreneur/fundraising-finance/11-pitching/09-flashcards) # Test Your Knowledge (/academy/entrepreneur/fundraising-finance/11-pitching/10-quiz) # Module 4 (/academy/entrepreneur/go-to-market/04-gtm-strategies/01-introduction) # Building a Structured GTM Framework (/academy/entrepreneur/go-to-market/04-gtm-strategies/02-structured-framework) # Web3-Specific Market Considerations (/academy/entrepreneur/go-to-market/04-gtm-strategies/03-web3-market-considerations) # Optimizing Your GTM 
Execution (/academy/entrepreneur/go-to-market/04-gtm-strategies/04-optimizing-execution) # Applying GTM Principles to Your Startup (/academy/entrepreneur/go-to-market/04-gtm-strategies/05-applying-principles) # Key Take-Aways (/academy/entrepreneur/go-to-market/04-gtm-strategies/06-key-takeaways) # Flashcards (/academy/entrepreneur/go-to-market/04-gtm-strategies/07-flashcards) # Test Your Knowledge (/academy/entrepreneur/go-to-market/04-gtm-strategies/08-quiz) # Module 5 (/academy/entrepreneur/go-to-market/05-sales/01-introduction) # Defining Your Value Proposition (/academy/entrepreneur/go-to-market/05-sales/02-defining-value-proposition) # Understanding Your Buyer (/academy/entrepreneur/go-to-market/05-sales/03-understanding-your-buyer) # Selling During the MVP Stage (/academy/entrepreneur/go-to-market/05-sales/04-selling-during-mvp) # Managing First Customer Relationships (/academy/entrepreneur/go-to-market/05-sales/05-managing-first-customers) # Effective Outreach and Sales Calls (/academy/entrepreneur/go-to-market/05-sales/06-outreach-and-sales-calls) # Managing Leads and Follow-ups (/academy/entrepreneur/go-to-market/05-sales/07-managing-leads) # Expanding Your Sales Team (/academy/entrepreneur/go-to-market/05-sales/08-expanding-sales-team) # Key Take-Aways (/academy/entrepreneur/go-to-market/05-sales/09-key-takeaways) # Flashcards (/academy/entrepreneur/go-to-market/05-sales/10-flashcards) # Test Your Knowledge (/academy/entrepreneur/go-to-market/05-sales/11-quiz) # Additional Learning (/academy/entrepreneur/go-to-market/05-sales/12-additional-learning) # Module 6 (/academy/entrepreneur/go-to-market/06-pricing/01-introduction) # Understanding Web3 Pricing Fundamentals (/academy/entrepreneur/go-to-market/06-pricing/02-understanding-web3-pricing-fundamentals) # Building Your Pricing Framework (/academy/entrepreneur/go-to-market/06-pricing/03-building-your-pricing-framework) # Selecting the Right Pricing Model 
(/academy/entrepreneur/go-to-market/06-pricing/04-selecting-the-right-pricing-model) # Cost Estimation in Decentralized Systems (/academy/entrepreneur/go-to-market/06-pricing/05-cost-estimation-in-decentralized-systems) # Regional Adaptation Strategies (/academy/entrepreneur/go-to-market/06-pricing/06-regional-adaptation-strategies) # Enterprise Client Approach (/academy/entrepreneur/go-to-market/06-pricing/07-enterprise-client-approach) # Key Take-Aways (/academy/entrepreneur/go-to-market/06-pricing/08-key-takeaways) # Flashcards (/academy/entrepreneur/go-to-market/06-pricing/09-flashcards) # Test Your Knowledge (/academy/entrepreneur/go-to-market/06-pricing/10-quiz) # Module 7 (/academy/entrepreneur/web3-community-architect/07-community-building/01-introduction) # Understanding Your Foundation (/academy/entrepreneur/web3-community-architect/07-community-building/02-understanding-your-foundation) # The Social Media Strategy Blueprint (/academy/entrepreneur/web3-community-architect/07-community-building/03-social-media-strategy-blueprint) # Content Creation Without a Full Team (/academy/entrepreneur/web3-community-architect/07-community-building/04-content-creation-without-full-team) # Scalable Content Strategies (/academy/entrepreneur/web3-community-architect/07-community-building/05-scalable-content-strategies) # Content Iteration and Community Observation (/academy/entrepreneur/web3-community-architect/07-community-building/06-content-iteration-and-observation) # Building Authentic Community Engagement (/academy/entrepreneur/web3-community-architect/07-community-building/07-building-authentic-engagement) # Common Community Building Mistakes (/academy/entrepreneur/web3-community-architect/07-community-building/08-common-community-mistakes) # Event Planning for Community Building (/academy/entrepreneur/web3-community-architect/07-community-building/09-event-planning-for-community) # Key Take-Aways 
(/academy/entrepreneur/web3-community-architect/07-community-building/10-key-takeaways) # Additional Resources (/academy/entrepreneur/web3-community-architect/07-community-building/11-additional-learning) # Flashcards (/academy/entrepreneur/web3-community-architect/07-community-building/12-flashcards) # Test Your Knowledge (/academy/entrepreneur/web3-community-architect/07-community-building/13-quiz) # Module 8 (/academy/entrepreneur/web3-community-architect/08-mastering-tokenomics/01-introduction) # Introduction to Tokenomics (/academy/entrepreneur/web3-community-architect/08-mastering-tokenomics/02-tokenomics) # Token Mechanics for Ecosystem Growth (/academy/entrepreneur/web3-community-architect/08-mastering-tokenomics/03-token-mechanics) # Pitfalls and Real-World Examples (/academy/entrepreneur/web3-community-architect/08-mastering-tokenomics/04-pitfalls) # Why Tokenomics Matters for Web3 Startups (/academy/entrepreneur/web3-community-architect/08-mastering-tokenomics/05-why-they-matter) # Applying Tokenomics to Your Startup (/academy/entrepreneur/web3-community-architect/08-mastering-tokenomics/06-applying-startup) # Key Take-Aways (/academy/entrepreneur/web3-community-architect/08-mastering-tokenomics/07-key-takeaway) # Flashcards (/academy/entrepreneur/web3-community-architect/08-mastering-tokenomics/08-flashcard) # Test Your Knowledge (/academy/entrepreneur/web3-community-architect/08-mastering-tokenomics/09-quiz) # Additional Resources (/academy/entrepreneur/web3-community-architect/08-mastering-tokenomics/10-additional-learning) # 0xGasless (/integrations/0xgasless) --- title: 0xGasless category: [ 'x402' ] available: ["C-Chain"] description: 0xGasless offers an x402 facilitator for the Avalanche open payment standard and an AgentKit SDK for on-chain agent execution, helping developers build user-friendly blockchain applications logo: /images/0xGasless.png developer: 0xGasless website: https://0xgasless.com/ documentation: https://docs.0xgasless.com/ --- ## 
Overview 0xGasless AgentKit is an SDK for on-chain execution by AI agents, covering transaction signing, smart contract interaction, token swaps, contract deployment, and wallet management, all without gas. It supports the x402 Payment Protocol with the 0xGasless x402 Facilitator service (https://x402.0xgasless.com/) for the Avalanche open payment standard. It also includes a full account abstraction stack on ERC-4337 and ERC-7702, so developers can build dApps with gasless transactions, batching, and session keys. ## What We Offer 0xGasless offers an x402 Facilitator for the on-chain open payment standard and an AI AgentKit for gasless smart accounts and autonomous on-chain AI agents via ERC-4337, with paymasters, bundlers, and LLM-ready tools across EVM chains. ### 0xGasless x402 - Supports the x402 Protocol - x402 Facilitator (https://x402.0xgasless.com/) ### AA SDK - Bundler-as-a-Service - Paymaster-as-a-Service - ERC-4337 Smart Wallets - ERC-7702 ### AgentKit - Open-source [GitHub repo](https://github.com/0xgasless/agentkit) - [NPM package](https://www.npmjs.com/package/@0xgasless/agentkit) - 0xGasless Claude MCP: [NPM Package](https://www.npmjs.com/package/@0xgasless/mcp) & [GitHub Repo](https://github.com/0xgasless/mcp) ## C-Chain (Avalanche) Notes - **Native gas token**: AVAX (sponsored mode requires a funded paymaster). - Many stablecoins (e.g., USDT) use 6 decimals; format amounts accordingly. - Ensure your API key is enabled for chain ID 43114 (Avalanche C-Chain) and that your paymaster gas tank is funded. ## Getting Started 1. **Sign up**: Create an account on the [0xGasless Platform](https://dashboard.0xgasless.com/) to access the dashboard and developer tools. 2. **Read the documentation**: See the [0xGasless docs](https://docs.0xgasless.com/) for installation guides, API references, and integrations. 3. 
**Fund the paymaster**: Follow the [guide](https://docs.0xgasless.com/Getting%20Started/create-a-page) and use the [dashboard](https://dashboard.0xgasless.com/) to create and fund the paymaster. 4. **Integrate the SDK**: Obtain API keys and configure the smart account, bundler, and paymaster using the SDK [docs](https://docs.0xgasless.com/Getting%20Started/create-a-page#step-1-obtain-required-api-keys). 5. **Explore modular options**: The SDK is modular; customize each step of the account abstraction cycle as needed. 6. **Test and deploy**: Use multi-chain support, test thoroughly, and keep the paymaster gas tank funded. ## Documentation - For guides, API references, and integration examples, visit the [0xGasless Documentation](https://docs.0xgasless.com/). - For x402 details, visit the [x402 documentation](https://docs.0xgasless.com/x402/); for the 0xGasless x402 facilitator service, visit [x402.0xgasless.com](https://x402.0xgasless.com/). - Dashboard for keys/paymasters: [0xGasless Dashboard](https://dashboard.0xgasless.com/). ## Use Cases - **Autonomous Agent Services**: Deploy an AI agent (like a data-eval agent or a code reviewer) that charges other agents via x402 to perform a task, such as fine-tuning a model or verifying a pull request. - **Monetizing Your APIs**: Automatically charge other users or agents a small fee (e.g., $0.001) every time they call an API endpoint you've deployed. - **Pay-Per-Query Dashboards**: Build and host a DeFi savings dashboard or an analytics tool that charges users a micro-payment for each new query or data refresh. - **AI Agents**: Streamline payment processes with smart accounts. Account abstraction provides a secure infrastructure for agents interacting with money. - **DeFi Applications**: Improve accessibility and usability with gasless transactions and easy onboarding. - **Enterprise Applications**: Manage complex access controls and transaction rules with scalable, secure integrations. 
- **Crypto Wallets**: Develop wallets using smart accounts for improved security, usability, and gasless transactions. - **Digital Identity Management**: Manage digital identities with customizable transaction and access rules for secure, scalable identity management. - **Blockchain Games**: Onboard users via social logins to manage in-game assets and progress. - **NFT Marketplaces**: Let users interact with platforms using their existing social accounts to improve acquisition and engagement. # 3Sigma (/integrations/3sigma) --- title: 3Sigma category: Security Audits available: ["C-Chain", "All Avalanche L1s"] description: "3Sigma provides security audits with additional expertise in economic modeling for tokenomics and protocol design." logo: /images/3sigma.png developer: 3Sigma website: https://threesigma.xyz/ --- ## Overview 3Sigma is a security audit provider for blockchain projects on Avalanche, with additional expertise in economic modeling. Beyond standard security audits, they analyze tokenomics and protocol economics, covering both technical security and economic design. This is particularly useful for projects where economic incentives and security are closely linked. ## Features - **Smart Contract Audits**: Thorough code reviews and vulnerability assessments. - **Economic Modeling**: Specialized analysis of tokenomics and protocol economics. - **20% Ecosystem Discount**: Referral discount available for Avalanche ecosystem projects. - **Protocol Security**: Full examination of protocol design and implementation. - **Incentive Analysis**: Evaluation of economic incentives and potential game-theoretic vulnerabilities. - **Dual Expertise**: Combined security and economic analysis capabilities. - **YFI Experience**: Notable experience with YFI economic modeling. ## Getting Started 1. **Initial Contact**: Reach out through their website to discuss your project requirements. 2. 
**Mention Ecosystem**: Reference the Avalanche ecosystem for potential referral discounts. 3. **Assessment Process**: - Technical security audit - Economic model analysis (if applicable) - Identification of both security and economic vulnerabilities - Recommendations 4. **Report Delivery**: Receive detailed findings covering both security and economic aspects. 5. **Implementation Support**: Guidance on addressing identified issues. ## Use Cases - **DeFi Protocols**: Combined security and economic analysis for financial applications. - **Novel Tokenomics**: Evaluation of innovative economic models and token designs. - **Yield-Generating Protocols**: Assessment of yield mechanisms and associated risks. - **Governance Systems**: Review of governance incentives and security. - **Complex Economic Mechanisms**: Analysis of intricate economic interactions within protocols. # Publish your integration (/integrations/README) --- title: Publish your integration description: README guide for adding yourself to the integrations page category: "null" --- # Contributing to Avalanche Integrations Welcome! This guide will help you add your integration to the [Avalanche Integrations page](https://build.avax.network/integrations). ## Quick Start 1. Create a new `.mdx` file in this directory 2. Follow the template structure below 3. Add your logo to `/public/images/` 4. Submit a pull request ## File Structure ### Naming Convention Use lowercase with hyphens for your filename: - ✅ `your-integration.mdx` - ✅ `awesome-protocol.mdx` - ❌ `YourIntegration.mdx` - ❌ `your_integration.mdx` ### Frontmatter (Required) Every integration must include frontmatter with these fields: ```yaml --- title: Your Integration Name category: Category Name available: ["C-Chain"] description: "Brief one-sentence description of what your integration does." 
logo: /images/your-logo.png developer: Your Company Name website: https://yourwebsite.com/ documentation: https://docs.yourwebsite.com/ --- ``` #### Field Details | Field | Type | Description | Example | |-------|------|-------------|---------| | `title` | string | Display name of your integration | `"Chainlink"` | | `category` | string | Category (see list below) | `"Oracles"` | | `available` | array | Supported networks | `["C-Chain"]`, `["C-Chain", "All Avalanche L1s"]` | | `description` | string | One-sentence overview | `"Decentralized oracle network providing reliable data feeds."` | | `logo` | string | Path to logo in `/public/images/` | `/images/chainlink.png` | | `developer` | string | Company or developer name | `"Chainlink Labs"` | | `website` | string | Main website URL | `https://chain.link/` | | `documentation` | string | Docs URL | `https://docs.chain.link/` | ### Optional Frontmatter Fields ```yaml featured: true # Set to true to appear in Featured section (requires approval) ``` ## Categories Choose the most appropriate category for your integration: ### Infrastructure & Development - **RPC Providers** - Blockchain node infrastructure and API services - **Indexers** - Blockchain data indexing and querying - **Oracles** - External data feeds and price information - **Developer Tools** - SDKs, frameworks, and development utilities ### DeFi & Trading - **DEX Liquidity** - Decentralized exchanges and liquidity protocols - **Lending Protocols** - Lending, borrowing, and money markets - **DeFi** - Other DeFi protocols and financial primitives ### Identity & Compliance - **KYC / Identity Verification** - KYC/KYB providers and identity solutions - **Account Abstraction** - Smart account and wallet solutions ### Security & Auditing - **Security Audits** - Smart contract auditing services - **Security** - Security tools and monitoring ### Other Categories - **Analytics** - On-chain analytics and dashboards - **NFT** - NFT platforms and tooling - **Wallets** 
- Cryptocurrency wallets - **Bridges** - Cross-chain bridges - **Payments** - Payment processing and fiat on/off ramps *Don't see your category? New categories are automatically created when needed.* ## Content Structure Your integration page should include these sections: ### 1. Overview (Required) Explain what your integration does and why it's valuable for Avalanche developers. ```markdown ## Overview [Your Integration] is a [type of service] that provides [main functionality]. Built on Avalanche's C-Chain, it enables developers to [key benefits]. ``` ### 2. Features (Required) List key features using bullet points: ```markdown ## Features - **Feature Name**: Brief description of the feature - **Another Feature**: What it does and why it matters - **Third Feature**: Benefits for Avalanche developers ``` ### 3. Getting Started (Optional but Recommended) *Note: For simple integrations without code examples, you can skip this section and just provide a Documentation link.* If including Getting Started: - Keep it simple and focused - Show Avalanche-specific configuration - Include working code examples ```markdown ## Getting Started To begin using [Your Integration]: 1. **Sign Up**: Create an account at [your website] 2. **Get API Key**: Obtain your API credentials 3. **Configure**: Set up for Avalanche C-Chain ``` ### 4. Documentation (Required) Link to your full documentation: ```markdown ## Documentation For detailed guides and API references, visit the [Your Integration Documentation](https://docs.yoursite.com/). ``` ### 5. Use Cases (Recommended) Show practical applications: ```markdown ## Use Cases [Your Integration] serves various needs: - **Use Case 1**: Description of how it's used - **Use Case 2**: Another practical application - **Use Case 3**: Additional scenarios ``` ### 6. 
Conclusion (Recommended) Brief closing statement: ```markdown ## Conclusion [Your Integration] provides [summary of value] for blockchain applications on Avalanche, offering [key differentiators]. ``` ## Complete Example ```mdx --- title: Example Protocol category: DeFi available: ["C-Chain", "All Avalanche L1s"] description: "Example Protocol provides decentralized lending and borrowing on Avalanche with competitive rates." logo: /images/example-protocol.png developer: Example Labs website: https://example.protocol/ documentation: https://docs.example.protocol/ --- ## Overview Example Protocol is a decentralized lending platform on Avalanche that enables users to lend and borrow crypto assets with competitive interest rates. Built specifically for Avalanche's high-performance infrastructure, it provides efficient DeFi services with low transaction costs. ## Features - **Lending Markets**: Supply crypto assets to earn interest - **Borrowing**: Borrow against collateral with flexible terms - **Multi-Asset Support**: Support for major cryptocurrencies and stablecoins - **Low Fees**: Benefit from Avalanche's low transaction costs - **High Performance**: Fast transaction confirmation on Avalanche C-Chain ## Documentation For integration guides and API documentation, visit the [Example Protocol Documentation](https://docs.example.protocol/). ## Use Cases Example Protocol serves various DeFi needs: - **Yield Generation**: Earn passive income by lending crypto assets - **Liquidity Access**: Borrow without selling your holdings - **Portfolio Management**: Efficient asset management with lending/borrowing ## Conclusion Example Protocol brings efficient DeFi lending to Avalanche, offering users competitive rates and reliable service powered by Avalanche's fast, low-cost infrastructure. 
``` ## Logo Guidelines ### File Format - **Preferred**: PNG or SVG - **Size**: 200x200px recommended (will be displayed at 40x40px) - **Background**: Transparent or solid color ### File Naming - Use lowercase with hyphens: `your-integration.png` - Match your MDX filename: `example-protocol.mdx` → `example-protocol.png` ### Location Place your logo in `/public/images/` directory. ## Code Examples (When Applicable) If your integration requires code, follow these guidelines: ### Use Avalanche-Specific Values ```typescript // ✅ Good - Shows Avalanche configuration const provider = new Provider({ chainId: 43114, // Avalanche C-Chain rpcUrl: "https://api.avax.network/ext/bc/C/rpc" }); ``` ```typescript // ❌ Bad - Generic example const provider = new Provider({ chainId: 1, // Ethereum }); ``` ### Keep It Simple ```typescript // ✅ Good - Clear and focused import { YourSDK } from 'your-sdk'; const client = new YourSDK({ apiKey: process.env.API_KEY, network: 'avalanche' }); const result = await client.query({ /* ... */ }); ``` ```typescript // ❌ Bad - Too complex for getting started // Multiple files, complex error handling, etc. ``` ## Submission Process ### 1. Prepare Your Files - [ ] Create `.mdx` file in `/content/integrations/` - [ ] Add logo to `/public/images/` - [ ] Test locally with `yarn dev` ### 2. Test Locally ```bash yarn dev ``` Visit `http://localhost:3000/integrations` to preview your integration. ### 3. Submit Pull Request - Fork the [builders-hub repository](https://github.com/ava-labs/builders-hub) - Create a new branch: `git checkout -b add-your-integration` - Commit your changes: `git commit -m "Add Your Integration"` - Push to your fork: `git push origin add-your-integration` - Open a Pull Request ### 4. 
PR Checklist - [ ] MDX file follows the template structure - [ ] All required frontmatter fields are filled - [ ] Logo is added to `/public/images/` - [ ] All links are tested and working - [ ] Content is proofread and formatted - [ ] No code examples include placeholder/dummy code ## Style Guidelines ### Writing Style - **Concise**: Keep descriptions brief and scannable - **Professional**: Maintain a professional, technical tone - **Avalanche-Focused**: Emphasize Avalanche-specific benefits - **Consistent**: Follow the style of existing integrations ### Formatting - Use proper Markdown/MDX syntax - Include line breaks between sections - Use bullet points for lists - Use bold for emphasis: `**Important**` - Use code blocks for technical content ### Common Mistakes to Avoid ❌ **Don't**: - Include sales/marketing language - Use excessive superlatives ("best ever", "revolutionary") - Include contact information in the content (only in frontmatter) - Add instructions or code examples for simple reference integrations - Use placeholder text like "coming soon" ✅ **Do**: - Focus on technical capabilities - Provide accurate, factual information - Include working examples when applicable - Link to official documentation - Keep descriptions professional ## Need Help? 
- **Questions?** Open an issue on [GitHub](https://github.com/ava-labs/builders-hub/issues) - **Technical Issues?** Check existing integrations for reference - **Style Questions?** Review [similar integrations](https://github.com/ava-labs/builders-hub/tree/master/content/integrations) ## Additional Resources - [Builders Hub Repository](https://github.com/ava-labs/builders-hub) - [Avalanche Documentation](https://docs.avax.network/) - [MDX Documentation](https://mdxjs.com/) --- **Thank you for contributing to the Avalanche ecosystem!** 🔺 # Aave (/integrations/aave) --- title: Aave category: DeFi available: ["C-Chain"] description: "Aave is a leading decentralized protocol on Avalanche for lending, borrowing, flash loans, and stable-rate borrowing." logo: /images/aave.png developer: Aave website: https://aave.com/ documentation: https://docs.aave.com/ --- ## Overview Aave on Avalanche is a decentralized, non-custodial lending protocol. Depositors supply liquidity to earn interest, while borrowers take overcollateralized loans or flash loans (uncollateralized, single-transaction loans). Running on C-Chain means faster confirmations and lower fees than Ethereum mainnet. ## Features - **Multiple Asset Markets**: Supports a range of cryptocurrencies and stablecoins. - **Variable Interest Rates**: Rates adjust dynamically with market demand. - **Stable Rate Borrowing**: Fixed-rate loans for predictable costs. - **Flash Loans**: Uncollateralized loans that must be repaid within one transaction. - **Credit Delegation**: Delegate borrowing power to other addresses. - **Risk Management**: Configurable risk parameters and liquidation thresholds. - **Governance**: Protocol changes are voted on by AAVE token holders. - **Safety Module**: Stakers backstop the protocol against shortfall events. ## Getting Started 1. **Access Platform**: Visit [Aave](https://aave.com/) and select the Avalanche network. 2. 
**Connect Wallet**: Connect your Web3 wallet with AVAX for gas fees. 3. **Supply or Borrow**: - Deposit assets to earn interest - Use deposits as collateral for borrowing - Monitor health factor 4. **Manage Positions**: Track and adjust positions as market conditions change. ## Documentation For protocol documentation, visit the [Aave Documentation](https://docs.aave.com/). ## Use Cases - **Yield Generation**: Earn interest by supplying assets to lending pools. - **Leveraged Positions**: Borrow against collateral for trading or yield farming. - **Flash Loans**: Execute arbitrage, liquidations, or collateral swaps in a single transaction. - **Stable Rate Loans**: Lock in a fixed borrow rate. - **Credit Delegation**: Allow trusted parties to borrow against your collateral. # Agora (/integrations/agora) --- title: Agora category: Stablecoins as a Service available: ["C-Chain", "All Avalanche L1s"] description: "Agora provides AUSD, an institutional-grade digital dollar designed for integration across blockchain ecosystems." logo: /images/agora.png developer: Agora website: https://www.agora.finance/ documentation: https://docs.agora.finance/ --- ## Overview Agora is a digital dollar infrastructure provider offering AUSD, an institutional-grade stablecoin designed for enterprise adoption. Agora provides the infrastructure for businesses to integrate stable digital dollars into their operations for cross-border payments, treasury management, and blockchain-based financial services. 
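Since AUSD behaves as a standard ERC-20-style token on the C-Chain, a basic read integration needs nothing more than raw JSON-RPC. The sketch below builds an `eth_call` request for an ERC-20 `balanceOf` query, the kind of read a treasury dashboard might issue. The token and holder addresses are placeholders for illustration, not Agora's actual AUSD deployment.

```typescript
// Illustrative sketch only: building a raw eth_call JSON-RPC request for an
// ERC-20 balanceOf query. Addresses used here are placeholders, not Agora's
// actual AUSD deployment.
const BALANCE_OF_SELECTOR = "0x70a08231"; // first 4 bytes of keccak256("balanceOf(address)")

function buildBalanceOfCall(token: string, holder: string, id = 1) {
  // ABI-encode the single address argument by left-padding it to 32 bytes
  const paddedHolder = holder.toLowerCase().replace(/^0x/, "").padStart(64, "0");
  const call = { to: token, data: BALANCE_OF_SELECTOR + paddedHolder };
  return { jsonrpc: "2.0", id, method: "eth_call", params: [call, "latest"] as [typeof call, string] };
}

// POST this object as JSON to any C-Chain RPC endpoint to read a balance
const req = buildBalanceOfCall(
  "0x0000000000000000000000000000000000000001", // placeholder token address
  "0x00000000000000000000000000000000000000aa"  // placeholder holder address
);
console.log(req.params[0].data);
```

The returned 32-byte value would then be decoded with the token's decimals to get a human-readable balance.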
## Features - **AUSD Stablecoin**: Fully-backed digital dollar with transparent reserve management - **Institutional Grade**: Designed for enterprise compliance and regulatory requirements - **Multi-Chain Support**: Available across multiple blockchain networks including Avalanche - **API Integration**: APIs for enterprise integration - **Compliance Framework**: Built-in AML/KYC support for institutional requirements - **Reserve Transparency**: Regular attestations and transparent reserve reporting ## Getting Started 1. **Contact Agora**: Reach out through the [Agora website](https://www.agora.finance/) for enterprise access 2. **Complete Onboarding**: Work with Agora's team to complete necessary compliance steps 3. **API Integration**: Use Agora's APIs to integrate AUSD into your applications 4. **Deploy**: Launch AUSD-powered features in your Avalanche applications ## Documentation For technical documentation and integration guides, visit the [Agora Documentation](https://docs.agora.finance/). ## Use Cases - **Enterprise Payments**: Enable efficient cross-border payments with AUSD - **Treasury Management**: Manage corporate treasury with stable digital dollars - **B2B Transactions**: Facilitate business-to-business payments on blockchain - **DeFi Integration**: Use AUSD across Avalanche DeFi protocols # Alchemy (/integrations/alchemy) --- title: Alchemy category: RPC Endpoints available: ["C-Chain"] description: Alchemy is a blockchain developer platform that provides a suite of APIs and tools for building and scaling blockchain applications. logo: /images/alchemy.png developer: Alchemy website: https://www.alchemy.com/ documentation: https://docs.alchemy.com/ --- ## Overview Alchemy is a blockchain development platform that provides tools and services for building, managing, and scaling blockchain applications. It offers access to various blockchain networks, including Ethereum, Polygon, and more, through reliable and scalable RPC nodes. 
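As a concrete example of what an RPC integration looks like, the sketch below constructs the endpoint URL and a minimal `eth_blockNumber` request body. The `avax-mainnet` subdomain follows Alchemy's usual URL pattern but should be confirmed in your dashboard, and the API key shown is a placeholder.

```typescript
// Sketch, assuming Alchemy's typical endpoint shape for the Avalanche C-Chain.
// Verify the exact URL in your Alchemy dashboard; "demo" is a placeholder key.
const rpcUrl = (apiKey: string) => `https://avax-mainnet.g.alchemy.com/v2/${apiKey}`;

// A minimal JSON-RPC body asking for the latest block number
const blockNumberRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "eth_blockNumber",
  params: [] as string[],
};

// With a real key, this body would be POSTed as JSON:
// await fetch(rpcUrl("YOUR_API_KEY"), {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(blockNumberRequest),
// });
console.log(rpcUrl("demo"));
```

The same request shape works for any standard EVM JSON-RPC method, so swapping `method` and `params` covers most read paths.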
## Features - **Scalable Infrastructure**: High-performance, scalable nodes supporting various blockchain networks. - **Enhanced APIs**: A suite of APIs that simplify interaction with blockchain networks. - **Developer Tools**: Tools such as Alchemy Build, Monitor, and Notify for debugging, tracking, and alerting on blockchain transactions and events. - **Reliable Uptime**: Distributed and redundant infrastructure for maximum uptime. - **Analytics and Insights**: Analytics tools for blockchain data and application performance monitoring. ## Getting Started 1. **Sign Up**: Create an account on [Alchemy](https://www.alchemy.com/). 2. **Create an App**: After signing in, create a new application in your Alchemy dashboard. Choose the blockchain network you want to connect to. 3. **Get API Key**: Once your application is set up, you’ll be provided with an API key. This key is used to authenticate your requests to the Alchemy RPC node. 4. **Integrate with Your Project**: Use the API key to connect your blockchain application to the Alchemy network. You can integrate using the provided SDKs or directly via HTTP requests. ## Documentation For more details, visit the [Alchemy Documentation](https://docs.alchemy.com/). ## Use Cases - **Decentralized Applications (DApps)**: Power your DApps with reliable and scalable blockchain infrastructure. - **Analytics and Monitoring**: Use Alchemy’s tools to monitor blockchain events and transactions in real-time. - **Smart Contracts**: Deploy and interact with smart contracts on multiple blockchain networks. # Allbridge Core (/integrations/allbridge-core) --- title: Allbridge Core category: ["Crosschain Solutions"] available: ["C-Chain"] description: Allbridge Core enables native stablecoin transfers across blockchains without wrapping — powering frictionless DeFi and multi-chain liquidity. 
logo: /images/allbridge-core.svg developer: Allbridge website: https://core.allbridge.io/ documentation: https://docs-core.allbridge.io/ --- ## Overview [Allbridge Core](https://core.allbridge.io) is a **cross-chain liquidity protocol** designed for **native stablecoin transfers** between major ecosystems such as Avalanche, Ethereum, Solana, and Tron. Unlike traditional bridges that rely on wrapped or synthetic assets, Allbridge Core maintains liquidity pools of native stablecoins on each chain. When users transfer funds, the value is transmitted via a messaging protocol and redeemed in the target network, **securely and with native tokens**. Key features include: - **Native Transfers**: Move native stablecoins (USDT, USDC, USDe, etc.) across chains without wrapping. - **Messaging-Agnostic Design**: Integrate using Allbridge Messaging, Circle’s CCTP, and LayerZero OFT. - **Flexible Fee Options**: Pay bridge fees in stablecoins or gas tokens for a smoother UX. - **Optional Gas Top-Up**: Users can receive AVAX on the destination chain to cover initial transactions. - **Developer Tools**: Full SDK and REST API for quick integration into dApps, wallets, and exchanges. ## Getting Started The Allbridge Core SDK enables you to onboard cross-chain functionality into your protocol. Follow these instructions to get started using the SDK. 1. **Installation**: * **Using npm** ```bash $ npm install @allbridge/bridge-core-sdk ``` * **Using yarn** ```bash $ yarn add @allbridge/bridge-core-sdk ``` * **Using pnpm** ```bash $ pnpm add @allbridge/bridge-core-sdk ``` 2.
**Initialize the SDK instance with your node RPC URLs**: ```tsx import { AllbridgeCoreSdk, nodeRpcUrlsDefault } from "@allbridge/bridge-core-sdk"; // Connections to blockchains will be made through your rpc-urls passed during initialization const sdk = new AllbridgeCoreSdk({ ...nodeRpcUrlsDefault, TRX: "your trx-rpc-url", ETH: "your eth-rpc-url" }); ``` ## Documentation For more details, visit the [Allbridge Core Documentation](https://docs-core.allbridge.io/sdk/get-started) ## Use Cases 1. **Stablecoin Swaps**: Transfer your stables between EVM and non-EVM compatible chains. 2. **User Onboarding**: The optional gas top-up reduces barriers to entry into the Avalanche ecosystem. 3. **dApp and Wallet Integrations**: Integrate cross-chain swaps directly into wallets, DEXs, or other dApps via our SDK/REST API. 4. **Flexible Transfers**: Choose between multiple supported messaging protocols to match your bridging needs. # Allnodes (/integrations/allnodes) --- title: Allnodes category: Validator Infrastructure available: ["C-Chain", "All Avalanche L1s"] description: "Allnodes is a non-custodial platform for hosting nodes and staking, providing reliable validator infrastructure for Avalanche networks." logo: /images/allnodes.png developer: Allnodes website: https://www.allnodes.com/ documentation: https://help.allnodes.com/ --- ## Overview Allnodes is a non-custodial platform for hosting blockchain nodes and staking. It supports over 80 blockchain protocols including Avalanche, providing infrastructure for running validators, full nodes, and staking operations. Both individual users and institutions can use it to participate in Avalanche network validation. 
## Features - **Non-Custodial Hosting**: Retain full control of your assets while Allnodes manages the infrastructure - **99.99% Uptime SLA**: Enterprise-grade reliability for validator operations - **24/7 Monitoring**: Continuous monitoring and automatic issue resolution - **Multi-Protocol Support**: Support for Avalanche and 80+ other blockchain networks - **Easy Setup**: User-friendly interface for deploying and managing nodes - **Staking Rewards Dashboard**: Real-time tracking of staking rewards and performance ## Getting Started 1. **Create Account**: Sign up at [Allnodes](https://www.allnodes.com/) 2. **Select Avalanche**: Choose Avalanche from the supported protocols 3. **Configure Node**: Select node type and configure your validator 4. **Deploy**: Launch your node with Allnodes handling the infrastructure 5. **Monitor**: Track performance through the Allnodes dashboard ## Documentation For setup guides and FAQs, visit the [Allnodes Help Center](https://help.allnodes.com/). ## Use Cases - **Validator Operations**: Run Avalanche validators with professional infrastructure - **Staking Services**: Provide staking services to users with reliable nodes - **L1 Validation**: Validate Avalanche L1 networks - **Full Node Access**: Run full nodes for application infrastructure # Anchorage Digital (/integrations/anchorage) --- title: Anchorage Digital category: Stablecoins as a Service available: ["C-Chain", "All Avalanche L1s"] description: "Anchorage Digital is a federally chartered digital asset bank providing institutional-grade custody and stablecoin infrastructure services." logo: /images/anchorage.png developer: Anchorage Digital website: https://www.anchorage.com/ documentation: https://www.anchorage.com/who-we-serve --- ## Overview Anchorage Digital is the first federally chartered digital asset bank in the United States, providing institutional-grade infrastructure for digital assets. 
Through its regulated platform, Anchorage offers custody, staking, trading, and stablecoin services that enable enterprises and institutions to securely interact with blockchain technology while maintaining full regulatory compliance. ## Features - **Federal Bank Charter**: The first federally chartered digital asset bank in the US, providing strong regulatory clarity and compliance - **Institutional Custody**: Enterprise-grade custody solutions with insurance coverage and multi-signature security - **Stablecoin Infrastructure**: Full-service stablecoin minting, burning, and reserve management capabilities - **Regulatory Compliance**: SOC 2 Type 2 certified with AML/KYC frameworks - **24/7 Operations**: Round-the-clock operations with dedicated institutional support - **Integration APIs**: API infrastructure for integration with existing systems ## Getting Started To access Anchorage Digital's stablecoin services: 1. **Contact Anchorage**: Reach out through the [Anchorage website](https://www.anchorage.com/) to discuss your institutional needs 2. **Complete Onboarding**: Work with Anchorage's compliance team to complete the institutional onboarding process 3. **Access Services**: Once approved, access stablecoin infrastructure through Anchorage's secure platform ## Documentation For detailed information about Anchorage Digital's platform and services, visit the [Anchorage Digital services page](https://www.anchorage.com/who-we-serve).
## Use Cases - **Stablecoin Issuance**: Issue and manage stablecoins with full regulatory compliance and reserve management - **Institutional Treasury**: Manage digital asset treasury operations with bank-grade security - **Enterprise Custody**: Secure storage and management of digital assets for institutions - **Regulated Trading**: Access compliant trading services for digital assets # Angle (/integrations/angle) --- title: Angle category: Assets available: ["C-Chain"] description: "Angle Protocol provides decentralized stablecoins including EURA, offering over-collateralized and capital-efficient stable assets on blockchain." logo: /images/angle.png developer: Angle Protocol website: https://www.angle.money/ documentation: https://docs.angle.money/ --- ## Overview > **Warning: Winding Down:** Angle Protocol announced the orderly wind-down of EURA and USDA stablecoins (AIP-112). Users must redeem by March 1, 2027. The team has pivoted to Merkl, a DeFi incentive platform. Angle Protocol is a decentralized stablecoin protocol that issued EURA (euro) and USDA (dollar) stable assets. It used direct deposit modules, algorithmic market operations, and hedging strategies to maintain pegs. The team now focuses on [Merkl](https://merkl.xyz/), a DeFi incentive distribution platform. # ANKR (/integrations/ankr) --- title: ANKR category: Validator Infrastructure available: ["C-Chain", "All Avalanche L1s"] description: "ANKR provides decentralized Web3 infrastructure including node hosting, staking, and RPC services for Avalanche networks." logo: /images/ankr.png developer: ANKR website: https://www.ankr.com/ documentation: https://www.ankr.com/docs/ --- ## Overview ANKR is a decentralized Web3 infrastructure platform for blockchain development and validation. With a globally distributed network of nodes, ANKR offers validator hosting, liquid staking, and RPC services for Avalanche and many other blockchain networks. 
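To illustrate the liquid staking idea, the sketch below models a reward-bearing exchange rate between AVAX and a liquid token such as ankrAVAX. The mechanics here are a generic exchange-rate model for illustration only, not ANKR's exact contract logic or current ratio.

```typescript
// Illustrative only: a generic reward-bearing liquid-staking model, where a
// "ratio" converts between the staked asset and the liquid token. This is an
// assumed model for explanation, not ANKR's actual implementation.
function toLiquidShares(avaxAmount: number, ratio: number): number {
  // ratio = liquid tokens minted per 1 AVAX; drifts downward as rewards accrue
  return avaxAmount * ratio;
}

function toUnderlying(shares: number, ratio: number): number {
  // A fixed share count redeems for more AVAX once the ratio has fallen
  return shares / ratio;
}

// Stake 100 AVAX at a ratio of 0.95 to receive 95 liquid tokens; redeeming
// later at a ratio of 0.90 returns roughly 105.6 AVAX, the difference being
// accumulated staking rewards.
const shares = toLiquidShares(100, 0.95);
console.log(shares, toUnderlying(shares, 0.9));
```

The key property is that the holder's share count never changes; rewards show up entirely through the moving exchange rate.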
## Features - **Distributed Infrastructure**: Global network of nodes for high availability - **Validator Hosting**: Professional infrastructure for running Avalanche validators - **Liquid Staking**: Stake AVAX while maintaining liquidity through ankrAVAX - **RPC Services**: High-performance RPC endpoints for Avalanche - **AppChains**: Support for custom Avalanche L1 deployments - **Developer Tools**: APIs and SDKs for Web3 development ## Getting Started 1. **Visit ANKR**: Go to [ANKR](https://www.ankr.com/) and explore available services 2. **Choose Service**: Select validator hosting, staking, or RPC services 3. **Configure**: Set up your infrastructure through ANKR's platform 4. **Deploy**: Launch your Avalanche infrastructure 5. **Monitor**: Use ANKR's dashboard for performance monitoring ## Documentation For more details, visit [ANKR Docs](https://www.ankr.com/docs/). ## Use Cases - **Validator Operations**: Run Avalanche validators on ANKR's distributed infrastructure - **Liquid Staking**: Stake AVAX and receive liquid staking tokens - **Application Infrastructure**: Use ANKR RPC for dApp backend services - **L1 Infrastructure**: Deploy infrastructure for Avalanche L1 networks # API3 (/integrations/api3) --- title: API3 category: Oracles available: ["C-Chain"] description: API3 is a first-party oracle solution that enables dApps to access APIs in a decentralized and trust-minimized way. logo: /images/api3.png developer: API3 Alliance website: https://api3.org/ documentation: https://docs.api3.org/ --- ## Overview API3 is an oracle solution that connects dApps with real-world data. Unlike third-party oracles, API3 uses first-party oracles where data providers operate their own nodes, which improves data authenticity and minimizes trust assumptions. ## Features - **First-Party Oracles**: Data providers run their own oracles, removing intermediaries and reducing trust concerns. 
- **Decentralized Governance**: The API3 DAO governs the network with a community-driven approach. - **dAPIs (Decentralized APIs)**: Data feeds aggregated and delivered by first-party oracles for accessing off-chain data. - **Airnode**: API3’s Airnode lets any API provider deploy a first-party oracle without requiring blockchain-specific expertise. - **Data Transparency**: Full transparency in how data is sourced and delivered to smart contracts. ## Getting Started 1. **Explore API3**: Visit the [API3 website](https://api3.org/) to learn more and get involved with the community. 2. **Explore Airnode**: Check out the [Documentation](https://docs.api3.org/) to understand how to deploy your own first-party oracle. 3. **Integrate dAPIs**: Utilize API3’s decentralized APIs by following the integration guides available in the [API3 documentation](https://docs.api3.org/). ## Documentation For more details, visit the [API3 Documentation](https://docs.api3.org/). ## Use Cases - **DeFi Platforms**: Integrate off-chain data, such as price feeds, into DeFi applications. - **Insurance dApps**: Use real-world data for automated insurance claims and payouts via smart contracts. - **Supply Chain Management**: Track and verify supply chain data using first-party oracles. # Apollo (/integrations/apollo) --- title: Apollo category: Assets available: ["C-Chain"] description: "Apollo Global Management offers tokenized investment products including ACRED, bringing alternative asset management to blockchain infrastructure." logo: /images/apollo.webp developer: Apollo Global Management website: https://www.apollo.com/ documentation: https://www.apollo.com/ --- ## Overview Apollo Global Management is an alternative investment manager with over $650 billion in assets under management.
Through tokenized offerings like ACRED, Apollo brings its credit, private equity, and real asset strategies to blockchain, giving institutional and qualified investors access to alternative investments with the efficiency and transparency of on-chain settlement. # Artemis (/integrations/artemis) --- title: Artemis category: Analytics & Data available: ["C-Chain"] description: "Artemis provides institutional-grade blockchain analytics, protocol metrics, and cross-chain data for the Avalanche ecosystem." logo: /images/artemis.webp developer: Artemis website: https://www.artemis.xyz/ documentation: https://docs.artemis.xyz/ --- ## Overview Artemis is an institutional-grade analytics platform that provides blockchain data, protocol metrics, and cross-chain insights for the Avalanche ecosystem. It offers standardized metrics, real-time data, and analytics tools for investors, researchers, and developers. # Ash (/integrations/ash) --- title: Ash category: Blockchain as a Service available: ["All Avalanche L1s"] description: Ash is the one-stop shop for L1 development and operation on Avalanche, 100% cloud-agnostic. logo: /images/ash.png developer: E36 Knots website: https://ash.center/ documentation: https://ash.center/docs/toolkit/ --- ## Overview Ash is a suite of open-source tools and services for Avalanche L1 development and operations. Ash is 100% cloud-agnostic and can be used to provision Avalanche validator nodes, L1s, block explorers and more! ## Features - **Ash Console**: A dashboard for **Appchain development and operation** on Avalanche. Manage validators, create L1s, and monitor your network **on your own infrastructure** (AWS, GCP and Azure are supported, with more cloud providers to come). - **Ansible Avalanche Collection**: The [Ansible Avalanche Collection](https://github.com/AshAvalanche/ansible-avalanche-collection) provides [Ansible](https://www.ansible.com/) roles, playbooks and modules to manage Avalanche nodes, L1s and more on any infrastructure. 
- **Ash CLI**: The [Ash CLI](https://ash.center/docs/toolkit/ash-cli/introduction) aims to boost Avalanche developers' productivity by providing a set of commands to interact with Avalanche and Ash services. - **Ash Wallet**: A free-to-use shared infrastructure bringing all the features of [Safe](/integrations/gnosis-safe) (prev. Gnosis Safe) to the Avalanche L1s ecosystem. Read the [official announcement](https://ashavax.hashnode.dev/announcing-ash-wallet-for-avalanche-l1s) to learn more. ## Getting Started There are many ways to get started with Ash depending on your use case: - **One-command Devnet**: Learn how to set up an [Avalanche devnet in a single command](https://ash.center/docs/console/guides/blueprint/) with the Ash Console. Perfect for infrastructure beginners! - **HyperChain on AWS**: Are you a HyperSDK developer? Check out the [HyperSDK devnet on AWS](https://ash.center/docs/toolkit/ansible-avalanche-collection/tutorials/hypersdk-devnet-aws) tutorial to learn how to deploy your HyperChain at scale with Terraform and Ansible. - **Avalanche L1s Exploration**: The Ash CLI is the perfect tool to explore Avalanche networks from the command line. Check out our [examples](https://ash.center/docs/toolkit/ash-cli/tutorials/network-exploration) of what you can do with it. ## Documentation For more details, visit the [Ash Console](https://ash.center/docs/console/) and [Ash toolkit](https://ash.center/docs/toolkit/) documentation. ## Use Cases - **Avalanche Validator Nodes**: Deploy and manage Avalanche validator nodes on your preferred cloud infrastructure. - **L1 Development**: Create and operate custom Avalanche L1s for specialized blockchain applications or private networks. - **Add Safe-like features to your L1**: Bring the features of Safe (account abstraction, multisig wallets, etc.) to your Subnet-EVM-based L1.
# AvaCloud (/integrations/avacloud) --- title: AvaCloud category: Blockchain as a Service available: ["All Avalanche L1s"] description: AvaCloud is the leading managed blockchain service that empowers organizations to effortlessly build, deploy, and scale high-performance decentralized L1 networks. logo: /images/avacloud.png developer: Ava Labs website: https://avacloud.io/ documentation: https://docs.avacloud.io/ featured: true --- ## Overview AvaCloud, developed by Ava Labs, is a managed service for deploying and operating Avalanche L1 networks. It provides a no-code portal for launching chains, along with built-in features like interoperability, gas relaying, Safe multisig, VRF, and wallet-as-a-service. The goal is to let teams run their own L1 without managing validator infrastructure or chain configuration manually. ## Features - **No-Code L1 Deployment**: Launch and configure Avalanche L1s through the AvaCloud Portal. - **Built-In Modules**: Enable interoperability, gas relayer, Safe, VRF, node sale, wallet-as-a-service, and privacy features with a few clicks. - **Developer Tools**: SDKs and APIs for smart contract deployment and chain interaction. - **Automated Infrastructure**: Managed validators, nodes, and chain upgrades. - **Scalability**: Cloud resources scale with your chain’s usage. - **Support**: Documentation and enterprise support from Ava Labs. ## Getting Started 1. **Explore**: Visit the [AvaCloud website](https://avacloud.io/) to see available services. 2. **Read the Docs**: Check the [AvaCloud Documentation](https://docs.avacloud.io/) for setup guides and API references. 3. **Create an Account**: Sign up for AvaCloud to access the portal. 4. **Deploy Your L1**: Use the portal to configure and launch your Avalanche L1 network. 5. **Monitor**: Track chain performance and health through the dashboard. ## Documentation For guides, API references, and support, visit the [AvaCloud Documentation](https://docs.avacloud.io/). 
## Use Cases - **Custom L1 Networks**: Deploy application-specific Avalanche L1s without managing infrastructure. - **dApps**: Build and scale decentralized applications on your own chain. - **Enterprise Chains**: Run permissioned or public chains for enterprise use cases. - **Prototyping**: Quickly spin up test chains to validate ideas. # Avalanche Explorer (/integrations/avalanche-explorer) --- title: Avalanche Explorer category: Explorers available: ["C-Chain", "All Avalanche L1s"] description: Avalanche L1 Explorer is an analytics tool for exploring Avalanche L1 transactions, addresses, statistics, and other platform activities. logo: /images/avax.png developer: Ava Labs website: https://subnets.avax.network/ documentation: https://build.avax.network/docs featured: true --- ## Overview Avalanche Explorer is an analytics tool developed by Ava Labs for exploring transactions, addresses, statistics, and other activities on Avalanche L1s. It provides real-time data across various Avalanche L1s for developers, validators, and other ecosystem participants. ## Features - **Avalanche L1-Specific Data**: Explore real-time data for individual Avalanche L1s, including transactions, addresses, and other key metrics. - **Analytics**: Statistics on network performance and usage across Avalanche L1s. - **User-Friendly Interface**: Navigate through different Avalanche L1s and explore blockchain data with an intuitive interface. - **Validator Monitoring**: Track validator activities, including performance metrics and staking information, across Avalanche L1s. - **API Access**: Utilize the Avalanche Explorer API to integrate Avalanche L1 data into your applications or services. - **Historical Data**: Access historical data to analyze trends and network activity over time. ## Getting Started 1. **Visit Avalanche Explorer**: Head to the [Avalanche L1 Explorer website](https://subnets.avax.network/) to start exploring the data. 2. 
**Explore Avalanche L1 Data**: Use the explorer to view real-time transactions, address activity, and network statistics for individual Avalanche L1s. 3. **Monitor Validators**: Access detailed validator information to track performance and staking activities within Avalanche L1s. 4. **Access Documentation**: Visit the [Avalanche Documentation](https://build.avax.network/docs) for technical details. 5. **Integrate with API**: Developers can use the API to pull data into their own platforms or applications for further analysis or display. ## Documentation For more information on using Avalanche Explorer and integrating its data into your projects, check out the [Avalanche Documentation](https://build.avax.network/docs). ## Use Cases - **Developers**: Monitor Avalanche L1-specific blockchain activity for development, debugging, and smart contract interaction. - **Validators**: Track and optimize validator performance across Avalanche L1s with detailed, real-time data. - **Blockchain Enthusiasts**: Explore the Avalanche network and Avalanche L1s, tracking transactions, addresses, and network activity. - **Analysts and Researchers**: Access and analyze Avalanche L1 data for research and analytics purposes. # Avalanche Stables (/integrations/avalanchestables) --- title: Avalanche Stables category: Analytics & Data available: ["C-Chain", "All Avalanche L1s"] description: "Avalanche Stables provides analytics and tracking for stablecoin activity across the Avalanche network." logo: /images/avalanchestables.svg developer: Avalanche Stables website: https://avalanchestables.com/ documentation: https://avalanchestables.com/ --- ## Overview Avalanche Stables is an analytics platform focused on tracking stablecoin activity across the Avalanche ecosystem. It provides real-time data, historical trends, and insights into stablecoin circulation, transfers, and usage patterns on Avalanche networks. 
## Features - **Stablecoin Analytics**: - Real-time stablecoin metrics - Circulation tracking - Transfer volume monitoring - Market share analysis - **Data Visualization**: - Interactive charts - Historical trend analysis - Comparative statistics - Market dynamics - **Network Coverage**: - C-Chain tracking - L1 network monitoring - Cross-chain movements - Protocol integration - **Market Insights**: - Supply changes - Liquidity metrics - Usage patterns - Protocol adoption ## Getting Started To use Avalanche Stables: 1. **Access Platform**: - Visit [Avalanche Stables](https://avalanchestables.com/) - Explore stablecoin dashboards - View real-time metrics 2. **Analyze Data**: - Track stablecoin circulation - Monitor transfer volumes - Compare different stablecoins 3. **Monitor Trends**: - View historical data - Identify market patterns - Track protocol adoption ## Documentation For more information, visit the [Avalanche Stables website](https://avalanchestables.com/). ## Use Cases - **Market Research**: Analyze stablecoin trends and adoption - **DeFi Analytics**: Track stablecoin usage in DeFi protocols - **Liquidity Monitoring**: Monitor stablecoin liquidity across networks - **Investment Decisions**: Make informed decisions based on stablecoin data - **Protocol Development**: Understand stablecoin integration opportunities # Avant (/integrations/avant) --- title: Avant category: DeFi available: ["C-Chain"] description: "Avant is a DeFi protocol on Avalanche offering stable-value tokens (avUSD) and yield through market-neutral strategies." logo: /images/avant.png developer: Avant Protocol website: https://avantprotocol.com documentation: https://docs.avantprotocol.com/ --- ## Overview Avant Protocol is a DeFi protocol on Avalanche's C-Chain focused on sustainable yield through market-neutral strategies. Users mint avUSD by depositing stablecoins (USDC, USDT) and can stake it as savUSD to earn yield generated by delta-neutral trading strategies. 
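To make the yield mechanics concrete, the sketch below infers an annualized rate from a rising share price, the way a yield-bearing wrapper in the style of savUSD would be read under an ERC-4626-like pricing model. The model and numbers are illustrative assumptions, not Avant's documented implementation.

```typescript
// Hedged sketch: annualizing the observed growth of a yield-bearing token's
// share price. An ERC-4626-style "price per share" model is assumed here for
// illustration; it is not Avant's documented mechanism.
function annualizedYield(priceStart: number, priceEnd: number, days: number): number {
  // Compound the observed growth rate out to a 365-day horizon
  return Math.pow(priceEnd / priceStart, 365 / days) - 1;
}

// A share price moving from 1.000 to 1.010 over 30 days annualizes to
// roughly 13% when compounded.
console.log(annualizedYield(1.0, 1.01, 30));
```

Because the stake never changes share count, yield accrues purely through this drifting share price, which is what makes such tokens composable as collateral elsewhere in DeFi.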
## Features - **avUSD Stablecoin**: Mint a stable-value token by depositing USDC or USDT. - **savUSD Yield**: Stake avUSD to receive savUSD, a yield-generating version of the token. - **Market-Neutral Strategies**: Yield comes from delta-neutral trading, managed by on-chain asset managers. - **Fast Finality**: Built on Avalanche's C-Chain for quick transaction processing. - **Low Fees**: Cost-efficient operations on C-Chain. ## Getting Started 1. **Access Platform**: Visit [Avant Protocol](https://avantprotocol.com). 2. **Connect Wallet**: Link your Web3 wallet and ensure you have AVAX for gas fees. 3. **Mint avUSD**: Deposit USDC or USDT to mint avUSD. 4. **Stake for Yield**: Stake avUSD to receive savUSD and earn yield. ## Documentation For detailed information and guides, visit the [Avant Protocol Documentation](https://docs.avantprotocol.com/). ## Use Cases - **Yield Generation**: Earn yield on stablecoins through market-neutral strategies. - **Stable Value Token**: Hold avUSD as a DeFi-native stable asset. - **DeFi Composability**: Use avUSD and savUSD across Avalanche DeFi protocols. - **Treasury Management**: Put idle stablecoins to work with delta-neutral yield. # Avascan (/integrations/avascan) --- title: Avascan category: Explorers available: ["C-Chain", "All Avalanche L1s"] description: Avascan is a block explorer for the Avalanche network, providing real-time data on transactions, blocks, validators, and more. logo: /images/avascan.jpg developer: Routescan website: https://avascan.info/ documentation: https://docs.avascan.info/ --- ## Overview Avascan is a block explorer for the Avalanche network, providing a full view of blockchain activity. It shows real-time data on transactions, blocks, validators, and other key metrics for developers, validators, and anyone monitoring the Avalanche blockchain. ## Features - **Real-Time Data**: Access up-to-the-minute information on transactions, block details, and validator activities. 
- **User-Friendly Interface**: Navigate easily through the Avalanche network's data with an intuitive and well-designed interface. - **Validator Insights**: Explore detailed information on validators, including performance metrics and staking details. - **Multi-Chain Support**: Supports multiple chains within the Avalanche network, providing a unified view across different chains. - **Advanced Search**: Search to quickly find transactions, addresses, and block information. - **API Access**: Offers API access for developers to integrate Avascan's data into their applications and services. ## Getting Started 1. **Visit Avascan**: Go to the [Avascan website](https://avascan.info/) to explore its features and navigate the Avalanche network. 2. **Use the Search Functionality**: Use the search bar to find specific transactions, blocks, or validators. 3. **Explore Validator Data**: Dive into detailed validator statistics, including staking information and performance metrics. 4. **Access Documentation**: Refer to the [Avascan Documentation](https://docs.avascan.info/) for guidance on using the explorer, API integration, and advanced features. 5. **Integrate with API**: Developers can use the API to fetch and display Avascan data within their own applications. ## Documentation For more details on using Avascan, including how to access its API and utilize its features, visit the [Avascan Documentation](https://docs.avascan.info/). ## Use Cases - **Blockchain Developers**: Monitor and analyze blockchain activity for development and debugging. - **Validators**: Track and optimize validator performance with real-time data and historical insights. - **Researchers and Analysts**: Analyze Avalanche blockchain data in depth. - **General Users**: Explore and verify transactions, blocks, and network status in a user-friendly environment. 
# Amazon AWS (/integrations/aws-avalanche) --- title: Amazon AWS category: Validator Infrastructure available: ["C-Chain", "All Avalanche L1s"] description: "Amazon Web Services provides cloud infrastructure for running Avalanche nodes, validators, and blockchain applications." logo: /images/aws.png developer: Amazon Web Services website: https://aws.amazon.com/ documentation: https://docs.aws.amazon.com/managed-blockchain/ --- ## Overview Amazon Web Services (AWS) provides the cloud infrastructure to run Avalanche nodes, validators, and blockchain applications at scale. With AWS's global data center network, compute services, and enterprise security, organizations can deploy reliable Avalanche infrastructure with flexibility and scale. ## Features - **Global Infrastructure**: Deploy across AWS's worldwide network of regions - **Scalable Compute**: EC2 instances optimized for blockchain workloads - **High Availability**: Multi-AZ deployments for maximum uptime - **Enterprise Security**: Security controls and compliance certifications - **Managed Blockchain**: AWS Managed Blockchain service for simplified deployment - **Cost Optimization**: Flexible pricing with reserved and spot instances ## Getting Started 1. **AWS Account**: Create or access your [AWS account](https://aws.amazon.com/) 2. **Select Region**: Choose appropriate AWS region for your deployment 3. **Configure Infrastructure**: Set up EC2 instances, storage, and networking 4. **Deploy Node**: Install and configure Avalanche node software 5. **Monitor**: Use AWS CloudWatch for monitoring and alerting ## Documentation For AWS blockchain documentation, visit [AWS Managed Blockchain Docs](https://docs.aws.amazon.com/managed-blockchain/). 
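Step 4 above ("Deploy Node") typically involves writing a node configuration file on the instance. A minimal sketch, assuming AvalancheGo's default config location and a few common flag-style keys; verify both the path and the key names against the current AvalancheGo documentation before use:

```python
import json
from pathlib import Path

# Assumed default config location; AvalancheGo also accepts an explicit
# --config-file flag if you prefer a different path.
config_path = Path.home() / ".avalanchego" / "configs" / "node.json"

# Keys mirror AvalancheGo CLI flags (check the current flag list before use).
node_config = {
    "network-id": "mainnet",
    "http-host": "127.0.0.1",  # keep the API bound locally; front with a proxy if needed
    "http-port": 9650,
    "log-level": "info",
}

config_path.parent.mkdir(parents=True, exist_ok=True)
config_path.write_text(json.dumps(node_config, indent=2))
print(f"wrote {config_path}")
```

Binding the HTTP API to localhost and exposing it only through an AWS security group or load balancer keeps the node's admin surface off the public internet.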
## Use Cases - **Validator Hosting**: Run Avalanche validators on AWS infrastructure - **Full Node Operations**: Deploy full nodes for application backends - **L1 Infrastructure**: Host infrastructure for Avalanche L1 networks - **Development Environments**: Create dev/test environments for Avalanche development # Axelar (/integrations/axelar) --- title: Axelar category: Crosschain Solutions available: ["C-Chain"] description: Axelar is a decentralized interoperability network securely connecting 80+ blockchains, enabling seamless multichain token transfers, smart contract calls, and application deployment through a single programmable interface. logo: /images/axelar.jpeg developer: Axelar Foundation website: https://axelar.network/ documentation: https://docs.axelar.dev/ --- ## Overview Axelar is a proof-of-stake blockchain that connects 80+ chains, enabling cross-chain token transfers and smart contract calls through a single API. Developers can build multichain applications without managing separate bridge integrations for each destination chain. Axelar is built on the Cosmos SDK with Tendermint consensus and secured by a dynamic validator set. It has processed over $2 billion in cross-chain volume and is used by major DeFi protocols, NFT platforms, and dApps. # Axiym (/integrations/axiym) --- title: Axiym category: Payments available: ["C-Chain"] description: Axiym is a Liquidity-as-a-Service (LaaS) solution funding global cross-border payments by assisting with pre-funding, enhancing speed and reducing capital intensity through stablecoin settlement integration. logo: /images/axiym.jpeg developer: Axiym website: https://axiym.io/ documentation: https://axiym.io/ --- ## Overview Axiym is a Liquidity-as-a-Service (LaaS) platform for global cross-border payments, solving the pre-funding problem. It provides liquidity solutions that improve payment speed while reducing the capital requirements that make international payments slow and expensive. 
The platform integrates with global payment ecosystems, offering stablecoin settlement to Money Service Businesses (MSBs) and payment networks. Through direct integrations with major payment networks including Terrapay, Lulu Financial, and others, Axiym uses blockchain-based liquidity to address the $150+ trillion cross-border payments industry by eliminating pre-funding requirements, enabling instant settlements, and reducing the working capital needed to operate payment corridors globally. ## Features - **Liquidity-as-a-Service (LaaS)**: Provide instant liquidity for cross-border payment corridors without capital deployment. - **Pre-Funding Elimination**: Remove the need for expensive pre-funded accounts in multiple countries. - **Stablecoin Settlement**: Enable instant settlement using stablecoins as the settlement layer. - **Payment Network Integration**: Direct connections to major MSBs and payment networks globally. - **Capital Efficiency**: Reduce working capital requirements by 70-90% for payment operators. - **Instant Settlement**: Move from T+2/T+3 settlement to real-time settlement. - **Multi-Currency Support**: Support for numerous fiat currency corridors. - **Regulatory Compliance**: Compliant liquidity solutions meeting global regulatory standards. - **API Integration**: Simple APIs for payment companies to access liquidity. - **Treasury Optimization**: Help payment firms optimize treasury operations. - **Risk Management**: Built-in tools for managing FX and liquidity risk. - **Global Reach**: Support for major payment corridors worldwide. - **Avalanche Integration**: Leverage Avalanche's infrastructure for efficient settlement. ## Getting Started 1. **Partnership Discussion**: Contact Axiym to discuss your cross-border payment volumes and corridors. 2. 
**Needs Assessment**: Evaluate your liquidity requirements: - Identify payment corridors and currencies - Assess current pre-funding costs and capital tied up - Determine settlement speed requirements - Review regulatory requirements by corridor 3. **Integration Planning**: Design your Axiym integration: - Choose API or direct integration approach - Determine which corridors to enable with Axiym liquidity - Plan migration from pre-funded to on-demand liquidity model - Set up treasury and reconciliation processes 4. **Technical Implementation**: Integrate Axiym's platform: - Connect to Axiym's liquidity APIs - Integrate with your payment routing system - Implement settlement workflows - Test in pilot corridors 5. **Launch and Scale**: Go live with Axiym liquidity: - Start with pilot corridors - Monitor performance and cost savings - Scale to additional corridors - Optimize treasury management ## Liquidity-as-a-Service Model Axiym's LaaS model transforms cross-border payments: **Traditional Model Problems**: - Payment companies must pre-fund accounts in every destination country - Billions in working capital tied up unproductively - Slow settlement times (T+2 or T+3) - Currency risk from holding multiple currencies - Complex treasury management across jurisdictions **Axiym Solution**: - On-demand liquidity provided when payments need settlement - Minimal working capital required - Instant settlement using stablecoins - Reduced currency risk - Simplified treasury operations - Lower total cost of payments This shift from capital-intensive to capital-efficient payments is significant for the industry. ## How It Works Axiym's platform operates through a streamlined process: 1. **Payment Initiation**: Customer initiates cross-border payment through MSB or payment network 2. **Liquidity Request**: Payment company requests liquidity from Axiym for destination currency 3. **Instant Provision**: Axiym provides stablecoin or local currency liquidity instantly 4. 
**Settlement**: Payment settles with final recipient in destination currency 5. **Reconciliation**: Automatic reconciliation and reporting for payment company 6. **Capital Return**: Payment company's capital is freed for other uses This process happens in real-time, eliminating the delays and capital requirements of traditional pre-funding. ## Payment Network Partners Axiym works with major global payment networks: **Terrapay**: Global digital payments network connecting mobile wallets, banks, and cash networks. **Lulu Financial**: Leading remittance and financial services provider in GCC and Asia. **Additional MSBs**: Integration with numerous other Money Service Businesses and payment networks. These partnerships show Axiym's ability to serve high-volume, institutional payment flows. ## Avalanche Integration Axiym uses Avalanche's infrastructure for several key advantages: **Fast Finality**: Avalanche's sub-second finality enables truly instant payment settlement. **Low Costs**: Minimal transaction costs make micro-payments and frequent settlements economically viable. **High Throughput**: Avalanche can handle large payment volumes without congestion. **Institutional Adoption**: Avalanche's focus on regulated financial applications aligns with payment industry needs. **Stablecoin Support**: Native support for major stablecoins used in cross-border payments. **Compliance Tools**: Avalanche's infrastructure supports necessary compliance and monitoring. ## Use Cases **Money Service Businesses (MSBs)**: Enable MSBs to operate globally without massive pre-funding requirements. **Remittance Companies**: Reduce capital needs while improving speed for remittance senders. **Payment Processors**: Provide liquidity for instant cross-border payment processing. **Neobanks**: Enable digital banks to offer international transfers without correspondent banking relationships. **Payroll Platforms**: Support global payroll with instant cross-border payments. 
**B2B Payment Platforms**: Facilitate business-to-business international payments. **Cryptocurrency Exchanges**: Provide off-ramp liquidity in multiple currencies. **Fintech Platforms**: Embed international payment capabilities without capital deployment. ## Capital Efficiency Benefits The financial impact of Axiym's solution is significant: **Traditional Pre-Funding**: Payment companies might need $100M+ in pre-funded accounts globally. **With Axiym**: Same payment volume with $10-30M in working capital—a 70-90% reduction. **Additional Benefits**: - Freed capital can be deployed for business growth - Reduced currency risk from holding multiple currencies - Lower borrowing costs (less capital needed) - Simplified balance sheet - Improved return on equity ## Stablecoin Settlement Axiym uses stablecoins as settlement rails: **USD Stablecoins**: Primary settlement using USDC and other dollar-pegged stablecoins. **Instant Conversion**: Real-time conversion between stablecoins and local currencies. **Global Reach**: Stablecoins enable settlement to any jurisdiction with local currency conversion. **24/7 Settlement**: Unlike traditional banking, stablecoin settlement never closes. **Transparent**: All settlement transactions are recorded on-chain. **Lower Costs**: Stablecoin rails are cheaper than correspondent banking. 
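The working-capital figures above can be sanity-checked with a couple of lines of arithmetic; the dollar amounts are this section's own illustrative numbers, not additional Axiym data:

```python
# Illustrative numbers from the section: ~$100M pre-funded globally
# versus $10-30M of working capital with on-demand liquidity.
prefunded_capital = 100_000_000

def capital_freed(working_capital: float, baseline: float = prefunded_capital) -> float:
    """Percentage of working capital freed relative to the pre-funded baseline."""
    return (1 - working_capital / baseline) * 100

low_end = capital_freed(30_000_000)   # conservative case
high_end = capital_freed(10_000_000)  # aggressive case
print(f"capital freed: {low_end:.0f}% to {high_end:.0f}%")
```

This reproduces the 70-90% reduction quoted in the Capital Efficiency Benefits section.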
## Technology Infrastructure Axiym provides comprehensive payment liquidity infrastructure: - **Liquidity APIs**: APIs for requesting liquidity on-demand - **Settlement Engine**: Real-time settlement processing - **Currency Conversion**: Automatic stablecoin to local currency conversion - **Treasury Dashboard**: Monitor liquidity usage and costs - **Reporting Tools**: Reporting for finance and compliance - **Risk Management**: Tools for managing FX and liquidity exposure - **Webhooks**: Real-time notifications for settlement events - **Integration SDKs**: SDKs for major programming languages ## Regulatory Compliance Axiym operates with focus on compliance: - **MSB Licenses**: Appropriate licenses for money transmission - **KYC/AML**: Integration with payment partners' existing compliance processes - **Transaction Monitoring**: Real-time monitoring for suspicious activity - **Reporting**: Regulatory reporting for relevant jurisdictions - **Sanctions Screening**: Automated screening against global sanctions lists - **Data Privacy**: Compliance with GDPR and other data protection regulations ## Competitive Advantages **Capital Efficiency**: Dramatic reduction in working capital requirements. **Speed**: Instant settlement vs. multi-day traditional settlement. **Global Reach**: Support for major payment corridors worldwide. **Stablecoin Native**: Purpose-built for blockchain-based settlement. **Proven Partners**: Working with major MSBs and payment networks. **Scalable**: Technology can handle massive payment volumes. **Lower Costs**: Reduced total cost of cross-border payments. 
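The liquidity APIs and webhook flow described above might look like the following sketch. Every endpoint, field, and type name here is hypothetical, since Axiym's actual API surface is not documented in this page:

```python
from dataclasses import dataclass

@dataclass
class LiquidityRequest:
    corridor: str      # e.g. "USD->PHP" (hypothetical corridor code)
    amount: float      # destination-currency amount to settle
    settle_asset: str  # stablecoin used as the settlement rail

def request_liquidity(req: LiquidityRequest) -> dict:
    """Hypothetical stand-in for a POST to a liquidity API.

    A real integration would send this over HTTPS with authentication and
    receive a settlement reference, with a webhook reporting status changes.
    """
    return {
        "status": "provisioned",
        "corridor": req.corridor,
        "amount": req.amount,
        "rail": req.settle_asset,
    }

resp = request_liquidity(LiquidityRequest("USD->PHP", 25_000.0, "USDC"))
print(resp["status"])
```

The shape matters more than the names: the payment router asks for liquidity per corridor at settlement time instead of holding pre-funded balances in every destination country.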
## Market Opportunity The cross-border payments market represents enormous opportunity: - **$150+ Trillion**: Annual cross-border payment volume globally - **$40+ Billion**: Annual fees paid for cross-border transfers - **Billions Tied Up**: Massive working capital locked in pre-funded accounts - **Growing Market**: International payments growing faster than domestic - **Pain Point**: Pre-funding is universally recognized as major inefficiency Axiym addresses this market with an on-chain solution. ## Pricing Axiym offers competitive liquidity pricing: - **Usage-Based Fees**: Pay only for liquidity actually used - **Lower than Pre-Funding**: Total cost lower than maintaining pre-funded accounts - **Transparent Pricing**: Clear fee structure with no hidden costs - **Volume Discounts**: Reduced rates for high-volume corridors - **Custom Arrangements**: Tailored pricing for large payment networks Contact Axiym for pricing based on your payment volumes and corridors. ## Benefits for Payment Companies **Reduced Capital Requirements**: Free up 70-90% of working capital. **Faster Settlement**: Move from days to instant settlement. **Expanded Reach**: Enter new corridors without capital deployment. **Lower Costs**: Reduce total cost of operating payment business. **Simplified Operations**: Streamlined treasury and reconciliation. **Better Customer Experience**: Enable instant payments to customers. **Competitive Advantage**: Offer better pricing and speed than competitors. ## Ecosystem Integration Axiym connects multiple ecosystem participants: **Payment Networks**: MSBs and payment processors accessing liquidity. **Banks**: Banking partners providing local currency on/off-ramps. **Stablecoin Issuers**: Partnerships with major stablecoin providers. **Liquidity Providers**: Network of liquidity providers funding the platform. **Blockchain Infrastructure**: Using Avalanche and other networks for settlement. 
**Compliance Providers**: Integration with KYC/AML and monitoring services. # BailSec (/integrations/bailsec) --- title: BailSec category: Audit Firms available: ["C-Chain"] description: BailSec provides smart contract auditing and blockchain security services for protocols on Avalanche and other EVM chains. logo: /images/bailsec.svg developer: BailSec website: https://bailsec.io/ documentation: https://bailsec.io/ --- ## Overview BailSec is a blockchain security firm specializing in smart contract audits and security assessments for Web3 protocols. With a team of experienced security researchers and auditors, BailSec helps development teams identify and remediate vulnerabilities before deployment on Avalanche and other blockchain networks. The firm focuses on delivering thorough, methodical security reviews that ensure protocols are production-ready and protected against potential exploits. BailSec's approach combines deep technical expertise in smart contract security with clear communication and practical recommendations, making security accessible and actionable for development teams of all sizes. ## Services - **Smart Contract Audits**: Security audits for Solidity and EVM-compatible contracts. - **Security Assessments**: Full evaluation of protocol architecture and implementation. - **Vulnerability Detection**: Identification of critical, high, medium, and low severity issues. - **Code Quality Review**: Assessment of code structure, documentation, and best practices. - **Gas Optimization**: Recommendations for improving efficiency and reducing costs. - **Post-Audit Support**: Assistance during vulnerability remediation process. - **Re-Audits**: Verification audits after fixes are implemented. - **Security Documentation**: Detailed audit reports with findings and recommendations. - **Consulting Services**: Advisory on security best practices and protocol design. ## Audit Methodology BailSec follows a structured audit process: 1. 
**Scoping**: Define audit scope, timeline, and deliverables clearly 2. **Documentation Review**: Understand protocol design and intended functionality 3. **Automated Analysis**: Run security scanning and analysis tools 4. **Manual Review**: Thorough line-by-line code examination 5. **Vulnerability Testing**: Test for known attack patterns and edge cases 6. **Report Generation**: Compile findings with severity ratings 7. **Team Presentation**: Review findings with development team 8. **Remediation Support**: Provide guidance during issue resolution 9. **Re-Audit**: Verify all issues are properly addressed ## Avalanche Expertise BailSec has experience auditing protocols on Avalanche C-Chain, understanding platform-specific considerations: - Avalanche smart contract security patterns - EVM compatibility nuances on Avalanche - Cross-chain bridge security - Subnet-specific implementations - High-throughput protocol optimization ## Access Through Areta Marketplace Avalanche builders can connect with BailSec through the [Areta Audit Marketplace](https://areta.market/avalanche): - **Fast Matching**: Submit request and receive quotes within 48 hours - **Competitive Quotes**: Compare BailSec with other leading audit firms - **Transparent Pricing**: Clear costs without hidden fees or gatekeepers - **Subsidy Programs**: Potential eligibility for up to $10k audit cashback - **Streamlined Process**: Simplified engagement compared to direct outreach - **Avalanche-Specific**: Marketplace designed for Avalanche ecosystem ## Audit Focus Areas **DeFi Protocols**: Decentralized exchanges, lending platforms, yield aggregators, and derivatives. **NFT Projects**: NFT minting contracts, marketplaces, and gaming platforms. **Token Contracts**: ERC-20, ERC-721, ERC-1155, and custom token implementations. **Governance Systems**: DAO contracts, voting mechanisms, and treasury management. **Bridge Protocols**: Cross-chain bridges and messaging protocols. 
**Infrastructure**: Protocol infrastructure, oracles, and system contracts. ## Why Choose BailSec **Thorough Analysis**: Comprehensive review combining automated tools and manual examination. **Experienced Team**: Security researchers with extensive smart contract auditing experience. **Clear Communication**: Detailed, understandable reports with actionable recommendations. **Competitive Pricing**: Professional audits at reasonable rates. **Timely Delivery**: Efficient processes ensuring timely audit completion. **Avalanche Experience**: Understanding of Avalanche ecosystem and protocols. **Post-Audit Support**: Available for questions during remediation. ## Getting Started To engage BailSec for an audit: 1. **Via Areta Marketplace** (Recommended for Avalanche): - Visit [areta.market/avalanche](https://areta.market/avalanche) - Submit your audit request with project details - Receive competitive quote from BailSec - Choose based on pricing, timeline, and fit 2. **Direct Contact**: - Visit [bailsec.io](https://bailsec.io/) - Submit audit inquiry - Discuss scope and requirements - Receive audit proposal ## Deliverables BailSec delivers: - **Audit Report**: Complete findings with severity classifications and detailed analysis - **Executive Summary**: High-level overview for stakeholders and investors - **Recommendations**: Specific suggestions for improvements and optimizations - **Remediation Guide**: Clear guidance for addressing identified issues - **Re-Audit Report**: Verification that all issues have been resolved - **Security Badge**: Post-audit badge to display on your website ## Pricing BailSec offers competitive pricing: - Pricing based on codebase size and complexity - Transparent quote process - Flexible payment terms - Volume discounts for multiple audits Contact via Areta marketplace or directly for detailed pricing. 
# Balcony (/integrations/balcony) --- title: Balcony category: Tokenization Platforms available: ["C-Chain", "All Avalanche L1s"] description: "Balcony is a tokenization platform enabling the creation and management of tokenized real-world assets on blockchain." logo: /images/balcony.png developer: Balcony website: https://www.balcony.io/ documentation: https://www.balcony.io/developers --- ## Overview Balcony is a tokenization platform for bringing real-world assets on-chain. It covers the full lifecycle — issuance, investor management, compliance, and secondary trading support — so asset issuers can tokenize securities, real estate, or fund shares while meeting regulatory requirements. ## Features - **Asset Tokenization**: Infrastructure for creating and issuing tokenized real-world assets. - **Compliance Built-In**: Regulatory compliance frameworks for securities issuance. - **Lifecycle Management**: Manage tokenized assets from issuance through redemption. - **Investor Management**: Cap table management, transfer restrictions, and investor communications. - **Distribution Tools**: Primary issuance and secondary trading support. - **Reporting Dashboard**: Reporting and analytics for issuers. ## Getting Started 1. **Contact Balcony**: Reach out through [Balcony](https://www.balcony.io/). 2. **Asset Assessment**: Work with Balcony to assess tokenization requirements. 3. **Platform Setup**: Configure your tokenization infrastructure. 4. **Token Creation**: Create and issue tokenized assets. 5. **Management**: Manage tokenized assets through the platform. ## Documentation For more information, visit [Balcony Documentation](https://www.balcony.io/developers). ## Use Cases - **Real Estate Tokenization**: Fractional ownership of real estate assets. - **Private Securities**: Issue and manage tokenized private securities. - **Fund Tokenization**: Tokenized investment fund structures. - **Alternative Assets**: Tokenize art, commodities, or other alternative investments. 
# Banxa (/integrations/banxa) --- title: Banxa category: Fiat On-Ramp available: ["C-Chain"] description: Banxa provides global fiat-to-crypto payment infrastructure with support for 100+ cryptocurrencies and 150+ fiat currencies across multiple payment methods. logo: /images/banxa.png developer: Banxa website: https://banxa.com/ documentation: https://docs.banxa.com/ --- ## Overview Banxa is a fiat-to-crypto payment provider available in 150+ countries. It lets users buy and sell crypto through local payment methods (cards, bank transfers, Apple Pay, etc.) with built-in KYC/AML and fraud prevention. Banxa supports AVAX and other Avalanche-based tokens, so developers can embed fiat on/off-ramps directly into their applications. ## Features - **Global Coverage**: 150+ countries with localized payment methods and currencies. - **Payment Methods**: Credit/debit cards, bank transfers, Apple Pay, Google Pay, and regional options (iDEAL, Bancontact, SEPA, etc.). - **Multi-Currency Support**: 150+ fiat currencies and 100+ cryptocurrencies including AVAX. - **On-Ramp and Off-Ramp**: Buy crypto with fiat and sell crypto back to fiat. - **Customizable Widget**: White-label payment widget that matches your app's branding. - **Integration Options**: Web SDK, mobile SDKs, REST API, and hosted checkout. - **KYC/AML Compliance**: Built-in identity verification meeting regulatory requirements globally. - **Fraud Prevention**: ML-based fraud detection to protect merchants and users. - **Fast Settlements**: Quick transaction processing and settlement. - **Partner Dashboard**: Analytics and reporting for tracking transactions and conversion rates. - **Webhooks**: Real-time transaction status notifications. ## Getting Started To integrate Banxa into your application: 1. **Create a Banxa Account**: Register at [Banxa's Partner Portal](https://banxa.com/) to become a partner. 2. 
**Complete Onboarding**: Go through Banxa's partner onboarding process to set up your account and access credentials. 3. **Obtain API Credentials**: Receive your API keys and subdomain for both sandbox and production environments. 4. **Choose Integration Method**: Banxa offers multiple integration options: - **Hosted Checkout**: Redirect users to Banxa's hosted payment page - **iFrame Widget**: Embed Banxa's payment widget within your application - **Direct API**: Build a fully custom interface using Banxa's REST API - **SDK Integration**: Use Banxa's SDK for simplified integration 5. **Configure Payment Options**: Select which payment methods and cryptocurrencies to enable for your users. 6. **Test in Sandbox**: Use Banxa's sandbox environment with test credentials to verify your integration. 7. **Go Live**: After testing, switch to production credentials and start processing real transactions. ## Avalanche Support Banxa supports AVAX and other tokens on Avalanche C-Chain. Users can buy Avalanche tokens with their local currency and preferred payment method. ## Documentation For integration guides, API references, and webhook setup, visit: - [Banxa Documentation](https://docs.banxa.com/) - [API Reference](https://docs.banxa.com/docs/api-reference) - [Integration Guide](https://docs.banxa.com/docs/getting-started) - [Partner Portal](https://banxa.com/partners/) ## Use Cases on Avalanche **Wallets**: Let users buy AVAX and Avalanche tokens directly inside your wallet app. **DeFi Platforms**: Give users a fiat on-ramp to acquire the tokens they need for your protocol. **NFT Marketplaces**: Allow fiat purchases of NFTs on Avalanche. **GameFi**: Let players buy in-game tokens with their local payment method. **Exchanges**: Add a fiat gateway for crypto purchases. **dApps**: Reduce onboarding friction with localized payment options. 
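The hosted-checkout option above redirects users to a URL on your partner subdomain. The sketch below assembles such a URL; the query parameter names (`coinType`, `fiatType`, `walletAddress`) are assumptions for illustration, so check Banxa's API reference for the actual parameters:

```python
from urllib.parse import urlencode

def build_checkout_url(subdomain: str, **params: str) -> str:
    """Assemble a hosted-checkout redirect URL.

    The parameter names passed below are illustrative assumptions;
    consult Banxa's documentation for the real query parameters.
    """
    return f"https://{subdomain}.banxa.com/?{urlencode(params)}"

url = build_checkout_url(
    "yourapp",  # partner subdomain issued during onboarding
    coinType="AVAX",
    fiatType="USD",
    walletAddress="0x0000000000000000000000000000000000000000",
)
print(url)
```

Pre-filling the destination wallet address this way means users land on the payment page with the purchase already pointed at their Avalanche account.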
## Pricing Banxa operates on a transaction fee model with pricing that includes: - **Transaction Fees**: Typically 1% to 5% depending on payment method and region - **Payment Method Fees**: Variable rates based on local payment processing costs - **Volume Discounts**: Available for high-volume partners with significant transaction volumes - **Revenue Sharing**: Partnership opportunities with customized fee arrangements - **No Setup Fees**: No upfront costs to integrate Banxa The exact fee structure varies by region, payment method, and partnership arrangement. For detailed pricing information and custom enterprise solutions, contact Banxa's partnership team. ## Compliance and Security - **Global Licensing**: Registered with AUSTRAC, FinCEN, and multiple European regulators. - **KYC/AML Compliance**: Identity verification and screening meeting international standards. - **Data Protection**: GDPR compliant. - **PCI DSS Certified**: Certified for secure card processing. - **Fraud Detection**: ML-based fraud prevention. - **Regular Audits**: Independent third-party compliance and security audits. ## Partner Benefits - **Dedicated Support**: Access to Banxa's partner success team for integration assistance - **Marketing Resources**: Co-marketing opportunities and promotional materials - **API Stability**: Reliable, well-documented API with high uptime SLA - **Regular Updates**: Continuous addition of new payment methods and supported regions - **Analytics Dashboard**: Detailed transaction data and conversion analytics - **Custom Solutions**: Tailored integration and feature development for enterprise partners # Bastion (/integrations/bastion) --- title: Bastion category: Stablecoins as a Service available: ["C-Chain", "All Avalanche L1s"] description: "Bastion provides institutional stablecoin infrastructure with enterprise-grade security, compliance, and API services." 
logo: /images/bastion.png developer: Bastion website: https://www.bastion.com/ --- ## Overview Bastion provides stablecoin infrastructure for institutions. It handles issuance, management, compliance (AML/KYC), and reserve auditing so that businesses can deploy regulated stablecoin products on Avalanche and other chains. ## Features - **Enterprise Security**: Bank-grade security for stablecoin operations. - **Compliance Tools**: Built-in AML/KYC and regulatory compliance frameworks. - **API Services**: REST APIs for programmatic stablecoin management. - **Multi-Chain Support**: Deploy stablecoins on multiple chains including Avalanche. - **White-Label Solutions**: Customizable stablecoin products under your own brand. - **Reserve Management**: Transparent, audited reserve management. ## Getting Started 1. **Contact Bastion**: Reach out through their [website](https://www.bastion.com/) to discuss requirements. 2. **Solution Design**: Work with their team to design your stablecoin implementation. 3. **Integration**: Implement Bastion's APIs in your systems. 4. **Launch**: Deploy your stablecoin on Avalanche. ## Use Cases - **Branded Stablecoins**: Launch white-label stablecoin products. - **Payment Infrastructure**: Build compliant payment systems on stablecoin rails. - **Corporate Treasury**: Manage digital asset treasury operations. - **Cross-Border Payments**: Use stablecoins for faster, cheaper international transfers. ## Conclusion Bastion handles the compliance and infrastructure side of stablecoin issuance, letting institutions deploy regulated stablecoin products on Avalanche. # Benqi (/integrations/benqi) --- title: Benqi category: DeFi available: ["C-Chain"] description: "Benqi is an Avalanche-native algorithmic liquidity market protocol, enabling users to effortlessly lend, borrow, and earn interest with their digital assets." 
logo: /images/benqi.jpg developer: Benqi Finance website: https://benqi.fi/ documentation: https://docs.benqi.fi/ --- ## Overview Benqi is an Avalanche-native lending and borrowing protocol. Users can supply crypto assets to earn interest, borrow against collateral, and liquid-stake AVAX via sAVAX. Built specifically for C-Chain, it takes advantage of low fees and fast finality. ## Features - **Lending Markets**: Supply crypto assets to earn variable interest. - **Borrowing**: Borrow against supplied collateral at competitive rates. - **Safety Module**: QI stakers backstop the protocol against shortfall events. - **Liquid Staking**: Stake AVAX and receive sAVAX, keeping your liquidity available. - **Governance**: Protocol decisions are voted on by QI token holders. - **Flash Loans**: Uncollateralized loans for arbitrage or liquidations within a single transaction. ## Getting Started 1. **Access Platform**: Visit [Benqi](https://benqi.fi/). 2. **Connect Wallet**: Connect your Web3 wallet with AVAX for transactions. 3. **Supply Assets**: - Deposit supported assets to earn interest - Use deposits as collateral for borrowing 4. **Manage Positions**: Monitor health factors and adjust positions as needed. ## Documentation For protocol documentation and guides, visit the [Benqi Documentation](https://docs.benqi.fi/). ## Use Cases - **Passive Income**: Earn interest by supplying assets to lending markets. - **Leveraged Trading**: Borrow against collateral to increase exposure. - **Liquid Staking**: Stake AVAX via sAVAX without locking up liquidity. - **Flash Loans**: Execute arbitrage or liquidation strategies in one transaction. - **Treasury Management**: Put idle crypto assets to work earning yield. # Biconomy (/integrations/biconomy) --- title: Biconomy category: Wallets and Account Abstraction available: ["C-Chain"] description: Leverage the Biconomy Smart Accounts Platform to endlessly add UX capability by easily & securely plugging in programmable modules. 
logo: /images/biconomy.png developer: Biconomy website: https://www.biconomy.io/ documentation: https://docs.biconomy.io/ --- ## Overview Biconomy provides account abstraction infrastructure for dApps. It lets developers offer gasless transactions, social login, and batched operations through Smart Accounts: programmable smart contract wallets controlled by the user's signing key. This removes most of the UX friction that makes onboarding new users difficult. ## Features - **Account Abstraction**: Smart Accounts abstract wallet complexity so users don't need to understand gas, nonces, or signing flows. - **Gasless Transactions**: Sponsors gas on behalf of users so they can interact with your dApp without holding native tokens. - **Modular Architecture**: Plug in programmable modules (session keys, batch transactions, spending limits) to customize wallet behavior. - **Multi-Chain Support**: Deploy across multiple chains with the same Smart Account setup. - **Social Login**: Onboard users with email or social accounts instead of requiring a browser extension wallet. ## Getting Started 1. **Sign Up**: Create an account on the [Biconomy Dashboard](https://dashboard.biconomy.io/) to get your API keys. 2. **Set Up Smart Accounts**: Configure Smart Accounts for your dApp through the dashboard. 3. **Integrate Gasless Transactions**: Follow the [quickstart guide](https://docs.biconomy.io/about#quickstart) to sponsor gas for your users. 4. **Add Modules**: Attach programmable modules (session keys, batch calls) as needed. 5. **Deploy on Multiple Chains**: Use the same Smart Account configuration across supported networks. ## Documentation For guides, API references, and integration examples, visit the [Biconomy Documentation](https://docs.biconomy.io/). ## Use Cases - **DeFi Applications**: Let users interact with lending, swapping, or staking protocols without worrying about gas. - **Gaming dApps**: Hide blockchain mechanics behind the scenes so players focus on gameplay. 
- **NFT Marketplaces**: Remove the need for users to manage gas fees or sign multiple transactions when minting or trading NFTs. # Bilira (/integrations/bilira) --- title: Bilira category: Assets available: ["C-Chain"] description: "Bilira provides TRYB (Turkish Lira stablecoin), offering a regulated tokenized representation of the Turkish Lira on blockchain." logo: /images/bilira.png developer: Bilira website: https://www.bilira.co/ documentation: https://www.bilira.co/ --- ## Overview Bilira is a regulated stablecoin issuer providing TRYB, a Turkish Lira-backed stablecoin for users and businesses in Turkey and globally. With regulatory compliance and transparent reserve management, Bilira facilitates payments, remittances, and digital asset transactions denominated in Turkish Lira. # BitGo (/integrations/bitgo) --- title: BitGo category: Custody available: ["C-Chain", "All Avalanche L1s"] description: "BitGo provides institutional digital asset custody, wallets, and security solutions with multi-signature technology and insurance coverage." logo: /images/bitgo.png developer: BitGo website: https://www.bitgo.com/ documentation: https://developers.bitgo.com/ --- ## Overview BitGo is a regulated digital asset custodian serving institutions, exchanges, and enterprises. It provides multi-signature wallets, hot/cold storage, insurance coverage, and compliance tooling. BitGo holds trust company status in multiple jurisdictions. ## Features - **Regulated Custody**: Trust company status in multiple jurisdictions. - **Multi-Signature Wallets**: Configurable multi-sig policies for security and access control. - **Hot and Cold Wallets**: Hot wallets for active operations, cold storage for long-term holdings. - **Insurance Coverage**: Insurance protection for custodied assets. - **Compliance Infrastructure**: Built-in tools for regulatory reporting. - **API Integration**: REST APIs for programmatic wallet and custody management. 
- **Multi-Asset Support**: Hundreds of supported cryptocurrencies and tokens. - **HSM Protection**: Hardware security modules and offline key storage. ## Documentation For guides and API documentation, visit the [BitGo Developer Documentation](https://developers.bitgo.com/). ## Use Cases - **Exchange Custody**: Secure custody for exchange-held assets. - **Institutional Investment**: Custody for funds, family offices, and institutional investors. - **Corporate Treasury**: Digital asset management for enterprise treasuries. - **Trading Operations**: Hot wallet infrastructure for active trading with multi-sig controls. # Blackhole (/integrations/blackhole) --- title: Blackhole category: DeFi available: ["C-Chain"] description: "Blackhole is a decentralized exchange protocol providing liquidity solutions and trading services on Avalanche." logo: /images/blackhole.png developer: Blackhole website: https://blackhole.xyz documentation: https://docs.blackhole.xyz --- ## Overview Blackhole is a decentralized exchange on Avalanche C-Chain. It provides token swaps, liquidity pools, and yield farming with the low fees and fast finality you get on C-Chain. ## Features - **Token Swaps**: Swap tokens with optimized routing for competitive pricing. - **Liquidity Pools**: Provide liquidity to trading pairs and earn a share of swap fees. - **Yield Farming**: Earn additional rewards through liquidity mining and staking programs. - **Low Fees**: Transactions cost fractions of a cent on C-Chain. - **Fast Finality**: Trades confirm in under a second. - **Multi-Asset Trading**: Access a range of tokens in the Avalanche ecosystem. ## Getting Started 1. **Access Platform**: Visit [Blackhole](https://blackhole.xyz). 2. **Connect Wallet**: Connect your wallet (MetaMask, Core, etc.) and ensure you have AVAX for fees. 3. **Trade Tokens**: - Select tokens for swapping - Review exchange rates and fees - Confirm transactions 4. 
**Provide Liquidity**: Add liquidity to pools to earn trading fees and rewards. ## Documentation For more information, visit [Blackhole Docs](https://docs.blackhole.xyz). ## Use Cases - **Token Swapping**: Trade tokens with minimal slippage. - **Liquidity Mining**: Earn rewards by providing liquidity to trading pairs. - **Yield Farming**: Participate in farming programs for additional returns. # BlackRock (/integrations/blackrock) --- title: BlackRock category: Assets available: ["C-Chain"] description: "BlackRock is the world's largest asset manager offering tokenized funds including BUIDL, providing institutional-grade investment solutions." logo: /images/blackrock.png developer: BlackRock website: https://www.blackrock.com/ documentation: https://www.blackrock.com/us/individual/products/326614/ishares-blockchain-and-tech-etf --- ## Overview BlackRock is the world's largest asset manager with over $10 trillion in assets under management, bringing traditional financial products to blockchain through tokenization. Through tokenized funds like BUIDL (BlackRock USD Institutional Digital Liquidity Fund), BlackRock gives institutional investors access to blockchain-native investment products that combine traditional finance compliance with on-chain efficiency and transparency. # Blockdaemon (/integrations/blockdaemon) --- title: Blockdaemon category: Validator Infrastructure available: ["C-Chain", "All Avalanche L1s"] description: "Blockdaemon is an institutional-grade blockchain infrastructure platform providing node management, staking, and API services." logo: /images/blockdaemon.png developer: Blockdaemon website: https://www.blockdaemon.com/ documentation: https://docs.blockdaemon.com/ --- ## Overview Blockdaemon is an institutional blockchain infrastructure platform used by enterprises, custodians, and financial institutions. 
It provides node management, staking infrastructure, and APIs for 60+ blockchain protocols including Avalanche, enabling organizations to participate in blockchain networks with enterprise-grade reliability and compliance. ## Features - **Enterprise Grade**: SOC 2 Type 2 certified infrastructure for institutional requirements - **Staking Infrastructure**: Professional staking services for institutional clients - **Node Management**: Fully managed node operations with 99.9% uptime SLA - **Ubiquity API**: Unified blockchain API for multi-chain access - **Global Distribution**: Geographically distributed infrastructure for resilience - **Compliance Ready**: Infrastructure designed for regulated financial institutions ## Getting Started 1. **Contact Blockdaemon**: Reach out through [Blockdaemon](https://www.blockdaemon.com/) for enterprise access 2. **Solution Design**: Work with Blockdaemon to design your infrastructure needs 3. **Deployment**: Deploy validators and nodes through Blockdaemon's platform 4. **Integration**: Connect your systems using Blockdaemon's APIs 5. **Operations**: Manage and monitor through enterprise dashboard ## Documentation For technical documentation, visit [Blockdaemon Docs](https://docs.blockdaemon.com/). ## Use Cases - **Institutional Staking**: Run compliant staking operations for funds and institutions - **Custodian Infrastructure**: Provide blockchain infrastructure for custody services - **Enterprise Validation**: Participate in Avalanche validation with enterprise-grade infrastructure - **Multi-Chain Operations**: Manage infrastructure across multiple blockchains # Blocknative (/integrations/blocknative) --- title: Blocknative category: Developer Tooling available: ["C-Chain"] description: "Blocknative provides real-time blockchain infrastructure including mempool monitoring, transaction simulation, and gas estimation." 
logo: /images/blocknative.png developer: Blocknative website: https://www.blocknative.com/ documentation: https://docs.blocknative.com/ --- ## Overview Blocknative provides real-time blockchain infrastructure that helps developers build better Web3 experiences. With mempool monitoring, transaction simulation, and gas estimation services, Blocknative enables applications to provide users with accurate transaction information and improved reliability. ## Features - **Mempool Monitoring**: Real-time visibility into pending transactions - **Transaction Simulation**: Preview transaction outcomes before submission - **Gas Estimation**: Accurate gas price recommendations - **Wallet Notifications**: Real-time transaction status updates - **MEV Protection**: Tools to protect against MEV extraction - **API Access**: Clean APIs for all services ## Getting Started To use Blocknative: 1. **Sign Up**: Create account at [Blocknative](https://www.blocknative.com/) 2. **Get API Key**: Access credentials from dashboard 3. **Choose Service**: Select mempool, simulation, or gas services 4. **Integrate**: Implement Blocknative APIs in your application 5. **Monitor**: Use dashboard for service monitoring ## Documentation For API documentation, visit [Blocknative Docs](https://docs.blocknative.com/). ## Use Cases - **Transaction Monitoring**: Track pending transactions in real-time - **Gas Optimization**: Provide users with accurate gas estimates - **DeFi Applications**: Simulate trades before execution - **User Experience**: Improve transaction reliability # Blockpass (/integrations/blockpass) --- title: Blockpass category: KYC / Identity Verification available: ["C-Chain"] description: "Blockpass provides KYC identity verification services for blockchain applications, enabling compliant user onboarding." 
logo: /images/blockpass.png developer: Blockpass website: https://www.blockpass.org/ documentation: https://www.blockpass.org/ --- ## Overview Blockpass is a KYC identity verification platform for blockchain and fintech applications. Users verify their identity once and can reuse those credentials across multiple platforms, reducing repeated onboarding friction. Businesses get document verification, biometric checks, and AML screening out of the box. ## Features - **Reusable Identity**: Users verify once and reuse credentials across participating platforms. - **Global Compliance**: Meets regulatory requirements across multiple jurisdictions. - **Document Verification**: Automated verification of government-issued IDs. - **Biometric Authentication**: Facial recognition and liveness detection. - **AML Screening**: Automated screening against sanctions lists and PEP databases. - **Data Privacy**: Users control their personal information. - **Multi-Chain Support**: Works with Avalanche and other blockchain networks. ## Documentation For more information, visit the [Blockpass website](https://www.blockpass.org/). ## Use Cases - **User Onboarding**: Streamlined KYC for new users joining your platform. - **Regulatory Compliance**: Meet KYC/AML requirements across jurisdictions. - **Reusable Credentials**: Users carry verified identity across platforms without re-verifying. - **Fraud Prevention**: Biometric checks and document verification reduce identity fraud. # Blockscout (/integrations/blockscout) --- title: Blockscout category: Explorers available: ["C-Chain", "All EVM Chains"] description: Blockscout is an open-source, multichain block explorer for EVM-based networks, providing tools for searching, analyzing, and verifying blockchain data across hundreds of chains. It supports transaction tracking, contract verification, token and NFT analytics, and developer APIs. 
logo: /images/blockscout.png developer: Blockscout website: https://www.blockscout.com/ documentation: https://docs.blockscout.com/ --- ## Overview Blockscout is an open-source block explorer for all EVM-compatible networks, including Avalanche C-Chain. It offers a unified interface for exploring blocks, transactions, accounts, tokens, and NFTs across hundreds of supported chains. ## Features - **Multichain Support**: Explore and analyze data across 1000+ EVM-based blockchains and rollups. - **Transaction & Address Search**: Track transactions, monitor wallet activity, and inspect smart contract interactions. - **Smart Contract Verification**: Verify, publish, and interact with smart contracts directly from the explorer. - **Token & NFT Analytics**: Analyze ERC20 token movements, NFT collections, and associated metadata. - **ENS Lookup**: Resolve and search Ethereum Name Service addresses. - **Developer APIs**: Access REST and JSON RPC APIs, tagging, debuggers, and more for custom integrations. - **Open Source**: Fully open-source and community-driven. ## Getting Started 1. **Explore Blockscout**: Visit [Blockscout](https://www.blockscout.com/) to browse supported networks and try the explorer features. 2. **Self-Host for Avalanche**: For Avalanche-specific or private deployments, see the [Avalanche L1 Toolbox Self-Hosted Explorer Guide](https://build.avax.network/console/layer-1/explorer-setup). 3. **Verify Contracts**: Use the explorer to verify and interact with smart contracts on your network. 4. **Integrate APIs**: Leverage Blockscout's APIs for analytics, dApp integrations, and custom dashboards. 5. **Review Documentation**: Access full guides and references at the [Blockscout Documentation](https://docs.blockscout.com/). ## Documentation For more details, visit the [Blockscout Documentation](https://docs.blockscout.com/). ## Use Cases - **Network Transparency**: Provide users and developers with a transparent view of blockchain activity. 
- **Smart Contract Development**: Verify, debug, and interact with contracts on EVM chains. - **Token & NFT Analytics**: Track token transfers, NFT mints, and ecosystem trends. - **Custom Deployments**: Launch your own explorer instance for private or public networks (see [Avalanche L1 Toolbox](https://build.avax.network/tools/l1-toolbox#selfHostedExplorer)). - **dApp & Wallet Integrations**: Use Blockscout APIs to power analytics, dashboards, and wallet features. # Blocksec (/integrations/blocksec) --- title: Blocksec category: Security Audits available: ["C-Chain", "All Avalanche L1s"] description: "BlockSec provides high-quality security audits with particular expertise in USP exploit detection and blockchain security." logo: /images/blocksec.jpeg developer: BlockSec website: https://blocksec.com/ --- ## Overview BlockSec is a security audit provider specializing in blockchain security with particular expertise in identifying USP exploits. Their team combines academic research with practical security experience to deliver audits for Avalanche projects. Noted in the ecosystem as "extremely helpful," BlockSec emphasizes practical vulnerability detection and remediation. ## Features - **Smart Contract Audits**: Thorough code reviews and vulnerability assessments. - **USP Exploit Detection**: Specialized expertise in identifying and preventing exploit vectors. - **Security Monitoring**: Real-time monitoring solutions for blockchain projects. - **Threat Intelligence**: Insights on emerging attack patterns and vulnerabilities. - **Academic Approach**: Research-based methodology for security analysis. - **Incident Response**: Support during security incidents and exploits. - **Continuous Protection**: Ongoing security monitoring and assessment. ## Getting Started 1. **Initial Contact**: Reach out through their website to discuss your audit requirements. 2. **Audit Planning**: Define the scope, timeline, and specific security objectives. 3. 
**Audit Process**: - Smart contract code review - Automated vulnerability scanning - Manual security analysis - Exploit testing 4. **Findings Report**: Detailed documentation of security issues with severity ratings. 5. **Remediation Guidance**: Specific recommendations for addressing identified vulnerabilities. ## Use Cases - **DeFi Protocols**: Security validation for financial applications. - **NFT Marketplaces**: Identifying vulnerabilities in NFT-related smart contracts. - **Cross-Chain Applications**: Security assessment of bridge and cross-chain mechanisms. - **Exchange Platforms**: Audit of exchange and trading contract implementations. - **Protocol Implementations**: Verification of protocol security properties. # Blockworks (/integrations/blockworks) --- title: Blockworks category: Analytics & Data available: ["C-Chain"] description: "Blockworks provides institutional-grade research, market analysis, and data insights for the Avalanche ecosystem and broader crypto markets." logo: /images/blockworks.jpg developer: Blockworks website: https://blockworks.co/ documentation: https://blockworks.co/ --- ## Overview Blockworks is a financial media and research platform providing institutional-grade analysis, market insights, and data for the cryptocurrency ecosystem. Their coverage includes in-depth research on Avalanche and its ecosystem for developers, investors, and institutions. # Bridge (/integrations/bridge-orchestrator) --- title: Bridge category: Orchestrators available: ["C-Chain", "All Avalanche L1s"] description: "Bridge is a payment orchestration platform enabling stablecoin and fiat integration for modern applications." logo: /images/bridge.png developer: Bridge website: https://www.bridge.xyz/ documentation: https://apidocs.bridge.xyz/ --- ## Overview Bridge is a payment orchestration platform that integrates stablecoins and traditional payment methods into any application. 
As an orchestration layer, Bridge handles multi-rail payments, currency conversions, and compliance requirements, so developers can build payment experiences without managing the underlying infrastructure. ## Features - **Payment Orchestration**: Unified API for managing complex payment flows - **Multi-Rail Support**: Support for stablecoins, bank transfers, and card payments - **Automatic Routing**: Smart routing to optimize payment success and cost - **Real-Time Conversions**: Instant conversions between fiat and crypto - **Compliance Integration**: Built-in KYC/AML for regulatory compliance - **Developer APIs**: REST APIs with SDKs ## Getting Started 1. **Create Account**: Sign up at [Bridge](https://www.bridge.xyz/) and complete verification 2. **Access APIs**: Get your API credentials from the developer dashboard 3. **Implement Integration**: Use Bridge SDKs or REST APIs to add payment orchestration 4. **Test and Launch**: Use sandbox environment for testing before going live ## Documentation For complete API documentation and integration guides, visit [Bridge Docs](https://apidocs.bridge.xyz/). ## Use Cases - **Multi-Rail Payments**: Accept payments via multiple methods with automatic routing - **Cross-Border Transfers**: Enable efficient international payments - **Crypto-Fiat Bridges**: Convert between traditional and digital currencies - **Embedded Finance**: Add financial services to any application # Bridge (/integrations/bridge-stablecoin) --- title: Bridge category: Stablecoins as a Service available: ["C-Chain", "All Avalanche L1s"] description: "Bridge provides stablecoin orchestration infrastructure for integrating digital dollars into any application." logo: /images/bridge.png developer: Bridge website: https://www.bridge.xyz/ documentation: https://apidocs.bridge.xyz/ --- ## Overview Bridge is a stablecoin orchestration platform that lets developers and businesses integrate digital dollars into their applications. 
Bridge handles stablecoin minting, burning, transfers, and conversions, so developers can focus on the user experience while Bridge manages the underlying stablecoin operations. ## Features - **Stablecoin Orchestration**: Unified API for managing multiple stablecoin operations - **Instant Conversions**: Real-time conversions between fiat and stablecoins - **Multi-Stablecoin Support**: Support for major stablecoins including USDC, USDT, and more - **Developer-First APIs**: Clean, well-documented APIs for rapid integration - **Compliance Built-In**: Integrated KYC/AML and regulatory compliance - **Global Coverage**: Support for users and transactions worldwide ## Getting Started 1. **Sign Up**: Create an account on the [Bridge platform](https://www.bridge.xyz/) 2. **Get API Keys**: Access your API credentials from the developer dashboard 3. **Integrate**: Use Bridge's APIs to add stablecoin functionality to your application 4. **Go Live**: Launch your stablecoin-powered features on Avalanche ## Documentation For API references and integration tutorials, visit the [Bridge Documentation](https://apidocs.bridge.xyz/). ## Use Cases - **Payment Apps**: Build payment applications with instant stablecoin settlements - **Fintech Products**: Add crypto capabilities to traditional fintech applications - **NFT Marketplaces**: Enable stablecoin payments for digital asset purchases - **Cross-Border Transfers**: Facilitate international money transfers using stablecoins # BWARE Labs (/integrations/bware-labs) --- title: BWARE Labs category: RPC Endpoints available: ["C-Chain"] description: "BWARE Labs provides decentralized blockchain API infrastructure with high-performance RPC endpoints for Web3 applications." logo: /images/bware.png developer: BWARE Labs website: https://bwarelabs.com/ documentation: https://docs.bwarelabs.com/ --- ## Overview > **Shut Down:** Bware Labs was acquired by Alchemy in August 2024, and Blast API was deprecated on October 31, 2025. 
All Blast RPC endpoints are no longer available. Use [Alchemy](/integrations/alchemy) instead. BWARE Labs was a decentralized blockchain infrastructure provider that offered high-performance RPC endpoints and API services for Web3 applications. Through its Blast API platform, BWARE Labs delivered reliable, scalable access to multiple blockchain networks including Avalanche, with a focus on performance, reliability, and decentralization. ## Features - **Decentralized Infrastructure**: Distributed network of node providers for enhanced reliability. - **High Performance**: Optimized RPC endpoints for fast response times and low latency. - **Multi-Chain Support**: Access to multiple blockchain networks through a single platform. - **Load Balancing**: Intelligent routing to ensure optimal performance and uptime. - **Analytics Dashboard**: Monitor API usage and performance metrics in real-time. - **Scalable Infrastructure**: Automatically scales to meet application demands. - **WebSocket Support**: Real-time blockchain data through WebSocket connections. - **Archive Nodes**: Access to complete historical blockchain data. ## Documentation For integration guides and API documentation, visit the [BWARE Labs Documentation](https://docs.bwarelabs.com/). ## Use Cases BWARE Labs served various blockchain development needs: - **DApp Development**: Reliable RPC infrastructure for decentralized applications. - **High-Traffic Applications**: Scalable infrastructure for applications with demanding requirements. - **Real-Time Data**: WebSocket connections for real-time blockchain events. - **Historical Queries**: Archive node access for historical blockchain data. # Certora (/integrations/certora) --- title: Certora category: Security Audits available: ["C-Chain", "All Avalanche L1s"] description: "Certora provides high-quality formal verification and security audits for smart contracts with specialized expertise in mathematical verification." 
logo: /images/certora.png developer: Certora website: https://www.certora.com/ documentation: https://docs.certora.com/en/latest/ --- ## Overview Certora is a security firm specializing in formal verification and security audits for smart contracts. Using their proprietary Certora Prover, they mathematically verify security properties of smart contracts, providing assurance beyond traditional auditing. Their team of formal verification experts is highly recommended by security teams in the Avalanche ecosystem. ## Features - **Formal Verification**: Mathematical verification of smart contract properties using the Certora Prover. - **Smart Contract Audits**: Security reviews combining formal methods with traditional auditing. - **Custom Specification Development**: Creation of formal specifications for critical security properties. - **Automated Analysis**: Advanced automated tools for detecting vulnerabilities. - **Security Research**: Ongoing research into formal verification techniques for blockchain security. - **Developer Education**: Resources for implementing formally verifiable smart contracts. ## Getting Started 1. **Initial Contact**: Reach out to Certora at [nooly@certora.com](mailto:nooly@certora.com) or [moran@certora.com](mailto:moran@certora.com) to discuss your project. 2. **Scoping Process**: Define the audit scope, formal verification requirements, and timeline. 3. **Audit and Verification Process**: - Formal specification development - Mathematical verification using the Certora Prover - Traditional security audit methodologies - Vulnerability assessment 4. **Results Delivery**: Detailed report of findings, verification results, and remediation recommendations. 5. **Post-Audit Support**: Assistance with implementing fixes and re-verification as needed. ## Documentation For more details, visit the [Certora Documentation](https://docs.certora.com/en/latest/). 
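Formal verification differs from testing in that the prover checks a stated property for every possible input rather than a sampled few. As a purely illustrative sketch (Certora specifications are written in its own CVL language, not TypeScript, and `ToyToken` is a hypothetical model, not a real contract), here is the kind of invariant a specification typically states, expressed as a runnable check:

```typescript
// Toy token model, illustrative only. A Certora spec would state the
// invariant below in CVL and the Prover would verify it for ALL inputs;
// this sketch merely checks it for one concrete execution trace.
class ToyToken {
  private balances = new Map<string, bigint>();
  constructor(private readonly total: bigint, owner: string) {
    this.balances.set(owner, total);
  }
  balanceOf(account: string): bigint {
    return this.balances.get(account) ?? 0n;
  }
  totalSupply(): bigint {
    return this.total;
  }
  transfer(from: string, to: string, amount: bigint): void {
    const fromBalance = this.balanceOf(from);
    if (fromBalance < amount) throw new Error("insufficient balance");
    this.balances.set(from, fromBalance - amount);
    this.balances.set(to, this.balanceOf(to) + amount);
  }
}

// Invariant: transfer must never change totalSupply.
const token = new ToyToken(100n, "alice");
const supplyBefore = token.totalSupply();
token.transfer("alice", "bob", 40n);
console.assert(token.totalSupply() === supplyBefore, "supply changed");
console.assert(token.balanceOf("bob") === 40n, "bad recipient balance");
```

A prover explores the reachable state space symbolically, so the same property is established without enumerating inputs; the sketch above only demonstrates what such a property says.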
## Use Cases - **High-Value DeFi Protocols**: Mathematical verification of financial properties. - **Complex Smart Contract Systems**: Formal verification of intricate contract interactions. - **Novel Financial Mechanisms**: Validation of innovative DeFi mechanisms and algorithms. - **Protocol Implementations**: Verification of protocol specification compliance. - **Security-Critical Applications**: Projects where security failures would have significant consequences. # Chainbase (/integrations/chainbase) --- title: Chainbase category: RPC Endpoints available: ["C-Chain"] description: "Chainbase provides blockchain data infrastructure with RPC services and data APIs for Web3 development." logo: /images/chainbase.jpeg developer: Chainbase website: https://chainbase.com/ documentation: https://docs.chainbase.com/ --- ## Overview Chainbase provides RPC services, data APIs, and indexing for Web3 applications. It supports multiple networks including Avalanche, giving developers access to both raw blockchain data and structured, indexed information through a single platform. ## Features - **RPC Infrastructure**: Reliable RPC endpoints for blockchain access. - **Data APIs**: Structured APIs for querying indexed blockchain data. - **Multi-Chain Support**: Unified access across multiple networks. - **Indexing Services**: Pre-indexed data for faster queries. - **Real-Time Data**: Current blockchain state and live updates. - **Historical Data**: Full historical data with archive node support. - **Developer Tools**: SDKs for integration. - **Scalable Infrastructure**: Infrastructure that scales with your application. ## Documentation For detailed guides and API references, visit the [Chainbase Documentation](https://docs.chainbase.com/). ## Use Cases - **DApp Infrastructure**: RPC and data access for decentralized applications. - **Data Analytics**: Structured data for blockchain analytics. - **Wallet Services**: Token balances, transactions, and account information. 
- **DeFi Platforms**: DeFi protocol data and real-time updates. - **NFT Applications**: NFT metadata and ownership tracking. # Chainlink CCIP (/integrations/chainlink-ccip) --- title: Chainlink CCIP category: Crosschain Solutions available: ["C-Chain"] description: "Chainlink CCIP (Cross-Chain Interoperability Protocol) enables secure cross-chain messaging and token transfers on Avalanche with built-in security features and DON support." logo: /images/chainlink.png developer: Chainlink website: https://chain.link/cross-chain documentation: https://docs.chain.link/ccip --- ## Overview Chainlink CCIP (Cross-Chain Interoperability Protocol) is an interoperability protocol for cross-chain messaging and token transfers on Avalanche. Built by Chainlink, CCIP uses decentralized oracle networks (DONs) for security and reliable cross-chain communication. ## Features - **Programmable Token Transfers**: Transfer tokens across chains with custom logic. - **Cross-Chain Messages**: Send arbitrary messages between supported networks. - **Risk Management**: Built-in risk management through circuit breakers. - **DON Security**: Secured by decentralized oracle networks. - **Anti-Fraud Network**: Additional security layer monitoring cross-chain transactions. - **Native Token Transfers**: Support for native token movements across chains. - **Developer Tools**: SDKs and testing environments. ## Getting Started 1. **Access Documentation**: Review the [CCIP Documentation](https://docs.chain.link/ccip). 2. **Choose Integration**: - Token transfers - Message passing - Combined transfers and messages 3. **Implement Protocol**: - Install CCIP contracts - Configure for Avalanche - Test cross-chain operations 4. **Monitor Transfers**: Track messages and transfers through CCIP explorer. ## Documentation For more details, visit the [Chainlink CCIP Documentation](https://docs.chain.link/ccip). ## Use Cases - **Token Bridges**: Build secure token transfer bridges. 
- **Cross-Chain dApps**: Develop applications spanning multiple networks. - **DeFi Protocols**: Create cross-chain DeFi applications. - **Gaming**: Implement cross-chain gaming mechanics. - **Governance**: Enable cross-chain governance systems. # Chainlink Data Feeds (/integrations/chainlink-data-feeds) --- title: Chainlink Data Feeds category: Data Feeds available: ["C-Chain"] description: Chainlink Data Feeds are the quickest way to connect your smart contracts to real-world data such as asset prices and reserve balances. logo: /images/chainlink.png developer: Chainlink Labs website: https://chain.link/ documentation: https://chain.link/data-feeds --- ## Overview Chainlink Data Feeds are the quickest way to connect your smart contracts to real-world data such as asset prices and reserve balances. If you have already started a project and need to integrate Chainlink, you can [add Chainlink to your existing project](https://docs.chain.link/resources/create-a-chainlinked-project?parent=dataFeeds#installing-into-existing-projects) with the `@chainlink/contracts` NPM package. ## Data Feed Best Practices Before you use Data Feeds, read and understand the best practices on the [Selecting Quality Data Feeds](https://docs.chain.link/data-feeds/selecting-data-feeds) page. For best practices about data for specific asset types, see the following sections: - [Best Practices for ETF and Forex feeds](https://docs.chain.link/data-feeds/selecting-data-feeds#etf-and-forex-feeds) - [Best Practices for Exchange Rate Feeds](https://docs.chain.link/data-feeds/selecting-data-feeds#exchange-rate-feeds) - [Risk Categories](https://docs.chain.link/data-feeds/selecting-data-feeds#data-feed-categories) ## Types of Data Feeds Data feeds provide many different types of data for your applications. 
- [Price Feeds](https://docs.chain.link/data-feeds#price-feeds) - [Proof of Reserve Feeds](https://docs.chain.link/data-feeds#proof-of-reserve-feeds) - [Rate and Volatility Feeds](https://docs.chain.link/data-feeds#rate-and-volatility-feeds) ## Using Data Feeds on the Avalanche Primary Network To consume price data, your smart contract should reference `AggregatorV3Interface`, which defines the external functions implemented by Data Feeds. ```solidity // SPDX-License-Identifier: MIT pragma solidity ^0.8.7; import {AggregatorV3Interface} from "@chainlink/contracts/src/v0.8/shared/interfaces/AggregatorV3Interface.sol"; /** * THIS IS AN EXAMPLE CONTRACT THAT USES HARDCODED * VALUES FOR CLARITY. * THIS IS AN EXAMPLE CONTRACT THAT USES UN-AUDITED CODE. * DO NOT USE THIS CODE IN PRODUCTION. */ contract DataConsumerV3 { AggregatorV3Interface internal dataFeed; /** * Network: Avalanche Primary Network * Aggregator: AVAX/USD * Address: 0x0A77230d17318075983913bC2145DB16C7366156 */ constructor() { dataFeed = AggregatorV3Interface( 0x0A77230d17318075983913bC2145DB16C7366156 ); } /** * Returns the latest answer. 
*/ function getChainlinkDataFeedLatestAnswer() public view returns (int) { // prettier-ignore ( /* uint80 roundID */, int answer, /*uint startedAt*/, /*uint timeStamp*/, /*uint80 answeredInRound*/ ) = dataFeed.latestRoundData(); return answer; } } ``` ### Primary Network Price Feed Addresses | **Status** | **Pair** | **Contract Address** | **Asset Name** | **Asset Type** | **Market Hours** | |------------|----------------------------|-------------------------------------------------|-----------------------------|----------------|------------------| | 🟢 | AAVE / USD | 0x3CA13391E9fb38a75330fb28f8cc2eB3D9ceceED | Aave | Crypto | Crypto | | 🔵 | AAVE Network Emergency Count (Avalanche) | 0x41185495Bc8297a65DC46f94001DC7233775EbEe | N/A | N/A | N/A | | 🟢 | ADA / USD | 0x69C2703b8F1A85a2EF6aBDd085699a9F909BE053 | Cardano | Crypto | Crypto | | 🟡 | ALPHA / USD | 0x7B0ca9A6D03FE0467A31Ca850f5bcA51e027B3aF | Alpha Finance | Crypto | Crypto | | 🟢 | AVAX / USD | 0x0A77230d17318075983913bC2145DB16C7366156 | Avalanche | Crypto | Crypto | | 🟡 | AXS / USD | 0x155835C5755205597d62703a5A0b37e57a26Ee5C | Axie Infinity | Crypto | Crypto | | 🟡 | BAT / USD | 0xe89B3CE86D25599D1e615C0f6a353B4572FF868D | Basic Attention Token | Crypto | Crypto | | 🟠 | BEAM / USD | 0x3427232b88Ce4e7d62A03289247eE0cA5324f6ba | Beam | Crypto | Crypto | ### Primary Network Proof of Reserve Addresses | **Status** | **Asset** | **Contract Address** | **Reserve Type** | **Data Source** | **Reporting/Auditor** | |------------|--------------------------------|---------------------------------------------------|-----------------|----------------|-------------------------| | 🔵 | AAVE.e PoR | 0x14C4c668E34c09E1FBA823aD5DB47F60aeBDD4F7 | Cross-chain | Cross-chain | Wallet address | | 🔵 | BTC.b PoR | 0x99311B4bf6D8E3D3B4b9fbdD09a1B0F4Ad8e06E9 | Cross-chain | Cross-chain | Wallet address | | 🔵 | DAI.e PoR | 0x976D7fAc81A49FA71EF20694a3C56B9eFB93c30B | Cross-chain | Cross-chain | Wallet address | | 🔵 | Ion Digital Total 
Reserve | 0x121C188f76831f504bD29C753074B37a4177cEc3 | Off-chain | Instruxi | Third-party auditor | | 🔵 | LINK.e PoR | 0x943cEF1B112Ca9FD7EDaDC9A46477d3812a382b6 | Cross-chain | Cross-chain | Wallet address | | 🔵 | USDC.e PoR | 0x63769951E4cfDbDC653dD9BBde63D2Ce0746e5F2 | Cross-chain | Cross-chain | Wallet address | | 🔵 | USDT.e PoR | 0x94D8c2548018C27F1aa078A23C4158206bE1CC72 | Cross-chain | Cross-chain | Wallet address | | 🔵 | WBTC.e PoR | 0xebEfEAA58636DF9B20a4fAd78Fad8759e6A20e87 | Cross-chain | Cross-chain | Wallet address | ### Primary Network Rate and Volatility Feed Addresses | **Asset** | **Contract Address** | |-----------------------------|---------------------------------------------------| | BTC-USD 30-Day Realized Vol | 0x93b9B82158846cefa8b4040f22A3Bff05c365226 | | ETH-USD 30-Day Realized Vol | 0x89983A2FDd082FA40d8062BCE3986Fc601D2d29B | # Chainlink (/integrations/chainlink-oracles) --- title: Chainlink category: Oracles available: ["C-Chain"] description: "Chainlink is a decentralized oracle network that enables smart contracts to securely interact with real-world data." logo: /images/chainlink.png developer: Chainlink Labs website: https://chain.link/ documentation: https://docs.chain.link/ --- ## Overview Chainlink is a decentralized oracle network that provides reliable, tamper-proof inputs and outputs for smart contracts on any blockchain. By connecting smart contracts with real-world data, events, and payments, Chainlink enables advanced dApps across various industries. Its decentralized oracle network lets smart contracts securely interact with off-chain data. ## Features - **Decentralized Oracles**: Chainlink uses a network of decentralized oracles to fetch and deliver reliable, tamper-resistant data to smart contracts. - **Cross-Chain Interoperability**: Chainlink enables secure data exchange between different blockchains for cross-chain smart contract functionality. 
- **Data Integrity**: Multiple oracles and data sources prevent single points of failure. - **Verifiable Randomness**: Chainlink VRF (Verifiable Random Function) provides provably fair and tamper-proof randomness to smart contracts, essential for gaming, lotteries, and NFT minting. - **Price Feeds**: Chainlink's widely used price feeds provide decentralized financial applications with accurate and reliable price data, supporting DeFi protocols like lending, borrowing, and stablecoins. ## Getting Started 1. **Visit Chainlink**: Explore the [Chainlink website](https://chain.link/) to understand its offerings and network capabilities. 2. **Set Up an Oracle**: Refer to the [documentation](https://docs.chain.link/docs/running-a-chainlink-node) to set up your own Chainlink node and participate in the decentralized oracle network. 3. **Integrate Price Feeds**: Use Chainlink's [Price Feeds](https://docs.chain.link/docs/get-the-latest-price) to integrate price data into your DeFi application. 4. **Use Chainlink VRF**: Integrate [Chainlink VRF](https://docs.chain.link/docs/chainlink-vrf) for provably fair randomness in your smart contracts. 5. **Explore Cross-Chain Solutions**: Utilize Chainlink's [Cross-Chain Interoperability Protocol (CCIP)](https://docs.chain.link/ccip) for secure cross-chain data transfer and smart contract execution. ## Documentation For more details, visit the [Chainlink Documentation](https://docs.chain.link/). ## Use Cases - **DeFi Protocols**: Integrate price feeds to support lending, borrowing, and other financial activities. - **Gaming and NFTs**: Use Chainlink VRF for fair randomness in gaming outcomes, lotteries, and NFT minting. - **Insurance dApps**: Fetch real-world data (weather, market prices) to trigger automated insurance payouts. - **Cross-Chain Applications**: Enable cross-chain data transfer and smart contract execution using Chainlink's CCIP. 
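The price-feed integration in step 3 comes down to calling `latestRoundData()` on an `AggregatorV3Interface` proxy. Below is a minimal sketch using the AVAX / USD feed address listed for the Primary Network; the contract name is illustrative, and the import path assumes the current `@chainlink/contracts` package layout, so verify it against your installed version:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

import {AggregatorV3Interface} from "@chainlink/contracts/src/v0.8/shared/interfaces/AggregatorV3Interface.sol";

/// Minimal consumer that reads the AVAX / USD data feed on the
/// Avalanche C-Chain (address from the Primary Network feed table).
contract PriceFeedConsumer {
    AggregatorV3Interface internal dataFeed =
        AggregatorV3Interface(0x0A77230d17318075983913bC2145DB16C7366156);

    /// Returns the latest answer together with the feed's decimals.
    /// USD pairs typically use 8 decimals, but read decimals() rather
    /// than hardcoding the assumption.
    function getLatestAnswer() external view returns (int256 answer, uint8 feedDecimals) {
        (, answer, , , ) = dataFeed.latestRoundData();
        feedDecimals = dataFeed.decimals();
    }
}
```

To interpret the result off-chain, divide `answer` by `10 ** feedDecimals`.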
# Chainlink VRF (/integrations/chainlink-vrf) --- title: Chainlink VRF category: VRF available: ["C-Chain", "All Avalanche L1s"] description: Chainlink VRF (Verifiable Random Function) is a provably fair and verifiable random number generator (RNG) that enables smart contracts to access random values without compromising security or usability. logo: /images/chainlink.png developer: Chainlink Labs website: https://chain.link/vrf documentation: https://docs.chain.link/vrf --- ## Overview Chainlink VRF (Verifiable Random Function) is a provably fair and verifiable random number generator (RNG) for smart contracts. For each request, Chainlink VRF generates one or more random values and a cryptographic proof of how those values were determined. The proof is published and verified on-chain before any consuming application can use it. This means results cannot be tampered with by any single entity, including oracle operators, miners, users, or smart contract developers. Use Chainlink VRF to build reliable smart contracts for any applications that rely on unpredictable outcomes: - Building blockchain games and NFTs. - Random assignment of duties and resources. For example, randomly assigning judges to cases. - Choosing a representative sample for consensus mechanisms. ### Two methods to request randomness - **Subscription:** Create a subscription account and fund its balance with either native tokens or LINK. You can then connect multiple consuming contracts to the subscription account. When the consuming contracts request randomness, the transaction costs are calculated after the randomness requests are fulfilled and the subscription balance is deducted accordingly. This method allows you to fund requests for multiple consumer contracts from a single subscription. - **Direct funding:** Consuming contracts directly pay with either native tokens or LINK when they request random values. 
You must directly fund your consumer contracts and ensure that there are enough funds to pay for randomness requests. ### Use Chainlink VRF on the Avalanche Primary Network To use Chainlink VRF on the Avalanche Primary Network using **subscription**, utilize the [following information from Chainlink](https://docs.chain.link/vrf/v2-5/supported-networks#avalanche-mainnet): | **Item** | **Value** | |----------------------------------------|--------------------------------------------------------------------------------------------| | **LINK Token** | 0x5947BB275c521040051D82396192181b413227A3 | | **VRF Coordinator** | 0xE40895D055bccd2053dD0638C9695E326152b1A4 | | **200 gwei Key Hash** | 0xea7f56be19583eeb8255aa79f16d8bd8a64cedf68e42fefee1c9ac5372b1a102 | | **500 gwei Key Hash** | 0x84213dcadf1f89e4097eb654e3f284d7d5d5bda2bd4748d8b7fada5b3a6eaa0d | | **1000 gwei Key Hash** | 0xe227ebd10a873dde8e58841197a07b410038e405f1180bd117be6f6557fa491c | | **Premium percentage (paying with AVAX)**| 60 | | **Premium percentage (paying with LINK)**| 50 | | **Max Gas Limit** | 2,500,000 | | **Minimum Confirmations** | 0 | | **Maximum Confirmations** | 200 | | **Maximum Random Values** | 500 | ### Use Chainlink VRF on an Avalanche L1 This repository provides **example** contracts for how an Avalanche L1 could leverage Chainlink VRF functionality (available on the C-Chain) using Teleporter. This allows newly launched L1s to immediately utilize VRF without any trusted intermediaries or third-party integration requirements. [Avalanche L1 VRF Example Contracts](https://github.com/ava-labs/subnet-vrf-contracts) ### Best Practices These are example best practices for using Chainlink VRF. To explore more applications of VRF, refer to [Chainlink's blog](https://blog.chain.link/). **Getting a random number within a range** If you need to generate a random number within a given range, use modulo to define the limits of your range. 
Below you can see how to get a random number in a range from 1 to 50. ```solidity function fulfillRandomWords( uint256, /* requestId */ uint256[] memory randomWords ) internal override { // Assuming only one random word was requested. s_randomRange = (randomWords[0] % 50) + 1; } ``` **Getting multiple random values** If you want to get multiple random values from a single VRF request, you can request this directly with the `numWords` argument: - If you are using the VRF v2.5 subscription method, see the full example code for an example where one request returns multiple random values. **Processing simultaneous VRF requests** If you want to have multiple VRF requests processing simultaneously, create a mapping between `requestId` and the response. You might also create a mapping between the `requestId` and the address of the requester to track which address made each request. ```solidity mapping(uint256 => uint256[]) public s_requestIdToRandomWords; mapping(uint256 => address) public s_requestIdToAddress; uint256 public s_requestId; function requestRandomWords() external onlyOwner returns (uint256) { uint256 requestId = s_vrfCoordinator.requestRandomWords( VRFV2PlusClient.RandomWordsRequest({ keyHash: keyHash, subId: s_vrfSubscriptionId, requestConfirmations: requestConfirmations, callbackGasLimit: callbackGasLimit, numWords: numWords, extraArgs: VRFV2PlusClient._argsToBytes(VRFV2PlusClient.ExtraArgsV1({nativePayment: true})) // new parameter }) ); s_requestIdToAddress[requestId] = msg.sender; // Store the latest requestId for this example. s_requestId = requestId; // Return the requestId to the requester. return requestId; } function fulfillRandomWords( uint256 requestId, uint256[] memory randomWords ) internal override { // You can return the value to the requester, // but this example simply stores it. s_requestIdToRandomWords[requestId] = randomWords; } ``` You could also map the `requestId` to an index to keep track of the order in which a request was made. 
**Processing VRF responses through different execution paths** If you want to process VRF responses depending on predetermined conditions, you can create an `enum`. When requesting randomness, map each `requestId` to an enum. This way, you can handle different execution paths in `fulfillRandomWords`. See the following example: ```solidity // SPDX-License-Identifier: MIT // An example of a consumer contract that relies on a subscription for funding. // It shows how to set up multiple execution paths for handling a response. pragma solidity 0.8.19; import {LinkTokenInterface} from "@chainlink/contracts/src/v0.8/shared/interfaces/LinkTokenInterface.sol"; import {IVRFCoordinatorV2Plus} from "@chainlink/contracts/src/v0.8/vrf/dev/interfaces/IVRFCoordinatorV2Plus.sol"; import {VRFConsumerBaseV2Plus} from "@chainlink/contracts/src/v0.8/vrf/dev/VRFConsumerBaseV2Plus.sol"; import {VRFV2PlusClient} from "@chainlink/contracts/src/v0.8/vrf/dev/libraries/VRFV2PlusClient.sol"; /** * THIS IS AN EXAMPLE CONTRACT THAT USES HARDCODED VALUES FOR CLARITY. * THIS IS AN EXAMPLE CONTRACT THAT USES UN-AUDITED CODE. * DO NOT USE THIS CODE IN PRODUCTION. */ contract VRFv2MultiplePaths is VRFConsumerBaseV2Plus { // Your subscription ID. uint256 s_subscriptionId; // Avalanche Primary Network coordinator. address vrfCoordinatorV2Plus = 0xE40895D055bccd2053dD0638C9695E326152b1A4; // The gas lane to use, which specifies the maximum gas price to bump to. // For a list of available gas lanes on each network, // see https://docs.chain.link/vrf/v2-5/supported-networks bytes32 keyHash = 0x787d74caea10b2b357790d5b5247c2f63d1d91572a9846f780606e4d953677ae; uint32 callbackGasLimit = 100000; // The default is 3, but you can set this higher. uint16 requestConfirmations = 3; // For this example, retrieve 1 random value in one request. // Cannot exceed VRFCoordinatorV2_5.MAX_NUM_WORDS.
uint32 numWords = 1; enum Variable { A, B, C } uint256 public variableA; uint256 public variableB; uint256 public variableC; mapping(uint256 => Variable) public requests; // events event FulfilledA(uint256 requestId, uint256 value); event FulfilledB(uint256 requestId, uint256 value); event FulfilledC(uint256 requestId, uint256 value); constructor(uint256 subscriptionId) VRFConsumerBaseV2Plus(vrfCoordinatorV2Plus) { s_vrfCoordinator = IVRFCoordinatorV2Plus(vrfCoordinatorV2Plus); s_subscriptionId = subscriptionId; } function updateVariable(uint256 input) public { uint256 requestId = s_vrfCoordinator.requestRandomWords(VRFV2PlusClient.RandomWordsRequest({ keyHash: keyHash, subId: s_subscriptionId, requestConfirmations: requestConfirmations, callbackGasLimit: callbackGasLimit, numWords: numWords, extraArgs: VRFV2PlusClient._argsToBytes(VRFV2PlusClient.ExtraArgsV1({nativePayment: true})) }) ); if (input % 2 == 0) { requests[requestId] = Variable.A; } else if (input % 3 == 0) { requests[requestId] = Variable.B; } else { requests[requestId] = Variable.C; } } function fulfillRandomWords( uint256 requestId, uint256[] memory randomWords ) internal override { Variable variable = requests[requestId]; if (variable == Variable.A) { fulfillA(requestId, randomWords[0]); } else if (variable == Variable.B) { fulfillB(requestId, randomWords[0]); } else if (variable == Variable.C) { fulfillC(requestId, randomWords[0]); } } function fulfillA(uint256 requestId, uint256 randomWord) private { // execution path A variableA = randomWord; emit FulfilledA(requestId, randomWord); } function fulfillB(uint256 requestId, uint256 randomWord) private { // execution path B variableB = randomWord; emit FulfilledB(requestId, randomWord); } function fulfillC(uint256 requestId, uint256 randomWord) private { // execution path C variableC = randomWord; emit FulfilledC(requestId, randomWord); } } ``` # Chainstack (/integrations/chainstack) --- title: Chainstack category: RPC Endpoints available: 
["C-Chain"] description: Chainstack is a multi-cloud, multi-protocol blockchain Platform as a Service (PaaS) that enables developers to deploy, manage, and scale nodes and networks across multiple cloud providers. logo: /images/chainstack.webp developer: Chainstack website: https://chainstack.com/ documentation: https://docs.chainstack.com/ --- ## Overview Chainstack is a blockchain Platform as a Service (PaaS) for deploying, managing, and scaling blockchain nodes and networks. It supports multiple cloud providers and protocols, so you can pick the setup that fits your needs. ## Features - **Multi-Cloud Support**: Deploy nodes on AWS, Google Cloud, Microsoft Azure, and others. - **Multi-Protocol Compatibility**: Supports Ethereum, Hyperledger Fabric, Corda, and more. - **Scalability**: Scale your infrastructure for development, testing, or production. - **Simplified Node Management**: Dashboard for managing nodes, monitoring performance, and running updates. - **High Availability**: Automated failover keeps your nodes online. ## Getting Started 1. **Create an Account**: Sign up on the [Chainstack website](https://chainstack.com/). 2. **Deploy a Node**: Pick your protocol and cloud provider from the dashboard. 3. **Manage Your Infrastructure**: Monitor and scale your nodes as needed. Use multi-cloud deployments for redundancy. 4. **Integrate with dApps**: Connect deployed nodes to your applications. 5. **Explore Advanced Features**: Use API access, node telemetry, and analytics to tune your setup. ## Documentation For setup guides, API references, and integration instructions, visit the [Chainstack Documentation](https://docs.chainstack.com/). ## Use Cases - **Enterprise Blockchain Networks**: Deploy and manage private or consortium networks. - **Decentralized Applications (dApps)**: Run reliable nodes for your dApps on public blockchains. - **Blockchain Development and Testing**: Develop and test in a controlled environment before going to production. 
- **Multi-Cloud Deployments**: Deploy across multiple cloud providers for high availability. # Chaos Labs Oracles (/integrations/chaos-labs) --- title: Chaos Labs Oracles category: Oracles available: ["C-Chain"] description: "Decentralized oracle protocol built for real-time risk, providing unified price feeds, risk feeds, and proof of reserves with ultra-low latency." logo: /images/chaos.png developer: Chaos Labs website: https://chaoslabs.xyz/oracles documentation: https://docs.chaoslabs.xyz/ --- ## Overview Chaos Labs Oracles is a decentralized oracle protocol for real-time risk management, delivering unified data feeds that include prices, risk parameters, and proof of reserves. The platform provides high-precision, low-latency price data with built-in risk filtering and real-time anomaly detection, ingesting fresh data 5x per second. It also supports real-time risk tuning through automated risk parameter adjustments, governance-safe automation within defined bounds, and verifiable proof of reserves with tamper-resistant attestations published on-chain. # Chronicle (/integrations/chronicle-data-feeds) --- title: Chronicle category: Oracles available: ["C-Chain"] description: Chronicle Oracles enable you to access verifiable data onchain. logo: /images/chronicle.png developer: Chronicle Labs website: https://chroniclelabs.org/ documentation: https://docs.chroniclelabs.org/ --- ## Overview [Chronicle Protocol](https://chroniclelabs.org/) is an oracle solution that provides scalable, cost-efficient, decentralized, and verifiable on-chain data feeds. For the data feed addresses, see the [Chronicle Dashboard](https://chroniclelabs.org/dashboard) under the **Oracles** section. Select either **Mainnets** or **Testnets**, then **Avalanche C-Chain Oracles**.
### Data Feed Types Chronicle provides two categories of Oracles: [DeFi Oracles](https://docs.chroniclelabs.org/Products/DeFiOracle/) and [Verified Asset Oracles (VAO)](https://docs.chroniclelabs.org/Products/VerifiedAssetOracle/). DeFi Oracles include: - Cryptocurrency Oracles - Fiat Currency Oracles - Yield Rate Oracles [The Verified Asset Oracle (VAO)](https://docs.chroniclelabs.org/Products/VerifiedAssetOracle/) — formerly known as the RWA Oracle — securely and transparently verifies the integrity and quality of any offchain asset, transports the resulting data onchain, and makes it directly available to smart contracts and onchain products. ### Check the Chronicle Oracles on the Chronicle Dashboard - [Avalanche C-Chain Oracles](https://chroniclelabs.org/dashboard/oracles#blockchain=AVAX) - [Avalanche Fuji Testnet Oracles](https://chroniclelabs.org/dashboard/oracles#testnet=true&blockchain=FUJI) ## Using Chronicle DeFi Oracles on Avalanche Fuji Testnet The following example showcases how to integrate the AVAX/USD oracle on Avalanche Fuji Testnet. Chronicle contracts are read-protected by a whitelist, meaning you won't be able to read them onchain unless your address is added to the whitelist. On the Testnet networks, users can add themselves to the whitelist through the SelfKisser contract, a process playfully referred to as "kissing" themselves. **To get access to production Oracles on the Mainnet, please open a support ticket in [Discord](https://discord.com/invite/CjgvJ9EspJ) in the 🆘 | support channel.** For oracle addresses, please check out the [Dashboard](https://chroniclelabs.org/dashboard/oracles). ```solidity // SPDX-License-Identifier: MIT pragma solidity ^0.8.16; /** * @title OracleReader * @notice A simple contract to read from Chronicle oracles * @dev To see the full repository, visit https://github.com/chronicleprotocol/OracleReader-Example. * @dev Addresses in this contract are hardcoded for Avalanche Fuji Testnet.
* For other supported networks, check https://chroniclelabs.org/dashboard/oracles. */ contract OracleReader { /** * @notice The Chronicle oracle to read from. * Chronicle_AVAX_USD - 0x8b3328b27436263e0BfE7597a0D97B1BEE9cC576 * Network: Avalanche Fuji Testnet */ IChronicle public chronicle = IChronicle(address(0x8b3328b27436263e0BfE7597a0D97B1BEE9cC576)); /** * @notice The SelfKisser granting access to Chronicle oracles. * SelfKisser_1:0x371A53bB4203Ad5D7e60e220BaC1876FF3Ddda5B * Network: Avalanche Fuji Testnet */ ISelfKisser public selfKisser = ISelfKisser(address(0x371A53bB4203Ad5D7e60e220BaC1876FF3Ddda5B)); constructor() { // Adds address(this) to the Chronicle oracle's whitelist. // This allows the contract to read from the Chronicle oracle. selfKisser.selfKiss(address(chronicle)); } /** * @notice Function to read the latest data from the Chronicle oracle. * @return val The current value returned by the oracle. * @return age The timestamp of the last update from the oracle. */ function read() external view returns (uint256 val, uint256 age) { (val, age) = chronicle.readWithAge(); } } // Copied from [chronicle-std](https://github.com/chronicleprotocol/chronicle-std/blob/main/src/IChronicle.sol). interface IChronicle { /** * @notice Returns the oracle's current value. * @dev Reverts if no value set. * @return value The oracle's current value. */ function read() external view returns (uint256 value); /** * @notice Returns the oracle's current value and its age. * @dev Reverts if no value set. * @return value The oracle's current value with 18 decimal places. * @return age The value's age as a Unix timestamp. */ function readWithAge() external view returns (uint256 value, uint256 age); } // Copied from [self-kisser](https://github.com/chronicleprotocol/self-kisser/blob/main/src/ISelfKisser.sol). interface ISelfKisser { /// @notice Kisses caller on oracle `oracle`.
function selfKiss(address oracle) external; } ``` ## Documentation For more details, see the [Chronicle Documentation](https://docs.chroniclelabs.org/Developers/tutorials/Remix). # Circle CCTP (/integrations/circle-cctp) --- title: Circle CCTP category: Crosschain Solutions available: ["C-Chain", "All Avalanche L1s"] description: "Circle CCTP (Cross-Chain Transfer Protocol) enables native USDC transfers between Avalanche and other supported blockchains." logo: /images/circle.png developer: Circle website: https://www.circle.com/cross-chain-transfer-protocol documentation: https://developers.circle.com/stablecoins/cctp-getting-started --- ## Overview Circle's Cross-Chain Transfer Protocol (CCTP) is a permissionless on-chain utility that enables native USDC transfers between supported blockchain networks. Unlike bridged or wrapped tokens, CCTP burns USDC on the source chain and mints native USDC on the destination chain, ensuring users always receive canonical, Circle-issued USDC. ## Features - **Native USDC**: Transfer native USDC, not wrapped or bridged tokens - **Permissionless**: Anyone can use CCTP without approval - **Capital Efficient**: No liquidity pools required - **Burn and Mint**: Clean burn-and-mint mechanism - **Multi-Chain**: Support for Avalanche and other major chains - **Composable**: Can be integrated into any application ## Getting Started To use CCTP for cross-chain USDC transfers: 1. **Review Documentation**: Study the [CCTP documentation](https://developers.circle.com/stablecoins/cctp-getting-started) 2. **Understand Flow**: Learn the burn-and-mint message flow 3. **Integrate Contracts**: Integrate CCTP contracts into your application 4. **Test on Testnet**: Test integration on supported testnets 5. **Deploy**: Launch cross-chain USDC functionality ## Documentation For full guides, visit [Circle CCTP Documentation](https://developers.circle.com/stablecoins/cctp-getting-started). 
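The source-chain half of the burn-and-mint flow can be sketched as follows. The `depositForBurn` signature matches CCTP v1's `TokenMessenger` as documented by Circle; the contract name is illustrative, and the USDC and TokenMessenger addresses are passed in as constructor parameters rather than hardcoded, since they differ per chain (look them up in Circle's docs):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

interface IERC20 {
    function approve(address spender, uint256 amount) external returns (bool);
}

// CCTP v1 TokenMessenger entry point, per Circle's contract reference.
interface ITokenMessenger {
    function depositForBurn(
        uint256 amount,
        uint32 destinationDomain,
        bytes32 mintRecipient,
        address burnToken
    ) external returns (uint64 nonce);
}

/// Sketch of the source-chain side of a CCTP transfer: approve the
/// TokenMessenger to spend USDC, then burn it for minting elsewhere.
contract CctpBurnExample {
    IERC20 public immutable usdc;
    ITokenMessenger public immutable tokenMessenger;

    constructor(address usdc_, address tokenMessenger_) {
        usdc = IERC20(usdc_);
        tokenMessenger = ITokenMessenger(tokenMessenger_);
    }

    /// Burns `amount` of USDC held by this contract so that native USDC
    /// is minted for `recipient` on the chain identified by
    /// `destinationDomain` (Circle's domain registry lists the IDs).
    function burnForTransfer(
        uint256 amount,
        uint32 destinationDomain,
        address recipient
    ) external returns (uint64 nonce) {
        usdc.approve(address(tokenMessenger), amount);
        // mintRecipient is a bytes32-encoded EVM address.
        nonce = tokenMessenger.depositForBurn(
            amount,
            destinationDomain,
            bytes32(uint256(uint160(recipient))),
            address(usdc)
        );
    }
}
```

After the burn, an attestation is fetched from Circle's attestation service and submitted on the destination chain to complete the mint; that half of the flow is covered in the CCTP documentation linked above.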
## Use Cases - **Cross-Chain Payments**: Send USDC between Avalanche and other chains - **DeFi Bridging**: Move USDC between DeFi ecosystems - **Application Integration**: Build cross-chain USDC flows into dApps - **Treasury Management**: Manage USDC across multiple chains # Circle (/integrations/circlepay) --- title: Circle category: Assets available: ["C-Chain"] description: Circle provides regulated digital currencies (USDC and EURC) and enterprise-grade payment infrastructure for building on-chain applications on Avalanche. logo: /images/circle.png developer: Circle website: https://www.circle.com/ documentation: https://developers.circle.com/ --- ## Overview Circle is a global financial technology company that provides payment infrastructure for the digital economy. Circle issues USDC and EURC, two of the most trusted and widely-used stablecoins, which are fully regulated digital currencies backed 1:1 by cash and short-term U.S. Treasury bonds. USDC is natively available on Avalanche C-Chain, enabling fast, low-cost transactions for payments, DeFi, and enterprise applications. Circle also offers developer services including Cross-Chain Transfer Protocol (CCTP), smart contract wallets, payment gateways, and gasless transaction infrastructure for building on-chain applications on Avalanche. ## Features - **Regulated Stablecoins**: USDC and EURC are issued by Circle and regulated by U.S. financial authorities, providing trust and transparency. - **Native Avalanche Support**: USDC is natively available on Avalanche C-Chain for fast, low-cost transactions. - **Cross-Chain Transfer Protocol (CCTP)**: Transfer USDC natively between Avalanche and other supported chains without wrapped tokens or liquidity pools. - **Smart Contract Wallets**: Programmable wallets with built-in security and recovery features, compatible with Avalanche. - **Gasless Transactions (Paymaster)**: Sponsor gas fees for users on Avalanche so they don't need AVAX to get started. 
- **Payment Gateway**: Accept USDC payments on Avalanche with Circle's payment APIs and convert to fiat if needed. - **Minting and Redemption Services**: Institutional access to mint and redeem USDC directly with Circle. - **Developer SDKs**: Official SDKs for integrating Circle services into your Avalanche applications. - **Testnet Support**: Free testnet USDC from Circle's faucet for development and testing on Avalanche Fuji testnet. - **Transparent Reserves**: Monthly attestations from independent accounting firms verify 1:1 backing. - **API-First Design**: RESTful APIs with thorough documentation. ## Getting Started To integrate Circle USDC into your Avalanche application: 1. **Get Testnet Tokens**: Visit the [Circle Faucet](https://faucet.circle.com/) to get free testnet USDC for development on Avalanche Fuji. 2. **Understand USDC on Avalanche**: USDC uses a 6-decimal precision token standard on Avalanche C-Chain. You can interact with it using standard Web3 libraries like viem, ethers.js, or web3.js. 3. **Implement USDC Transfers**: Use the USDC smart contract on Avalanche to send and receive payments in your application. 4. **Integrate Cross-Chain Transfers**: Use Circle's Cross-Chain Transfer Protocol (CCTP) to move USDC between Avalanche and other supported chains. 5. **Set Up Webhooks**: Configure webhooks to receive real-time notifications about USDC transactions and user activities. ## USDC on Avalanche **Avalanche C-Chain Mainnet**: `0xB97EF9Ef8734C71904D8002F8b6Bc66Dd9c48a6E` For testnet addresses, see the [USDC Contract Addresses documentation](https://developers.circle.com/stablecoins/docs/usdc-on-test-networks). 
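Because USDC uses 6 decimals rather than the 18 common to most ERC-20s, on-chain amounts must be scaled by 10^6: 1 USDC is 1,000,000 base units. A minimal sketch using the mainnet address listed above; the contract and function names are illustrative:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

interface IERC20 {
    function transfer(address to, uint256 amount) external returns (bool);
    function decimals() external view returns (uint8);
}

/// Demonstrates 6-decimal USDC amounts on the Avalanche C-Chain.
contract UsdcPayment {
    // Native USDC on Avalanche C-Chain mainnet (address listed above).
    IERC20 public usdc = IERC20(0xB97EF9Ef8734C71904D8002F8b6Bc66Dd9c48a6E);

    /// Sends `wholeUsdc` dollars of USDC held by this contract.
    /// usdc.decimals() == 6, so 1 USDC = 1_000_000 base units.
    function pay(address to, uint256 wholeUsdc) external {
        uint256 amount = wholeUsdc * 10 ** usdc.decimals(); // wholeUsdc * 1e6
        require(usdc.transfer(to, amount), "USDC transfer failed");
    }
}
```

Client libraries handle the same scaling off-chain (e.g. viem's `parseUnits(value, 6)`), so always pass 6, not 18, when formatting USDC amounts.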
## Documentation For guides, API references, and SDK documentation, visit: - [Circle Developer Portal](https://developers.circle.com/) - [USDC on Avalanche Guide](https://developers.circle.com/stablecoins/docs/usdc-on-test-networks) - [Cross-Chain Transfer Protocol (CCTP) Documentation](https://developers.circle.com/stablecoins/docs/cctp-getting-started) - [Smart Contract Wallet Documentation](https://developers.circle.com/w3s/docs/programmable-wallets-quickstart) - [API Reference](https://developers.circle.com/api-reference) ## Use Cases on Avalanche Common ways developers use Circle on Avalanche: **Payment Applications**: Build payment apps on Avalanche that accept USDC for instant, low-cost global transfers. **DeFi Platforms**: Use USDC as a stable trading pair, collateral, or yield-generating asset in Avalanche DeFi protocols. **NFT Marketplaces**: Accept USDC payments for NFTs on Avalanche, providing price stability and easier accounting. **Cross-Chain Applications**: Use CCTP to build applications that move USDC between Avalanche and other blockchains. **Web3 Gaming**: Use USDC for in-game purchases, rewards, and player-to-player trading on Avalanche with real-world value. **Remittance Services**: Enable fast, low-cost international money transfers using USDC on Avalanche. **Merchant Solutions**: Accept USDC payments from customers on Avalanche with instant settlement and optional fiat conversion. **Enterprise Treasury**: Use USDC on Avalanche for corporate treasury management, payroll, and B2B payments. 
## Pricing Circle offers transparent pricing for its services: **USDC Transfers**: - On-chain transfers pay only network gas fees - No Circle fees for basic USDC transfers between wallets **Institutional Services**: - Circle Mint: 0.03% fee for minting and redeeming USDC (institutional accounts) - Minimum fees and volume requirements apply **Developer Services**: - Smart Contract Wallets: Pricing based on wallet creation and transaction volume - Paymaster (Gas Sponsorship): Pay-as-you-go based on gas costs sponsored - Circle APIs: Free tier available, paid plans for higher volume **CCTP**: - Free to use protocol - Only pay standard gas fees on source and destination chains For enterprise pricing and custom arrangements, contact [Circle's sales team](https://www.circle.com/en/contact). ## Circle Developer Resources - **Testnet Faucet**: Get free testnet USDC at [faucet.circle.com](https://faucet.circle.com/) - **Developer Discord**: Join the [Circle Discord](https://discord.gg/circle) to connect with other developers - **Developer Grants**: Apply for grants to build innovative applications with Circle's technology - **Sample Applications**: Explore open-source demos and reference implementations - **Circle Research**: Access whitepapers and technical research on stablecoin technology # Coinbase Developer Platform (CDP) (/integrations/coinbase-cloud) --- title: Coinbase Developer Platform (CDP) category: Validator Infrastructure available: ["C-Chain", "All Avalanche L1s"] description: "Coinbase Developer Platform (CDP), formerly Coinbase Cloud, provides enterprise blockchain infrastructure including validator services, staking, and developer APIs." 
logo: /images/coinbase.png developer: Coinbase website: https://www.coinbase.com/developer-platform documentation: https://docs.cdp.coinbase.com/ --- ## Overview Coinbase Developer Platform (CDP), formerly known as Coinbase Cloud and Bison Trails, is Coinbase's enterprise blockchain infrastructure platform providing institutional-grade validator services, staking solutions, and developer APIs. Built on Coinbase's security expertise and scale, CDP enables institutions to participate in Avalanche and other blockchain networks. ## Features - **Coinbase Security**: Enterprise security backed by Coinbase's industry-leading practices - **Validator Infrastructure**: Professional validator hosting and management - **Staking Services**: Institutional staking with detailed reporting - **Participate API**: Unified API for staking operations across networks - **Query & Transact**: APIs for blockchain data and transaction management - **SOC 2 Certified**: Compliance-ready infrastructure for institutions ## Getting Started To use Coinbase Developer Platform: 1. **Access Platform**: Visit [Coinbase Developer Platform](https://www.coinbase.com/developer-platform) 2. **Create Account**: Sign up for CDP access 3. **Select Services**: Choose validator, staking, or API services 4. **Configure**: Set up your infrastructure through the platform 5. **Operate**: Manage through the CDP dashboard ## Documentation For API references and guides, visit [CDP Docs](https://docs.cdp.coinbase.com/). 
## Use Cases - **Institutional Validation**: Run Avalanche validators with Coinbase-grade security - **Enterprise Staking**: Implement staking programs for enterprises - **Developer Infrastructure**: Build applications using CDP APIs - **Custodian Support**: Infrastructure for custody providers # Coinbase (/integrations/coinbase) --- title: Coinbase category: Custody available: ["C-Chain", "All Avalanche L1s"] description: "Coinbase provides institutional custody and prime brokerage services with regulatory compliance and digital asset infrastructure." logo: /images/coinbase.png developer: Coinbase website: https://www.coinbase.com/wealth documentation: https://www.coinbase.com/wealth --- ## Overview Coinbase offers institutional custody, trading, and prime brokerage through Coinbase Institutional. As a publicly-traded, regulated entity, it provides custody solutions, trading services, and digital asset management tools with strong security and compliance. ## Features - **Regulated Custody**: Fully regulated custodian with trust company status. - **Prime Brokerage**: Institutional trading with access to multiple liquidity venues. - **Cold Storage**: 98% of assets stored offline. - **Insurance Coverage**: Insurance protection for custodied digital assets. - **Staking Services**: Institutional staking with rewards and delegation. - **OTC Trading**: Block trading desk for institutional-sized transactions. - **Multi-Asset Support**: Wide range of cryptocurrencies and digital assets. - **Compliance Infrastructure**: Built-in AML/KYC and regulatory compliance tools. ## Documentation For more information, visit [Coinbase Institutional](https://www.coinbase.com/wealth). ## Use Cases - **Institutional Custody**: Secure storage for funds, family offices, and institutional investors. - **Prime Services**: Trading, lending, and financing for institutional clients. - **Corporate Treasury**: Digital asset management for corporate treasuries. 
- **Staking Infrastructure**: Institutional staking for proof-of-stake networks. - **Fund Administration**: Custody and reporting for investment funds. # CoinGecko (/integrations/coingecko) --- title: CoinGecko category: Token Aggregators available: ["C-Chain", "All Avalanche L1s"] description: "CoinGecko provides cryptocurrency data including prices, trading volume, market cap, and analytics for tokens on Avalanche." logo: /images/coingecko.png developer: CoinGecko website: https://www.coingecko.com/ documentation: https://www.coingecko.com/api/documentation --- ## Overview CoinGecko is a cryptocurrency data aggregator that tracks market data across the Avalanche ecosystem. It covers tokens, exchanges, and DeFi protocols with price data, trading volumes, market capitalization, and other metrics. ## Features - **Market Data**: Real-time price tracking, market cap, and volume data for Avalanche tokens. - **Developer Tools**: API with extensive endpoints for market data integration. - **Token Info**: Token details including team, technology, and community metrics. - **Exchange Data**: Trading pair information across centralized and decentralized exchanges. - **DeFi Insights**: DeFi protocol tracking, TVL, and yield data on Avalanche. - **Portfolio Tracking**: Tools for monitoring cryptocurrency holdings. - **Historical Data**: Historical price and market data for analysis. ## Getting Started To utilize CoinGecko's services: 1. **Visit Platform**: Access [CoinGecko](https://www.coingecko.com/). 2. **Explore Avalanche**: Navigate to Avalanche ecosystem tokens and markets. 3. **Track Assets**: - Search for specific tokens - Monitor market metrics - Create watchlists 4. **API Integration**: For developers, integrate CoinGecko data using their API. ## Documentation For detailed API documentation and integration guides, visit the [CoinGecko API Documentation](https://www.coingecko.com/api/documentation). 
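The API-integration step above can be sketched with nothing but the Python standard library. This example targets CoinGecko's free public `/simple/price` endpoint (no key required, but rate-limited); `avalanche-2` is CoinGecko's coin id for AVAX:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

API_BASE = "https://api.coingecko.com/api/v3"

def simple_price_url(coin_ids, vs_currencies):
    # Build the /simple/price query; multiple ids/currencies are comma-separated.
    query = urlencode({
        "ids": ",".join(coin_ids),
        "vs_currencies": ",".join(vs_currencies),
    })
    return f"{API_BASE}/simple/price?{query}"

def fetch_prices(coin_ids, vs_currencies):
    # Live request to the public endpoint; returns a dict keyed by coin id.
    with urlopen(simple_price_url(coin_ids, vs_currencies)) as resp:
        return json.load(resp)
```

Calling `fetch_prices(["avalanche-2"], ["usd"])` performs the live request and returns a dict keyed by coin id; since the free tier is rate-limited, cache results in production or use a paid API key for higher limits.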
## Use Cases - **Market Analysis**: Track token prices and market trends on Avalanche. - **Development**: Integrate price feeds and market data into dApps. - **Portfolio Tracking**: Monitor investment performance across multiple tokens. - **Research**: Access detailed token information and historical data. - **DeFi Integration**: Build applications with market data support. # CoinMarketCap (/integrations/coinmarketcap) --- title: CoinMarketCap category: Token Aggregators available: ["C-Chain", "All Avalanche L1s"] description: "CoinMarketCap provides market data, analytics, and insights for Avalanche ecosystem tokens." logo: /images/coinmarketcap.jpeg developer: CoinMarketCap website: https://coinmarketcap.com/ documentation: https://coinmarketcap.com/api/documentation/v1/ --- ## Overview CoinMarketCap is one of the most referenced price-tracking websites for crypto assets. It provides real-time market data, analytics, and tracking tools for the Avalanche ecosystem, covering tokens and projects across the network. ## Features - **Market Data**: Real-time price tracking, market cap rankings, and volume analytics. - **Exchange Data**: Trading pair information and exchange volumes. - **On-Chain Metrics**: Blockchain data integration for deeper analytics. - **Professional API**: API access for market data integration. - **Token Information**: Token profiles including team info, events, and announcements. - **Market Analysis**: Technical analysis and market trend tools. - **Educational Content**: Resources on crypto markets and technologies. ## Getting Started To begin using CoinMarketCap: 1. **Access Platform**: Visit [CoinMarketCap](https://coinmarketcap.com/). 2. **Explore Avalanche**: Navigate to Avalanche ecosystem tokens and markets. 3. **Track Assets**: - Search for specific tokens - Create watchlists - Monitor market metrics 4. **API Integration**: For developers, integrate market data using their professional API. 
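As a minimal sketch of the API-integration step, assuming you have a CoinMarketCap API key: this builds an authenticated request to the `/v1/cryptocurrency/quotes/latest` endpoint, which is authenticated via the `X-CMC_PRO_API_KEY` header:

```python
import json
from urllib.parse import urlencode
from urllib.request import Request, urlopen

API_BASE = "https://pro-api.coinmarketcap.com"

def quotes_request(api_key, symbol="AVAX", convert="USD"):
    # /v1/cryptocurrency/quotes/latest returns the latest market quote
    # for a symbol; the API key is passed as a request header.
    query = urlencode({"symbol": symbol, "convert": convert})
    return Request(
        f"{API_BASE}/v1/cryptocurrency/quotes/latest?{query}",
        headers={"X-CMC_PRO_API_KEY": api_key, "Accept": "application/json"},
    )

def latest_price(api_key, symbol="AVAX", convert="USD"):
    # Live request; extracts the fiat price from the nested response.
    with urlopen(quotes_request(api_key, symbol, convert)) as resp:
        return json.load(resp)["data"][symbol]["quote"][convert]["price"]
```

Unlike CoinGecko's free endpoint, every CoinMarketCap endpoint requires a key from their developer portal; a sandbox environment is also available for testing integrations without consuming production credits.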
## Documentation For API documentation and integration guides, visit the [CoinMarketCap API Documentation](https://coinmarketcap.com/api/documentation/v1/). ## Use Cases - **Market Research**: Market data and analytics across Avalanche tokens. - **Development**: Integrate price and market data into applications. - **Investment Analysis**: Track market trends and token performance. - **Due Diligence**: Research token and project information. - **Market Monitoring**: Real-time tracking of market movements. # Colossus Digital (/integrations/colossus) --- title: Colossus Digital category: Custody available: ["C-Chain", "P-Chain"] description: The Colossus Institutional Hub is a unified platform for institutional staking, governance, and custody workflows across multiple chains and custody providers. logo: /images/colossus.png developer: Colossus Digital website: https://colossus.digital/ documentation: https://docs.colossus.digital/ --- ## Overview Colossus Institutional Hub is a platform for institutional staking, governance, and custody operations across PoS networks. It connects clients, custodians, and validators through a unified API and UI. It works with third-party custody platforms like Fireblocks, Ledger Enterprise, DFNS, and HSMs, providing policy-governed transaction orchestration at scale. ## Features - **Unified API & UI**: A single interface for staking, governance, and delegation across all major L1 networks and custody solutions. - **Multi-Chain Support**: Stake and manage assets across the Avalanche C-Chain and P-Chain, and more. - **Custodian Agnostic**: Integrates with Fireblocks, Ledger Enterprise, DFNS, and other self-managed custody setups. - **Secure Transaction Crafting**: Multi-party crafting and governance-driven approval using quorum-based threshold initiation. - **Transaction Lens & Decoder**: Transaction preview and validation tools for compliant transaction generation. 
- **Governance Workflows**: Define approval policies, quorum thresholds, and audit trails for every transaction. - **Auditing & Reporting**: Real-time reporting for compliance, treasury tracking, and validator operations. ## Getting Started 1. **Visit the Website**: Learn more about Colossus and the Institutional Hub at [colossus.digital](https://colossus.digital/). 2. **Explore the Docs**: Review the [Colossus Developer Docs](https://docs.colossus.digital/) to understand how to integrate. 3. **Connect a Custody Provider**: Link your Fireblocks, DFNS, or Ledger Enterprise setup to begin delegation. 4. **Configure Policies**: Define your governance approval rules, transaction initiation quorum, and transaction controls. 5. **Start Staking**: Delegate assets to validators across supported networks with full custody security. ## Documentation For more details, explore the [Colossus Documentation Portal](https://docs.colossus.digital/). ## Use Cases - **Exchanges**: Offer staking-as-a-service without taking custody risk. - **Custodians**: Enable delegation and governance services directly from MPC or HSM-based vaults. - **Funds and Treasuries**: Manage validator positions and governance votes with enterprise controls. - **Staking Providers**: Connect to a two-sided marketplace to source institutional delegations. # Copper (/integrations/copper) --- title: Copper category: Custody available: ["C-Chain", "All Avalanche L1s"] description: "Copper provides institutional digital asset custody and treasury management with MPC-based security." logo: /images/copper.webp developer: Copper website: https://copper.co/ documentation: https://copper.co/ --- ## Overview Copper is a digital asset custody and treasury management platform for institutions. It provides secure storage for cryptocurrencies and digital assets, with MPC-based security and tools for managing portfolios across exchanges. 
## Features - **Institutional Custody**: Secure storage for institutional asset management. - **Multi-Party Computation (MPC)**: Cryptographic key management with no single point of failure. - **Treasury Management**: Tools for managing digital asset treasuries and portfolios. - **Trading Integration**: Connect to multiple exchanges while maintaining custody. - **Compliance Tools**: Built-in compliance and governance for regulatory requirements. - **Multi-Signature Security**: Flexible approval workflows and multi-sig capabilities. - **Cold Storage**: Offline storage for long-term asset protection. - **Insurance Coverage**: Asset protection through insurance policies. ## Documentation For more information, visit the [Copper website](https://copper.co/). ## Use Cases - **Asset Management**: Custody for institutional investment portfolios. - **Trading Operations**: Trade across multiple venues while keeping assets in custody. - **Corporate Treasury**: Corporate digital asset management. - **Fund Administration**: Custody and management for investment funds. - **Compliance**: Meet regulatory requirements with built-in governance tools. # Core (/integrations/core) --- title: Core category: Wallets and Account Abstraction available: ["C-Chain", "All Avalanche L1s"] description: "Core is the native wallet SDK for the Avalanche blockchain." logo: /images/core.svg developer: Ava Labs website: https://core.app/en/ documentation: https://docs.core.app/ featured: true --- ## Overview Core is the native wallet SDK from Ava Labs for the Avalanche blockchain. It gives developers the tools to add wallet functionality to dApps on Avalanche -- asset management, smart contract interactions, and transaction signing. ## Features - **Avalanche-Native**: Built specifically for Avalanche, with full compatibility across the network. - **Wallet Functions**: Asset management, transaction signing, and interaction with Avalanche L1s and smart contracts. 
- **User-Friendly Interface**: Designed for straightforward asset management and navigation. - **Security**: Private key protection following wallet development best practices. - **Cross-Platform Support**: Available for web, mobile, and desktop. ## Getting Started To start using Core in your application, follow these steps: 1. **Visit the Core Website**: Explore the [Core website](https://core.app/en/) to learn more about its features and capabilities. 2. **Access the SDK**: Go to the [Core Documentation](https://docs.core.app/) to download the SDK and access integration guides. 3. **Integrate Wallet Functions**: Follow the integration tutorials in the documentation to add wallet functionalities such as asset management, transaction signing, and smart contract interaction to your dApp. 4. **Test and Deploy**: Test the integrated wallet features in a development environment before deploying your application on the Avalanche mainnet. ## Documentation For detailed guides, API references, and integration examples, visit the [Core Documentation](https://docs.core.app/). ## Use Cases - **DeFi Applications**: Asset management, staking, and DeFi protocol interactions on Avalanche. - **NFT Marketplaces**: Wallet integration for buying, selling, and managing NFTs. - **Gaming dApps**: In-game asset management and purchases. - **General dApps**: Native wallet experience for any Avalanche application. # Covalent (/integrations/covalent) --- title: Covalent category: RPC Endpoints available: ["C-Chain"] description: "Covalent provides blockchain data APIs and RPC infrastructure for accessing on-chain data across multiple networks." logo: /images/covalent.png developer: Covalent website: https://www.covalenthq.com/ documentation: https://www.covalenthq.com/docs/ --- ## Overview Covalent provides APIs and RPC services for accessing structured on-chain data. 
It covers 100+ blockchain networks including Avalanche, giving developers access to historical and real-time data through a single API. ## Features - **Unified API**: One API for accessing data across 100+ blockchain networks. - **Historical Data**: Complete historical blockchain data with no gaps. - **RPC Infrastructure**: RPC nodes for direct blockchain access. - **Real-Time Updates**: Current blockchain state and events. - **Structured Data**: Pre-indexed blockchain data ready for consumption. - **Token Data**: Token balances, transfers, and price information. - **Transaction History**: Full transaction history for addresses and contracts. - **Multi-Chain Support**: Consistent API across networks. ## Documentation For detailed guides and API documentation, visit the [Covalent Documentation](https://www.covalenthq.com/docs/). ## Use Cases - **Wallet Applications**: Token balances and transaction histories. - **Analytics Platforms**: On-chain data for analysis and reporting. - **DeFi Applications**: Real-time and historical DeFi protocol data. - **Portfolio Trackers**: Multi-chain portfolio tracking. - **NFT Platforms**: NFT metadata and ownership information. # Crossmint (/integrations/crossmint) --- title: Crossmint category: Enterprise Solutions available: ["C-Chain"] description: Crossmint is an enterprise blockchain platform providing APIs for on-chain applications, supporting wallets, NFT minting, verifiable credentials, and NFT checkout with multiple payment methods. logo: /images/crossmint.jpeg developer: Crossmint website: https://crossmint.com/ documentation: https://docs.crossmint.com/ --- ## Overview Crossmint is a blockchain platform used by over 30,000 enterprises and developers. Its APIs handle wallet creation, NFT minting, credential management, and checkout flows for NFTs. ## Features - **Wallets**: Embedded wallets using MPC (Multi-Party Computation), embeddable in your site and fully interoperable. 
- **NFT Minting**: Mint, distribute, edit, and burn NFTs. Airdrop to wallets or via email using API calls, no-code tools, QR codes, or NFC chips. - **Verifiable Credentials**: Issue and manage verifiable credentials at scale with on-chain verification. - **NFT Checkout**: Sell NFTs via credit cards, Apple Pay, Google Pay, and cross-chain tokens. ## Getting Started To start using Crossmint, follow these steps: 1. **Sign Up**: Visit [Crossmint](https://crossmint.com/) and create an account. 2. **Set Up Your Project**: Set up wallets, NFT minting tools, and verifiable credentials for your project in the Crossmint dashboard. 3. **Get API Key**: Obtain your API key to authenticate requests and integrate Crossmint's services into your application. 4. **Integration**: Use Crossmint's APIs or no-code tools to enable wallets, NFT checkout, and credential management in your app or website. ## Documentation For detailed setup instructions, API references, and more, visit the [Crossmint Documentation](https://docs.crossmint.com/). ## Use Cases Crossmint is ideal for: - **NFT Marketplaces**: Mint, sell, and manage NFTs with multiple payment methods including credit cards and cross-chain tokens. - **Enterprises**: Issue verifiable on-chain credentials at scale. - **Developers**: Integrate wallets, NFT minting, and checkout into your applications. # Cubist (/integrations/cubist) --- title: Cubist category: Wallets and Account Abstraction available: ["C-Chain", "All Avalanche L1s"] description: Low-latency API for generating keys and signing transactions inside secure hardware. logo: /images/cubist.svg developer: Cubist website: https://cubist.dev/ documentation: https://cubist.dev/ --- ## Overview Cubist is a wallet SDK that provides a low-latency API for generating cryptographic keys and signing blockchain transactions within secure hardware environments. 
It uses hardware security modules (HSMs) and secure enclaves to protect private keys and run sensitive operations inside a trusted execution environment. ## Features - **Hardware Security**: All key generation and transaction signing run within secure hardware (HSMs or secure enclaves), protecting against key extraction attacks. - **Low-Latency API**: Optimized for speed, with low-latency interactions with secure hardware for performance-sensitive applications. - **Flexible Integration**: Supports multiple blockchain networks and integrates with web, mobile, and desktop applications. - **Scalability**: Handles high transaction volumes for applications that need both security and throughput. - **Compliance**: Secure hardware execution helps meet security and compliance requirements for enterprise use cases. ## Getting Started 1. **Visit the Cubist Website**: Explore the [Cubist website](https://cubist.dev/) for architecture details, SDK access, and documentation. 2. **Integrate Secure Key Management**: Follow the guides to add Cubist's key management and transaction signing to your application. 3. **Optimize for Performance**: Use Cubist's low-latency API to maintain high throughput when handling large transaction volumes. 4. **Test and Deploy**: Integrate and test the SDK in a development environment, then deploy to production. ## Documentation For more details, visit the [Cubist website](https://cubist.dev/) and navigate to the documentation section. ## Use Cases - **Enterprise Blockchain Applications**: Protect sensitive operations with hardware-backed key management. - **DeFi Platforms**: Run all transaction signing within a secure enclave. - **Cryptocurrency Wallets**: Build wallets that store private keys inside hardware security modules. - **Regulated Industries**: Meet regulatory requirements in finance and healthcare with hardware-secured transaction signing. 
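Cubist's SDK surface is not documented on this page, so the following is a purely illustrative Python sketch of the hardware-backed signing pattern described above. Every name here is hypothetical, and an HMAC stands in for the signature a real HSM or enclave would produce; the point is the API shape: key material is generated inside the signer and never crosses its boundary.

```python
# Illustrative only: class and method names are hypothetical, not Cubist's API.
import hashlib
import hmac
import os

class EnclaveSigner:
    """Stand-in for a hardware-backed signer: the key is created inside
    the object and is never returned to the caller."""

    def __init__(self):
        # In a real HSM or secure enclave, this key is generated in hardware
        # and cannot be extracted; os.urandom stands in for that here.
        self._key = os.urandom(32)

    def public_key_id(self) -> str:
        # Callers see only a key identifier, never the key material itself.
        return hashlib.sha256(self._key).hexdigest()[:16]

    def sign(self, payload: bytes) -> bytes:
        # Signing happens "inside" the signer; HMAC-SHA256 stands in for the
        # ECDSA signature secure hardware would produce.
        return hmac.new(self._key, payload, hashlib.sha256).digest()

    def verify(self, payload: bytes, signature: bytes) -> bool:
        return hmac.compare_digest(self.sign(payload), signature)
```

The design point is that application code holds an `EnclaveSigner` handle and submits payloads for signing, but has no code path that reads `_key`; a low-latency remote API can expose the same `sign`/`verify` shape over the network.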
# DeFi Llama (/integrations/defilama) --- title: DeFi Llama category: Analytics & Data available: ["C-Chain", "All Avalanche L1s"] description: "DeFi Llama is a transparent, open-source DeFi analytics platform providing TVL, yield, and protocol data for Avalanche." logo: /images/defilama.jpeg developer: DeFi Llama website: https://defillama.com/ documentation: https://docs.llama.fi/ --- ## Overview DeFi Llama is the largest TVL (Total Value Locked) aggregator for DeFi protocols. It tracks TVL, yields, stablecoin data, and protocol metrics across multiple chains including Avalanche, with a focus on accuracy and transparency. All data and methodology are open source. ## Features - **TVL Tracking**: Real-time Total Value Locked across Avalanche protocols. - **Yield Analytics**: Yield farming opportunities on Avalanche. - **Protocol Insights**: Metrics for individual protocols and their performance. - **Stablecoin Data**: Stablecoin circulation and usage across Avalanche. - **Open Source**: Transparent methodology with community-driven updates. - **API Access**: Free API for developers and analysts. - **DEX Analytics**: Volume, liquidity, and other metrics for Avalanche DEXs. ## Getting Started To use DeFi Llama for Avalanche analytics: 1. **Visit Platform**: Go to [DeFi Llama](https://defillama.com/). 2. **Explore Avalanche**: Navigate to the Avalanche chain section for ecosystem-specific data. 3. **Track Protocols**: Monitor your favorite Avalanche protocols' performance and TVL. 4. **Access APIs**: Integrate DeFi Llama's data using their free API. ## Documentation For detailed API documentation and integration guides, visit the [DeFi Llama Documentation](https://docs.llama.fi/). ## Use Cases - **Market Research**: Track protocol growth and market trends on Avalanche. - **Investment Analysis**: Compare yields and TVL across protocols. - **Risk Assessment**: Monitor protocol health and market dynamics. 
- **Development**: Integrate DeFi analytics into applications via API. - **Protocol Comparison**: Compare performance across chains and protocols. # DexGuru (/integrations/dexguru) --- title: DexGuru category: Explorers available: ["C-Chain"] description: "DexGuru is a DeFi analytics platform providing real-time trading data, token analysis, and transaction exploration across DEXs." logo: /images/dexguru.png developer: DexGuru website: https://dex.guru/ documentation: https://docs.dex.guru/ --- ## Overview DexGuru is a DeFi analytics and trading platform that combines block explorer functionality with real-time DEX trading data. It covers multiple chains including Avalanche, letting traders and analysts track tokens, analyze trading patterns, and explore on-chain DeFi activity. ## Features - **DEX Analytics**: Real-time trading data across decentralized exchanges - **Token Explorer**: Detailed token information and metrics - **Wallet Tracking**: Monitor wallet activity and holdings - **Trading Interface**: Execute trades directly through the platform - **Price Charts**: Advanced charting with technical analysis tools - **Multi-Chain Support**: Analytics across Avalanche and other networks ## Getting Started 1. **Visit DexGuru**: Access [DexGuru](https://dex.guru/) 2. **Select Avalanche**: Choose Avalanche network from supported chains 3. **Explore Tokens**: Search for tokens and view analytics 4. **Track Wallets**: Add wallets to track activity 5. **Trade**: Execute trades through integrated DEX aggregation ## Documentation For platform guides, visit [DexGuru Documentation](https://docs.dex.guru/). 
## Use Cases - **Token Analysis**: Research tokens with on-chain data - **Trading**: Access DEX trading with real-time analytics - **Wallet Monitoring**: Track whale and notable wallet activity - **Market Research**: Analyze DeFi trading patterns and trends # DEX Screener (/integrations/dexscreener) --- title: DEX Screener category: Token Aggregators available: ["C-Chain", "All Avalanche L1s"] description: "DEX Screener provides real-time analytics and trading data for decentralized exchanges on Avalanche." logo: /images/dexscreener.png developer: DEX Screener website: https://dexscreener.com/ documentation: https://docs.dexscreener.com/ --- ## Overview DEX Screener provides real-time data for decentralized exchanges on Avalanche. It covers trading pair analysis, price charts, liquidity information, and market metrics across multiple DEXs. ## Features - **Real-Time Charts**: Live price and volume charts for trading pairs on Avalanche DEXs. - **Multi-DEX Support**: Track pairs across multiple Avalanche DEXs at once. - **Price Alerts**: Custom alerts for price movements and volume. - **Liquidity Analysis**: Liquidity pool data and movement tracking. - **Trading Pair Analytics**: Volume, liquidity, and price change metrics. - **Market Sentiment**: Social metrics and trading patterns. - **API Access**: API for developers and traders. ## Getting Started To begin using DEX Screener: 1. **Access Platform**: Visit [DEX Screener](https://dexscreener.com/). 2. **Select Network**: Choose Avalanche to view C-Chain pairs. 3. **Analyze Pairs**: - Search for specific trading pairs - View real-time charts and metrics - Set up custom watchlists 4. **Configure Alerts**: Set up price and volume notifications for tracked pairs. ## Documentation For API documentation and integration guides, visit the [DEX Screener Documentation](https://docs.dexscreener.com/). ## Use Cases - **Trading Analysis**: Real-time market analysis and pair monitoring. 
- **Liquidity Tracking**: Monitor liquidity depths and movements across DEXs. - **Market Research**: Analyze new tokens and trading pairs on Avalanche. - **Portfolio Management**: Track multiple pairs and set price alerts. - **Development Integration**: Build trading tools using the DEX Screener API. # DIA (/integrations/dia) --- title: DIA category: Oracles available: ["C-Chain"] description: DIA is an open-source oracle platform that provides transparent, crowd-verified price oracles for DeFi applications. logo: /images/DIA.png developer: DIA website: https://diadata.org/ documentation: https://docs.diadata.org/ --- ## Overview DIA (Decentralized Information Asset) is an open-source oracle platform that delivers transparent, crowd-verified data feeds for DeFi applications. It uses community-driven data collection and validation to keep price oracles accurate and tamper-resistant. ## Features - **Open-Source Platform**: Data feeds and infrastructure are fully open-source, so developers can inspect, contribute to, and customize the platform. - **Crowd-Verified Data**: Data is sourced, validated, and verified by the community, reducing the risk of manipulation. - **Multi-Chain Support**: Oracles compatible with multiple blockchain networks, including Ethereum, Binance Smart Chain, and more. - **Customizable Oracles**: Developers can create custom data feeds tailored to specific use cases. - **DeFi Focused**: Oracles designed for DeFi applications, providing price feeds and financial data for decentralized financial products. ## Getting Started 1. **Explore DIA**: Visit the [DIA website](https://diadata.org/) to learn about the platform. 2. **Access the Documentation**: Refer to the [DIA Documentation](https://docs.diadata.org/) for API usage and custom oracle creation. 3. **Set Up Data Feeds**: Integrate DIA's price oracles into your DeFi application. 4. **Customize Oracles**: Create custom oracles for your specific data needs (price feeds, financial data, etc.). 5. 
**Deploy and Monitor**: Deploy your application and monitor data feed performance. ## Documentation For more details, visit the [DIA Documentation](https://docs.diadata.org/). ## Use Cases - **Lending and Borrowing Platforms**: Price oracles for accurate asset valuations in lending and borrowing. - **Decentralized Exchanges (DEXs)**: Real-time, crowd-verified price feeds for trade execution. - **Stablecoins**: Reliable price data for maintaining stablecoin pegs. - **Derivatives Markets**: Financial data feeds for decentralized derivatives products. # Dinari (/integrations/dinari) --- title: Dinari category: Tokenization Platforms available: ["C-Chain"] description: Dinari provides tokenized access to U.S. public equities for non-U.S. retail investors through a B2B2C model, partnering with neobanks and fintech platforms to offer regulated blockchain-based stock exposure. logo: /images/dinari.svg developer: Dinari website: https://dinari.com/ documentation: https://docs.dinari.com/ --- ## Overview Dinari provides non-U.S. retail investors with tokenized access to U.S. public equities through blockchain. It operates via a B2B2C model, partnering with neobanks, fintech apps, and embedded finance platforms that want to offer regulated U.S. stock exposure using on-chain infrastructure. Each token represents economic exposure to actual shares held in custody, giving users U.S. equity performance with the efficiency and 24/7 availability of blockchain. ## Features - **Tokenized U.S. Equities**: Access to major U.S. public stocks as blockchain-based tokens. - **B2B2C Distribution**: Partner with local platforms rather than directly serving end consumers. - **Non-U.S. Focus**: Specifically designed for international retail investors outside the United States. - **Regulatory Compliance**: Fully compliant structure for offering U.S. equity exposure across jurisdictions. - **Real Share Backing**: Tokens represent economic rights to actual shares held in regulated custody. 
- **Fractional Ownership**: Enable fractional share purchases through tokenization. - **24/7 Trading**: Potential for around-the-clock trading depending on partner platform implementation. - **Blockchain Efficiency**: Leverage blockchain for instant settlement, programmability, and reduced costs. - **White-Label Integration**: Platform partners can offer tokens under their own brand. - **Multi-Chain Support**: Deploy tokens across multiple blockchain networks. - **API Integration**: Comprehensive APIs for platform partners to integrate equity tokens. - **Corporate Actions**: Automatic handling of dividends, splits, and other corporate actions. - **Transparent Pricing**: Clear fee structure for token purchase, holdings, and redemptions. - **Custody and Safety**: Underlying shares held by regulated custodians. ## Getting Started ### For Platform Partners (Neobanks, Fintechs, Embedded Finance): 1. **Partnership Discussion**: Contact Dinari to explore partnership opportunities for your platform. 2. **Platform Assessment**: Evaluate fit for your user base: - Confirm your users are primarily non-U.S. residents - Assess demand for U.S. equity exposure - Review regulatory requirements in your operating jurisdictions - Determine integration approach and timeline 3. **Legal and Compliance**: Work with Dinari on legal structure: - Establish compliant offering structure for your markets - Navigate local securities regulations - Set up necessary licensing or partnerships - Implement KYC/AML procedures 4. **Technical Integration**: Integrate Dinari's technology: - Connect to Dinari's APIs for token issuance and redemption - Integrate with your platform's wallet infrastructure - Implement user interfaces for equity token trading - Test in sandbox environment - Ensure corporate actions handling 5. **Launch**: Go live with tokenized U.S. equities: - Offer selected U.S. 
stocks as tokens to your users - Enable users to buy, hold, and sell equity tokens - Provide transparent reporting on holdings and performance - Handle customer support with Dinari's assistance ### For Investors: 1. Access Dinari-powered equity tokens through partner platforms in your region 2. Complete KYC with your platform 3. Fund your account through local payment methods 4. Purchase tokenized U.S. stocks available on the platform 5. Hold tokens in your wallet or custody solution 6. Receive dividends and benefits of underlying shares 7. Sell or redeem tokens as needed ## Avalanche Support Dinari's multi-chain architecture supports deployment across various blockchain networks. As an EVM-compatible platform, Avalanche C-Chain provides an excellent environment for Dinari's tokenized equities, offering high throughput, low transaction costs, and fast finality—ideal characteristics for equity token trading and settlement. ## Supported Equities Dinari provides tokenized access to major U.S. public companies: - **Technology Stocks**: Leading tech companies like Apple, Microsoft, Google, Amazon, Meta, Tesla - **Financial Services**: Major banks and financial institutions - **Healthcare**: Pharmaceutical and healthcare companies - **Consumer**: Consumer goods and retail companies - **Industrial**: Manufacturing and industrial companies - **Energy**: Energy and resource companies The specific list of available tokenized equities may vary by platform partner and jurisdiction. ## Use Cases Dinari's infrastructure serves multiple platform types: **Neobanks**: Digital banks can offer customers investment capabilities in U.S. stocks without building brokerage infrastructure. **Fintech Apps**: Consumer fintech applications can add U.S. equity investing as a feature. **Crypto Exchanges**: Cryptocurrency exchanges can expand offerings to include tokenized stocks. **Embedded Finance Platforms**: Platforms offering embedded financial services can integrate equity investing. 
**Wealth Management Apps**: Digital wealth managers can provide U.S. market exposure to international clients. **Investment Platforms**: International investment platforms can offer U.S. stocks to their user base. ## Platform Partner Benefits **Turnkey Solution**: Dinari handles legal structure, custody, corporate actions, and blockchain infrastructure. **Regulatory Compliance**: Compliant framework reduces regulatory burden on platform partners. **Market Expansion**: Add popular U.S. stocks to platform offerings without brokerage licensing. **Revenue Opportunity**: Share in revenue from user trading and holdings. **Quick Integration**: API-based integration enables faster time to market. **White-Label**: Offer equity tokens under your own brand and user experience. **No Custody Burden**: Dinari manages underlying share custody and corporate actions. **Blockchain Benefits**: Leverage blockchain efficiency while offering traditional assets. ## Investment Benefits for End Users **Access to U.S. Markets**: Invest in American companies from anywhere in the world. **Lower Barriers**: Reduced minimum investments through fractional tokenization. **Local Platform**: Access through familiar local platforms in local languages. **Blockchain Efficiency**: Faster settlement and potentially lower costs than traditional brokers. **Fractional Shares**: Buy small portions of expensive stocks. **Transparent Ownership**: Clear blockchain-based record of token holdings. **Potential 24/7 Trading**: Ability to trade outside traditional market hours (platform dependent). ## Legal and Compliance Structure Dinari maintains full compliance: - **Securities Compliance**: Tokens structured as compliant securities or derivatives - **Multi-Jurisdictional**: Frameworks adaptable to different international regulations - **U.S. Securities Laws**: Proper structure for exposure to U.S. 
equities - **Custody Regulations**: Underlying shares held by regulated custodians - **AML/KYC**: Integrated compliance processes through platform partners - **Corporate Actions**: Legally compliant handling of dividends, splits, and proxy voting - **Regular Audits**: Third-party audits of share backing and operations ## Technology Infrastructure Dinari provides the following technology: - **Token Issuance**: Smart contracts for minting and burning equity tokens - **APIs**: RESTful APIs for platform integration - **Multi-Chain**: Deploy tokens across multiple blockchains - **Oracle Integration**: Price feeds for accurate token valuations - **Corporate Actions Engine**: Automated handling of dividends and other events - **Custody Interface**: Connection to regulated custodians holding underlying shares - **Compliance Layer**: Smart contract-based restrictions for regulatory compliance - **Reporting**: Tools for transparency and regulatory reporting ## Pricing Dinari offers competitive pricing for platform partners: - **Partnership Fees**: Revenue sharing arrangements with platform partners - **Token Fees**: Fees on token issuance and redemptions - **Management Fees**: Potential annual fees on tokenized holdings - **Platform Integration**: Setup and ongoing platform access fees - **Custom Structures**: Tailored pricing for large partners Contact Dinari for specific pricing based on your platform and expected volumes. ## Backing and Trust **Regulated Custody**: Underlying U.S. shares held by regulated custodians. **Real Share Backing**: Each token economically represents actual shares, not derivatives or synthetic exposure. **Transparent Operations**: Regular attestations and disclosures of share backing. **Experienced Team**: Leadership with backgrounds in traditional finance, fintech, and blockchain. **Compliance First**: Regulatory compliance built into the model from inception. ## Competitive Advantages **International Focus**: Purpose-built for non-U.S. 
investors, not U.S. retail market. **B2B2C Model**: Partner with existing platforms rather than competing with them. **Regulatory Expertise**: Navigated complex cross-border securities regulations. **Blockchain-Native**: Built for blockchain from the ground up, not adapting legacy systems. **Fractional Access**: Tokenization enables fractional ownership of any U.S. stock. **Corporate Actions**: Automated handling reduces operational complexity for partners. **Multi-Chain**: Flexibility to deploy where partner platforms operate. # DipDup (/integrations/dipdup) --- title: DipDup category: Indexers available: ["C-Chain"] description: DipDup is a Python framework for building smart contract indexers. It helps developers focus on business logic instead of writing boilerplate code to store and serve data. logo: /images/dipdup.png developer: DipDup website: https://dipdup.io/ documentation: https://dipdup.io/docs --- ## Overview DipDup is a Python framework for building smart contract indexers. It handles data storage and retrieval so developers can focus on business logic instead of writing boilerplate. It works on the Avalanche C-Chain. ## Features - **Python Framework**: Build custom smart contract indexers in Python. - **Focus on Business Logic**: DipDup handles data storage and serving; developers write only the indexing logic. - **Scalable Indexing**: Scale indexing solutions to handle increasing data and transaction volumes. - **Customizability**: Customize indexers to meet specific project needs. ## Getting Started 1. **Visit the DipDup Website**: Explore the [DipDup website](https://dipdup.io/) to learn about the framework. 2. **Access the Documentation**: Refer to the [DipDup Documentation](https://dipdup.io/docs) for step-by-step guides on setting up and using DipDup. 3. **Set Up Your Indexer**: Follow the documentation to set up a custom smart contract indexer using DipDup. 4. 
**Implement Business Logic**: Focus on implementing your specific business logic while DipDup handles the data management. 5. **Deploy on C-Chain**: Utilize DipDup to deploy and manage your indexer on the Avalanche C-Chain. ## Documentation For more details, visit the [DipDup Documentation](https://dipdup.io/docs). ## Use Cases - **Blockchain Developers**: Build smart contract indexers on the Avalanche C-Chain. - **Data-Intensive Applications**: Simplify indexing for applications that need reliable data storage and retrieval. - **Custom Indexer Development**: Tailor indexers to the specific requirements of your blockchain projects. # dKiT API/SDK (/integrations/dkit) --- title: dKiT API/SDK category: Crosschain Solutions available: ["C-Chain"] description: "dKit is a production-ready cross-chain DEX aggregation API/SDK that enables native swaps across 15+ blockchains including Avalanche by routing liquidity through THORChain, Chainflip, Maya, Jupiter, Garden and 1inch." logo: /images/dkit.png developer: dKit Team website: https://eldorito.club/ documentation: https://docs.eldorito.club/dkit-by-eldorito --- ## Overview dKiT is a unified cross-chain DEX aggregation layer that lets dApps execute **native** token swaps across 19+ blockchains, including Avalanche’s C-Chain, via a single API/SDK. It abstracts multiple protocols and aggregators behind one interface, providing best-price discovery, non-custodial execution, and real-time tracking. ## Features - **Native Cross-Chain Swaps**: Swap native assets (e.g. BTC ↔ AVAX). - **Smart Routing**: Finds optimal routes across multiple protocols and DEX aggregators. - **Streaming Swaps**: Splits large trades to reduce price impact. - **One-Transaction UX**: “One-signature” cross-chain swaps. - **Revenue & Fee Streaming**: Add custom affiliate fees and automatically stream them to your wallets. - **Broad Wallet Support**: Easily integrate 20+ wallets (EVM, Solana, Cosmos, hardware, and WalletConnect). 
- **Avalanche Support**: Support for Avalanche (C-Chain) cross-chain swaps. ## Getting Started 1. **Review the Docs** Browse the [dKit Documentation](https://docs.eldorito.club/dkit-by-eldorito) to understand concepts, endpoints, and workflows. 2. **Get an API Key** Request API keys on [El Dorito](https://eldorito.club/) → **Get API Key**. 3. **Install & Configure** - Use the SDK (e.g., `@doritokit/sdk`) or call the REST API directly. - Provide any required API keys (e.g., EVM explorers or UTXO helpers). - Enable Avalanche (C-Chain) within your chain list. 4. **Quote → Execute → Track** - Call `/v1/quote` to get the best route and expected output. - Execute the transaction. - Poll `/v1/track` to monitor progress, including streaming status. 5. **Monetize** Configure **affiliate fees** so a share of volume routed via your app is streamed to your addresses. ## Documentation - **Introduction & Concepts**: [dKit Docs](https://docs.eldorito.club/dkit-by-eldorito) - **API Reference / Swagger**: [dKit SDK Setup](https://docs.eldorito.club/dkit-by-eldorito/set-up-the-sdk) - **Quick Start**: End-to-end example for quoting, executing, and tracking swaps - **Revenue Generation**: [dKit Docs](https://docs.eldorito.club/dkit-by-eldorito) - **Status Page**: [El Dorito](https://eldorito.club/) ## Supported Protocols & Aggregators - **THORChain** - **Chainflip** - **MayaChain** - **Jupiter** - **1inch** - **Garden Finance** ## Use Cases - **Cross-Chain Swaps in Any dApp**: Add native BTC ↔ EVM/Solana/Cosmos swaps to wallets and DEXs. - **Cross-Chain DEX / Router**: Power a trading interface with best-price discovery and cross-chain support. - **Bridgeless Asset Migration**: Let users move assets across ecosystems without using wrapped tokens or CEXs. - **Affiliate Monetization**: Stream fees on every swap. 
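The quote → execute → track steps above can be sketched as two small request builders. This is a hypothetical illustration only: the base URL, query-parameter names (`sellAsset`, `buyAsset`, `sellAmount`, `affiliateBps`, `txHash`), and response handling are assumptions, not the documented dKit request shape; consult the dKit docs for the real API contract.

```typescript
// Hypothetical helpers for the quote -> execute -> track flow.
// Parameter names and the base URL are assumptions for illustration.

interface QuoteParams {
  sellAsset: string;      // e.g. "BTC.BTC"
  buyAsset: string;       // e.g. "AVAX.AVAX"
  sellAmount: string;     // amount in the sell asset's base units
  affiliateBps?: number;  // optional affiliate fee in basis points
}

// Step 4a: build the /v1/quote request that returns the best route.
function buildQuoteUrl(base: string, p: QuoteParams): string {
  const q = new URLSearchParams({
    sellAsset: p.sellAsset,
    buyAsset: p.buyAsset,
    sellAmount: p.sellAmount,
  });
  if (p.affiliateBps !== undefined) {
    q.set("affiliateBps", String(p.affiliateBps));
  }
  return `${base}/v1/quote?${q.toString()}`;
}

// Step 4c: after executing the quoted route, poll /v1/track for progress.
function buildTrackUrl(base: string, txHash: string): string {
  return `${base}/v1/track?txHash=${encodeURIComponent(txHash)}`;
}
```

In practice the quote response would be executed by signing the returned transaction with the user's wallet, then the track endpoint polled until the swap (including any streaming sub-swaps) completes.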
# Dreamspace (/integrations/dreamspace) --- title: Dreamspace category: Developer Tooling available: ["C-Chain", "All Avalanche L1s"] description: Dreamspace is a drag-and-drop vibe coding canvas that lets anyone build apps and deploy smart contracts on any EVM-compatible chain. logo: /images/dreamspace.jpg developer: Dreamspace website: https://dreamspace.xyz documentation: https://docs.makeinfinite.com/docs/getting-started --- ## Overview Dreamspace is a no-code dApp builder that combines AI-powered development tools with a drag-and-drop interface. It lets creators, developers, and entrepreneurs turn ideas into functional Web3 applications -- DeFi protocols, NFT platforms, DAOs, or analytics dashboards -- without traditional coding. ## Key Features - **AI-Powered Smart Contract Generation**: Build and deploy secure contracts for DeFi, NFTs, or DAOs without writing code. - **Prompt-to-SQL Queries**: Query indexed blockchain data in natural language, unlocking real-time analytics and data-driven dApps with minimal effort. - **Visual Drag-and-Drop Builder**: Assemble contracts, queries, and UI components in an intuitive canvas to create full-stack dApps. - **Native Data Integration**: Powered by Space and Time’s ZK-proven data warehouse, enabling advanced data visualization and interaction for onchain apps. - **No-Code Accessibility**: Usable by creators of all skill levels, no blockchain coding experience required. ## Getting Started 1. **Launch Your Project**: Start from a blank project or describe your app idea, then click “Start Creating” on the homepage. 2. **Select a Component**: Click the element you want to edit; it will be highlighted with a pink border and shown as Selected in the chatbot. 3. **Edit with AI**: In canvas edit mode, type a request (e.g., “change the background to blue”) into the chatbot and press enter. Wait for Dreamspace AI to apply changes. 4. **Refine & Adjust**: Use undo/redo, resize, or move components directly in the canvas. 5. 
**Style Your App**: Customize colors, fonts, and layouts under the “Themes” tab in the chatbot. 6. **Enhance with Data & Integrations**: Add SQL queries, charts, third-party APIs, or connect smart contracts to bring your dApp to life. ## Documentation For more details, visit the [Dreamspace documentation](https://docs.makeinfinite.com/docs/getting-started). ## Use Cases - **DeFi Protocols**: Launch staking platforms, lending apps, or liquidity pools without deep blockchain coding. - **NFT Platforms**: Build marketplaces, rarity trackers, or minting sites powered by AI-generated contracts. - **DAOs**: Create governance tools, voting systems, and treasury management apps with ease. - **Blockchain Analytics Dashboards**: Query and visualize onchain data using natural language (Prompt-to-SQL). - **Rapid dApp Prototyping**: Turn ideas into functional applications quickly with drag-and-drop editing and AI-powered customization. # DSRV (/integrations/dsrv) --- title: DSRV category: Validator Infrastructure available: ["C-Chain", "All Avalanche L1s"] description: "DSRV is a blockchain infrastructure company providing institutional-grade validator services and Web3 development tools." logo: /images/dsrv.png developer: DSRV website: https://www.dsrvlabs.com/ --- ## Overview DSRV is a blockchain infrastructure company providing institutional-grade validator services, staking solutions, and Web3 development tools. Based in South Korea, DSRV operates validators across 40+ blockchain networks including Avalanche, serving institutional clients with reliable infrastructure. 
## Features - **Professional Validation**: Institutional-grade validator operations for Avalanche - **WELLDONE Studio**: Web3 development IDE and tools - **Multi-Chain Expertise**: Deep experience across 40+ blockchain networks - **High Reliability**: Enterprise-grade infrastructure with high availability - **Security Focus**: Security practices for institutional requirements - **Developer Tools**: Tools and services for Web3 application development ## Getting Started 1. **Visit DSRV**: Explore services at [DSRV Labs](https://www.dsrvlabs.com/) 2. **Choose Service**: Select validator, staking, or developer tools 3. **Contact Team**: Reach out for institutional services or enterprise needs 4. **Integration**: Implement DSRV's services in your operations 5. **Operations**: Manage through DSRV's platform and dashboards ## Use Cases - **Institutional Validation**: Run professional Avalanche validators - **Staking Services**: Provide staking solutions for institutions - **Developer Infrastructure**: Use WELLDONE tools for Web3 development - **L1 Validation**: Validate Avalanche L1 networks # Due (/integrations/due) --- title: Due category: Payments available: ["C-Chain"] description: Due is a blockchain-based payment network enabling instant, low-cost peer-to-peer and business payments with cryptocurrency and stablecoin support. logo: /images/due.png developer: Due Network website: https://www.opendue.com/ documentation: https://www.opendue.com/api --- ## Overview Due is a blockchain-based payment network for instant, low-cost peer-to-peer and business transactions. It supports cryptocurrencies and stablecoins, offering faster settlement, lower fees, and global accessibility without traditional banking infrastructure. 
The platform enables users and businesses to send and receive payments instantly across borders, accept payments in multiple currencies including stablecoins, and access financial services without the friction and costs associated with conventional payment systems. ## Features - **Instant Payments**: Real-time payment processing and settlement. - **Low Transaction Costs**: Minimal fees compared to traditional payment processors. - **Cryptocurrency Support**: Accept and send payments in multiple cryptocurrencies. - **Stablecoin Integration**: Use stablecoins for price-stable transactions. - **Cross-Border Payments**: Send money internationally without traditional banking delays. - **Peer-to-Peer Transfers**: Direct user-to-user payments without intermediaries. - **Business Payments**: Tools for merchants to accept crypto payments. - **Multi-Chain Support**: Compatible with multiple blockchain networks including Avalanche. - **No Chargebacks**: Blockchain-based finality eliminates chargeback fraud. - **Global Accessibility**: Access financial services from anywhere with an internet connection. - **Simple Integration**: APIs and tools for easy merchant integration. - **Mobile-Friendly**: Mobile-optimized payment experience. ## Getting Started To use or integrate Due: 1. **Create Account**: Sign up on the Due platform. 2. **For Individual Users**: - Set up your payment wallet - Fund your account with cryptocurrency or stablecoins - Send and receive payments instantly - Manage your transaction history 3. **For Businesses**: - Register as a merchant - Integrate payment acceptance tools - Configure payment methods and currencies - Start accepting crypto payments from customers 4. 
**Integration**: - Access Due's APIs for custom integrations - Implement payment widgets or checkout flows - Test in sandbox environment - Go live with crypto payment acceptance ## Avalanche Support Due's multi-chain architecture includes support for Avalanche C-Chain, enabling users and businesses to leverage Avalanche's high-performance infrastructure for fast, low-cost payments with AVAX and Avalanche-based stablecoins. ## Use Cases **E-Commerce**: Accept crypto payments for online stores with instant settlement. **Freelancer Payments**: Pay freelancers and contractors globally without high fees. **Remittances**: Send money internationally with lower costs than traditional services. **Peer-to-Peer**: Transfer funds directly between users instantly. **Business Services**: Accept payments for services in cryptocurrency. **Digital Goods**: Sell digital products with crypto payment options. # Dynamic (/integrations/dynamic) --- title: Dynamic category: Wallets and Account Abstraction available: ["C-Chain", "All Avalanche L1s"] description: Dynamic simplifies the creation of multi-chain experiences with its customizable suite of tools, offering onboarding, embedded wallets, efficient authentication, and scalable user management. logo: /images/dynamic.png developer: Dynamic website: https://www.dynamic.xyz/ documentation: https://docs.dynamic.xyz/ --- ## Overview Dynamic provides tools for developers to onboard users to their apps, whether crypto-native or new to Web3. It offers embedded wallets, authentication, multi-chain support, and user management. ## Features - Embedded Wallets: Spin up flexible, non-custodial embedded wallets, and focus on building experiences that matter. Abstract the complexity of crypto away from users, and onboard them with familiar and customizable UX. - Authentication: Dynamic provides secure authentication, with the ability for end-users to protect their wallets further with Passkeys, MFA, one-time codes, and more. 
- Multi-Chain Wallet Adaptor: End-users can connect hundreds of wallets across EVM, Solana, Bitcoin, Starknet, and more. Users can also get started with SMS, email, or social login options. - User Management Solution: Developers can assign end-users roles & permissions, present custom onboarding flows, and manage everything through a dashboard. And much more - try it all out for yourself in [Dynamic's live demo](https://demo.dynamic.xyz/)! ## Getting Started 1. Access the Documentation: The [Dynamic docs](https://docs.dynamic.xyz/) cover everything developers need to get started. 2. Get started on your own: Create a free account [here](https://app.dynamic.xyz) to explore Dynamic’s developer dashboard. 3. Talk with the team: [Schedule a meeting](https://www.dynamic.xyz/talk-to-us) to discuss what’s possible with Dynamic. 4. Try the live demo: In the [demo environment](https://demo.dynamic.xyz/), experiment with the multi-chain wallet adaptor and customize it. ## Documentation For more details, refer to the [Dynamic documentation](https://docs.dynamic.xyz/). ## Use Cases - Onchain Gaming - DeFi Platforms - L1s & L2s - Bridges - Multi-chain apps - Social & consumer apps # Eden Network (/integrations/eden) --- title: Eden Network category: Validator Infrastructure available: ["C-Chain", "All Avalanche L1s"] description: "[Deprecated] Eden Network provided MEV-protected validator infrastructure and transaction services for blockchain networks." logo: /images/eden.png developer: Eden Network website: https://www.edennetwork.io/ documentation: https://docs.edennetwork.io/ --- > **⚠️ Deprecated:** Eden Network wound down all services in August 2025. This page is kept for historical reference. ## Overview Eden Network was a blockchain infrastructure provider focused on MEV protection and transaction ordering services. 
Through its validator network and transaction infrastructure, Eden enabled users and protocols to access fair transaction ordering and protection from negative MEV extraction, creating a more equitable blockchain experience. ## Features - **MEV Protection**: Infrastructure designed to protect users from MEV extraction - **Validator Network**: Professional validator operations with fair ordering - **Transaction Services**: Priority transaction submission and protection - **Multi-Chain Support**: Services across EVM-compatible networks - **Protocol Integration**: Easy integration for DeFi protocols - **Transparent Operations**: Clear documentation of MEV protection mechanisms ## Getting Started To use Eden Network: 1. **Visit Eden**: Explore services at [Eden Network](https://www.edennetwork.io/) 2. **Choose Service**: Select validator delegation or transaction services 3. **Integration**: Integrate Eden's services into your workflow 4. **Use Services**: Access MEV protection and priority transactions 5. **Monitor**: Track service usage and benefits ## Documentation For technical documentation, visit [Eden Network Docs](https://docs.edennetwork.io/). ## Use Cases - **MEV Protection**: Protect transactions from front-running and sandwich attacks - **DeFi Integration**: Add MEV protection to DeFi protocols - **Validator Operations**: Participate in fair transaction ordering - **Priority Transactions**: Access priority transaction inclusion # Envio (/integrations/envio) --- title: Envio category: Indexers available: ["C-Chain", "All Avalanche L1s"] description: Envio is a full-featured data indexing solution that lets developers index and aggregate real-time and historical blockchain data for any EVM. logo: /images/envio.png developer: Envio website: https://envio.dev/ documentation: https://docs.envio.dev/docs/HyperIndex/overview --- ## Overview Envio is a data indexing platform for EVM-compatible blockchains, including Avalanche C-Chain and L1s. 
It handles both real-time and historical blockchain data indexing and aggregation, so you can query on-chain data without building your own indexing infrastructure. ## Features - **Real-Time and Historical Indexing**: Index both live and past blockchain data across EVM-compatible chains. - **EVM Compatibility**: Works with any EVM chain, including Avalanche C-Chain and L1 networks. - **Scalable**: Indexing scales with your application as traffic and data volume grow. - **Data Aggregation**: Aggregate on-chain data across contracts and events without manual processing. - **Customizable**: Define custom indexing logic to match your application's data model. ## Getting Started 1. Check out the [Envio website](https://envio.dev/) for an overview of available features. 2. Follow the [Envio Documentation](https://docs.envio.dev/docs/HyperIndex/overview) to set up your first indexer. 3. Configure indexing for the contracts and events your app needs. 4. Deploy on Avalanche C-Chain or any supported L1. ## Documentation For setup guides and API references, see the [Envio Documentation](https://docs.envio.dev/docs/HyperIndex/overview). ## Use Cases - **DApp Backends**: Index contract events and state for frontend queries. - **Analytics**: Aggregate historical on-chain data for dashboards and reporting. - **Custom Indexers**: Build indexing pipelines tailored to specific contract interactions. # Ethena (/integrations/ethena) --- title: Ethena category: Stablecoins as a Service available: ["C-Chain"] description: "Ethena provides USDe, a synthetic dollar protocol offering a crypto-native stable asset with yield-generating capabilities." logo: /images/ethena.png developer: Ethena Labs website: https://ethena.fi/ documentation: https://docs.ethena.fi/ --- ## Overview Ethena is a synthetic dollar protocol that provides USDe, a crypto-native stable asset designed to offer stability without reliance on traditional banking infrastructure. 
Through delta-neutral hedging strategies using staked assets and derivatives positions, Ethena creates a scalable, censorship-resistant stablecoin that also offers yield opportunities to holders through its sUSDe staked variant. ## Features - **Synthetic Dollar (USDe)**: Crypto-native stablecoin backed by delta-neutral positions - **Internet Bond (sUSDe)**: Staked USDe that earns protocol yield - **Delta-Neutral Design**: Stability maintained through hedged positions rather than fiat reserves - **Censorship Resistant**: No reliance on traditional banking for backing - **Transparent Backing**: On-chain verification of collateral and hedge positions - **DeFi Native**: Built to integrate with DeFi protocols ## Getting Started 1. **Visit Ethena**: Access the [Ethena platform](https://ethena.fi/) to mint or acquire USDe 2. **Mint or Purchase**: Mint USDe with eligible collateral or acquire through supported exchanges 3. **Stake for Yield**: Convert USDe to sUSDe to earn protocol yield 4. **Use in DeFi**: Deploy USDe or sUSDe across Avalanche DeFi applications ## Documentation For more details, visit the [Ethena Documentation](https://docs.ethena.fi/). ## Use Cases - **Stable Value Storage**: Hold USDe as a stable digital dollar without traditional banking exposure - **Yield Generation**: Stake USDe as sUSDe to earn protocol yield - **DeFi Collateral**: Use USDe as collateral in lending and borrowing protocols - **Trading Pairs**: Utilize USDe as a stable trading pair in DEXs # Ethernal (/integrations/ethernal) --- title: Ethernal category: Explorers available: ["C-Chain", "All Avalanche L1s"] description: "Ethernal is a block explorer designed for private EVM chains and Avalanche L1s, providing local development and testing capabilities." 
logo: /images/ethernal.png developer: Ethernal website: https://tryethernal.com/ documentation: https://doc.tryethernal.com/ --- ## Overview Ethernal is a block explorer specifically designed for private and development EVM networks, making it ideal for Avalanche L1 development and testing. Unlike public explorers, Ethernal can connect to local or private networks, providing developers with full explorer functionality during development without requiring public network deployment. ## Features - **Private Chain Support**: Explorer for local and private EVM networks - **Development Focus**: Designed for development and testing workflows - **Contract Decoding**: Automatic ABI decoding for transactions - **Real-Time Updates**: Live transaction and block monitoring - **Self-Hosted Option**: Deploy your own Ethernal instance - **Team Collaboration**: Share explorer access with development teams ## Getting Started 1. **Sign Up**: Create account at [Ethernal](https://tryethernal.com/) 2. **Connect Network**: Configure Ethernal to connect to your Avalanche L1 3. **Sync Blocks**: Begin syncing blockchain data 4. **Upload ABIs**: Add contract ABIs for decoded transaction viewing 5. **Explore**: Use full explorer features for your network ## Documentation For setup guides, visit [Ethernal Documentation](https://doc.tryethernal.com/). ## Use Cases - **L1 Development**: Explorer for Avalanche L1 development networks - **Local Testing**: Full explorer during local development - **Team Debugging**: Shared explorer for development teams - **Private Networks**: Explorer for permissioned Avalanche networks # Etherscan (/integrations/etherscan) --- title: Etherscan category: Explorers available: ["C-Chain", "All Avalanche L1s"] description: "Etherscan provides blockchain explorer services for EVM networks, offering transaction and contract analysis tools." 
logo: /images/etherscan.png developer: Etherscan website: https://etherscan.io/ documentation: https://docs.etherscan.io/ --- ## Overview Etherscan is a widely used blockchain explorer for Ethereum and EVM-compatible networks. It supports transaction lookup, contract verification, token tracking, and on-chain analytics. Through Snowtrace (the Avalanche-specific Etherscan instance), these features are available for Avalanche C-Chain. ## Features - **Transaction Explorer**: Look up and inspect transactions, internal calls, and event logs. - **Contract Verification**: Verify and publish smart contract source code on-chain. - **Token Tracking**: View token transfers, holder distributions, and supply data. - **API Access**: RESTful APIs for programmatic access to blockchain data. - **Analytics Dashboard**: Charts and statistics for network activity and gas usage. - **Address Labeling**: Community-driven labels for known addresses and contracts. ## Getting Started 1. Visit [Snowtrace](https://snowtrace.io/) to explore Avalanche C-Chain data. 2. Sign up for an account to get API keys and access additional features. 3. Use the verification tools to publish your contract source code. 4. Integrate the Etherscan-compatible API into your dApp or tooling. ## Documentation For API documentation, visit [Etherscan Docs](https://docs.etherscan.io/). ## Use Cases - **Transaction Debugging**: Trace failed transactions and inspect internal calls. - **Contract Verification**: Publish source code so users can verify what they're interacting with. - **Data APIs**: Pull on-chain data into your application or analytics pipeline. - **Research**: Analyze token flows, holder patterns, and network usage. # Euler Finance (/integrations/euler) --- title: Euler Finance category: DeFi available: ["C-Chain"] description: "Euler Finance is a DeFi protocol offering advanced liquidity solutions and permissionless lending markets on Avalanche." 
logo: /images/euler.png developer: Euler Finance website: https://euler.finance documentation: https://docs.euler.finance --- ## Overview Euler Finance is a permissionless lending protocol deployed on Avalanche's C-Chain. It provides liquidity solutions and lending markets with a focus on capital efficiency and risk management. ## Features - **Permissionless Listings**: Any token can be listed without governance approval. - **Risk-Adjusted Borrowing**: Dynamic borrowing limits based on risk assessment. - **Capital Efficiency**: Optimized collateral utilization for maximum efficiency. - **Liquidation Protection**: Protected collateral mechanism to reduce liquidation risk. - **Flexible Interest Rates**: Reactive interest rate model that responds to market conditions. - **Multi-Collateral Support**: Use various assets as collateral for borrowing. ## Getting Started 1. **Access Platform**: Visit [Euler Finance](https://euler.finance) and select Avalanche network. 2. **Connect Wallet**: Link your Web3 wallet with AVAX for gas fees. 3. **Supply or Borrow**: - Deposit assets to earn interest - Borrow against your collateral - Monitor your positions and health factors 4. **Manage Risk**: Utilize the platform's risk management tools to optimize your positions. ## Documentation For more details, visit the [Euler Finance Documentation](https://docs.euler.finance). ## Use Cases - **Lending and Borrowing**: Efficient lending markets with competitive rates. - **Liquidity Provision**: Earn interest by supplying assets to the protocol. - **Leveraged Positions**: Access leverage for trading and yield farming strategies. - **Risk Management**: Advanced tools for managing DeFi portfolio risk. - **Capital Efficiency**: Maximize returns through optimized collateral usage. 
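The "health factors" mentioned under Getting Started can be sketched with the generic over-collateralized lending formula: a position's health factor is the risk-adjusted collateral value divided by the outstanding debt. This is a minimal illustration, not Euler's exact on-chain accounting; the function name and parameters below are hypothetical, and real liquidation LTVs come from the protocol's risk parameters.

```typescript
// Generic health-factor sketch for an over-collateralized lending position.
// Assumption: liquidation occurs when risk-adjusted collateral no longer
// covers the debt, i.e. when the ratio below drops under 1.

function healthFactor(
  collateralValueUsd: number, // USD value of supplied collateral
  liquidationLtv: number,     // liquidation threshold, e.g. 0.85 for 85%
  borrowValueUsd: number      // USD value of outstanding debt
): number {
  if (borrowValueUsd === 0) return Infinity; // no debt: cannot be liquidated
  return (collateralValueUsd * liquidationLtv) / borrowValueUsd;
}

// A position becomes liquidatable once its health factor falls below 1:
const safe = healthFactor(10_000, 0.85, 5_000);  // 1.7 -> healthy
const risky = healthFactor(10_000, 0.85, 9_000); // ~0.94 -> liquidatable
```

Monitoring this ratio (and repaying debt or adding collateral as it approaches 1) is the core of the risk management step described above.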
# Figment (/integrations/figment) --- title: Figment category: Validator Infrastructure available: ["C-Chain", "All Avalanche L1s"] description: "Figment is a leading blockchain infrastructure provider offering staking, data services, and institutional-grade validator operations." logo: /images/figment.png developer: Figment website: https://www.figment.io/ documentation: https://docs.figment.io/ --- ## Overview Figment is a blockchain infrastructure provider serving institutional clients with staking services, validator infrastructure, and blockchain data products. As one of the largest validators across proof-of-stake networks, Figment provides enterprise-grade reliability and security for Avalanche network participation. ## Features - **Institutional Staking**: Professional staking services for enterprises and funds - **DataHub**: Blockchain data APIs and services - **High Availability**: 99.9%+ uptime for validator operations - **Security First**: Industry-leading security practices and insurance coverage - **Compliance**: SOC 2 Type 2 certified with full audit trails - **Multi-Protocol**: Support for 50+ proof-of-stake networks ## Getting Started 1. **Contact Figment**: Reach out through [Figment](https://www.figment.io/) for institutional access 2. **Onboarding**: Complete Figment's institutional onboarding process 3. **Staking Setup**: Configure your staking delegation or validator operation 4. **Integration**: Access DataHub APIs for blockchain data needs 5. **Reporting**: Use Figment's dashboard for performance and rewards tracking ## Documentation For more details, visit [Figment Docs](https://docs.figment.io/). 
## Use Cases - **Institutional Staking**: Delegate or run validators with institutional-grade infrastructure - **Fund Operations**: Manage staking operations for crypto funds - **Data Services**: Access Avalanche blockchain data via DataHub APIs - **Network Participation**: Support Avalanche decentralization with professional operations # Fireblocks (/integrations/fireblocks) --- title: Fireblocks category: Custody available: ["C-Chain", "All Avalanche L1s"] description: Fireblocks is an institutional-grade platform for securing digital assets, providing secure storage, transfer, and management of cryptocurrencies. logo: /images/fireblocks.png developer: Fireblocks website: https://www.fireblocks.com/ documentation: https://developers.fireblocks.com/ --- ## Overview Fireblocks is an institutional custody and transfer platform for digital assets. It uses MPC (Multi-Party Computation) and hardware isolation to secure asset storage and movement. Fireblocks supports Avalanche C-Chain and L1s, and provides APIs for integrating custody into your own systems. ## Features - **MPC-Based Storage**: Assets are secured using MPC key management and hardware isolation -- no single point of compromise. - **Secure Transfers**: Built-in policies and approval workflows for moving assets across wallets and exchanges. - **Multi-Wallet Management**: Manage multiple wallets from a single dashboard with configurable access controls. - **API Integration**: REST APIs for programmatic wallet creation, transfers, and policy management. - **Compliance and Governance**: Configure transaction policies, approval chains, and audit trails for regulatory requirements. - **24/7 Monitoring**: Continuous monitoring with alerts for unusual activity. ## Getting Started 1. Visit the [Fireblocks website](https://www.fireblocks.com/) to request access or start a trial. 2. 
Follow the [Fireblocks Developer Documentation](https://developers.fireblocks.com/) to set up API credentials and configure your workspace. 3. Create vaults and wallets for your supported assets (including AVAX and Avalanche-based tokens). 4. Set up transaction policies and approval workflows. 5. Integrate the API into your application or operational tooling. ## Documentation For API references and integration guides, see the [Fireblocks Developer Documentation](https://developers.fireblocks.com/). ## Use Cases - **Financial Institutions**: Custody and transfer digital assets with institutional-grade security controls. - **Crypto Service Providers**: Offer custody services to clients with multi-tenant wallet infrastructure. - **Enterprises**: Add digital asset management to existing treasury operations. - **Trading Desks**: Manage asset transfers across exchanges and OTC counterparties. # Flair (/integrations/flair) --- title: Flair category: Indexers available: ["C-Chain", "All Avalanche L1s"] description: Flair provides real-time and historical custom data indexing for any EVM-compatible chain. logo: /images/flair.jpg developer: Flair website: https://flair.dev/ documentation: https://docs.flair.dev/ --- ## Overview Flair provides real-time and historical data indexing for EVM-compatible blockchains, including Avalanche C-Chain and L1 networks. It lets you define custom indexing logic so your application can query exactly the on-chain data it needs. ## Features - **Custom Indexing**: Define your own indexing logic to extract and transform the specific on-chain data your app requires. - **Real-Time Data**: Subscribe to live blockchain events as they happen. - **Historical Backfill**: Index past blocks to build a complete dataset from any starting point. - **EVM Compatible**: Works with Avalanche C-Chain, L1s, and other EVM chains. - **Scalable**: Infrastructure scales with your data volume and query load. ## Getting Started 1. 
Check out the [Flair website](https://flair.dev/) for an overview. 2. Follow the [Flair Documentation](https://docs.flair.dev/) to set up your first indexer. 3. Define custom indexing handlers for the contracts and events you care about. 4. Deploy on Avalanche C-Chain or any supported EVM chain. ## Documentation For setup guides and API references, see the [Flair Documentation](https://docs.flair.dev/). ## Use Cases - **DApp Data Layers**: Build custom query APIs for your application’s specific data needs. - **Analytics Pipelines**: Index and aggregate on-chain activity for dashboards or reporting. - **Event-Driven Systems**: React to on-chain events in real time with custom processing logic. # FordeFi (/integrations/fordefi) --- title: FordeFi category: Wallets and Account Abstraction available: ["C-Chain", "All Avalanche L1s"] description: "FordeFi is an institutional-grade MPC wallet platform providing secure key management and transaction infrastructure." logo: /images/fordefi.png developer: FordeFi website: https://www.fordefi.com/ documentation: https://docs.fordefi.com/ --- ## Overview FordeFi is an institutional-grade MPC (Multi-Party Computation) wallet platform for secure key management and transaction infrastructure. It enables institutions to manage digital assets across multiple blockchain networks including Avalanche, with enterprise security features and policy controls. ## Features - **MPC Technology**: Multi-party computation for key security - **Policy Engine**: Configurable transaction policy controls - **Multi-Chain**: Support for Avalanche and other networks - **Institutional Grade**: Enterprise security standards - **API Access**: Full API for programmatic operations - **Audit Trail**: Complete transaction audit logging ## Getting Started 1. **Contact FordeFi**: Reach out through [FordeFi](https://www.fordefi.com/) 2. **Onboarding**: Complete institutional onboarding 3. **Configuration**: Set up wallets and policies 4. 
**Integration**: Integrate APIs into your systems 5. **Operations**: Begin secure asset operations ## Documentation For technical documentation, visit [FordeFi Documentation](https://docs.fordefi.com/). ## Use Cases - **Institutional Custody**: Secure custody for institutions - **Treasury Operations**: Manage corporate digital asset treasury - **DeFi Access**: Secure access to DeFi protocols - **Enterprise Wallets**: Wallet infrastructure for enterprises # Foundry (/integrations/foundry) --- title: Foundry category: Developer Tooling available: ["C-Chain", "All Avalanche L1s"] description: Foundry is a blockchain development platform that provides a suite of tools for building and deploying blockchain applications. logo: /images/foundry.png developer: Paradigm website: https://book.getfoundry.sh/ documentation: https://book.getfoundry.sh/ --- ## Overview Foundry is a Rust-based smart contract development toolchain built by Paradigm. It handles compilation, testing, deployment, and on-chain interaction from the command line. Foundry is fast, has a built-in fuzz tester, and works with Avalanche C-Chain and L1s out of the box. ## Features - **Written in Rust**: Fast compilation and test execution compared to JavaScript-based alternatives. - **Forge**: Compile and test contracts with unit tests, fuzz tests, and invariant tests -- all written in Solidity. - **Cast**: CLI tool for interacting with deployed contracts, sending transactions, and querying chain state. - **Anvil**: Local EVM node for development and testing. - **Script-Based Deployment**: Write deployment scripts in Solidity using `forge script`. ## Getting Started 1. Install Foundry by following the instructions at [book.getfoundry.sh](https://book.getfoundry.sh/). 2. Initialize a project with `forge init`. 3. Write your contracts and tests in `src/` and `test/`. 4. Run `forge test` to execute your test suite. 5. Deploy to Avalanche C-Chain or an L1 using `forge script` or `forge create`. 6. 
Use `cast` to interact with deployed contracts from the command line. ## Documentation For installation, tutorials, and CLI references, see the [Foundry Book](https://book.getfoundry.sh/). ## Use Cases - **Contract Development**: Write, compile, and test Solidity contracts with a fast feedback loop. - **Fuzz Testing**: Use Foundry’s built-in fuzzer to find edge cases in your contract logic. - **Scripted Deployments**: Manage multi-step deployments with Solidity-based scripts. - **On-Chain Interaction**: Query contract state and send transactions from the CLI with `cast`. # Franklin Templeton (/integrations/franklin-templeton) --- title: Franklin Templeton category: Assets available: ["C-Chain"] description: "Franklin Templeton is a global investment management firm offering tokenized funds including BENJI, bridging traditional asset management with blockchain technology." logo: /images/franklin-templeton.png developer: Franklin Templeton website: https://www.franklintempleton.com/ documentation: https://www.franklintempleton.com/digital-assets --- ## Overview Franklin Templeton is a global investment management firm with over $1.5 trillion in assets under management. Through tokenized funds like BENJI (Franklin OnChain U.S. Government Money Fund), Franklin Templeton offers investment products on blockchain, combining traditional fund management with the efficiency and transparency of distributed ledger technology. # Franklin (/integrations/franklin) --- title: Franklin category: Payments available: ["C-Chain"] description: Franklin is a modern payroll and payments platform built for the stablecoin era, managing U.S. payroll, global contractor payments, vendor invoices, and onchain operations in USD and crypto. logo: /images/franklin.png developer: Franklin website: https://hellofranklin.co/ documentation: https://www.hellofranklin.co/knowledge-center --- ## Overview Franklin is a payroll and payments platform that handles U.S. 
payroll, global contractor payments, vendor invoices, and on-chain operations in both USD and crypto. It combines traditional payroll compliance (tax filing, benefits, W-2s) with native support for stablecoin and token payments. The platform includes automated tax filing, benefit integrations, crypto off-ramping, and tools for grant and token distribution. ## Features - **U.S. Payroll**: Complete U.S. payroll with federal, state, and local tax compliance. - **Global Contractor Payments**: Pay contractors worldwide in USD, stablecoins, or crypto. - **Vendor Invoice Management**: Handle accounts payable with crypto or fiat payment options. - **Onchain Operations**: Native support for blockchain-based financial operations. - **Dual Currency**: Manage both USD and cryptocurrency payments side by side. - **Automated Tax Filing**: Automatic federal, state, and local tax calculations and filings. - **Benefits Integration**: Connect with health insurance, 401(k), and other benefit providers. - **Crypto Off-Ramping**: Convert crypto to fiat automatically for compliant payments. - **Grant Distribution**: Tools for distributing token grants and equity compensation. - **Token Distribution**: Manage token airdrops and distribution campaigns. - **Stablecoin Native**: Purpose-built for USDC and other stablecoin payments. - **Multi-Chain Support**: Support for Avalanche and other major blockchain networks. - **Compliance Automation**: Maintain compliance across all payment types. - **Audit-Ready Records**: Complete documentation for financial audits. - **API Integration**: Connect Franklin with existing accounting and HR systems. - **Real-Time Dashboard**: Monitor all financial operations in one place. ## Getting Started To implement Franklin for your organization: 1. 
**Company Setup**: Create your Franklin account and configure: - Company information and tax IDs - Bank account connections - Crypto wallet connections - Payment preferences (USD, stablecoins, crypto mix) - Integration with accounting software 2. **U.S. Payroll Setup**: Configure employee payroll: - Add employees and compensation details - Set up direct deposit information - Enroll in benefits programs - Configure tax withholdings - Set payroll schedule (weekly, bi-weekly, monthly) 3. **Contractor Management**: Set up global contractor payments: - Add contractors with payment preferences - Configure payment currencies (USD, USDC, AVAX, etc.) - Set up automatic payments or approval workflows - Manage 1099s and international tax forms 4. **Accounts Payable**: Configure vendor payments: - Add vendors and payment methods - Set up invoice approval workflows - Choose payment types (ACH, wire, crypto) - Automate recurring vendor payments 5. **Onchain Operations**: Enable blockchain features: - Connect organizational wallets (Gnosis Safe, etc.) - Configure token distribution workflows - Set up grant vesting schedules - Enable crypto payment options 6. **Launch**: Run your first payroll and payments through Franklin. ## U.S. Payroll Franklin provides complete U.S. payroll capabilities: **Automated Payroll Processing**: Run payroll on your schedule with automatic tax calculations. **Multi-State Support**: Handle employees in all 50 states with state-specific compliance. **Tax Filing**: Automatic federal, state, and local tax withholdings and filings. **Direct Deposit**: Pay employees via ACH with instant funding options. **Pay Stubs**: Digital pay stubs with detailed breakdowns. **Year-End Forms**: Automatic generation of W-2s and other tax documents. **New Hire Reporting**: Automatic new hire reporting to state agencies. Franklin's payroll meets all U.S. compliance requirements and integrates directly with its crypto payment features. 
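The automated-withholding flow described above reduces to simple arithmetic per employee. A minimal sketch follows; the flat rates are hypothetical placeholders (real payroll uses graduated tax tables), and `runPayroll` is an illustrative function, not Franklin's API:

```javascript
// Minimal sketch of one payroll run with automatic withholding.
// HYPOTHETICAL_RATES are placeholder flat rates, not real tax tables.
const HYPOTHETICAL_RATES = { federal: 0.12, state: 0.05, fica: 0.0765 };

function runPayroll(employees, rates = HYPOTHETICAL_RATES) {
  return employees.map(({ name, grossPay }) => {
    const withheld = {};
    let totalWithheld = 0;
    for (const [tax, rate] of Object.entries(rates)) {
      withheld[tax] = Math.round(grossPay * rate * 100) / 100; // round to cents
      totalWithheld += withheld[tax];
    }
    return {
      name,
      grossPay,
      withheld,
      netPay: Math.round((grossPay - totalWithheld) * 100) / 100,
    };
  });
}

const run = runPayroll([{ name: 'Ada', grossPay: 4000 }]);
console.log(run[0].netPay); // 4000 - (480 + 200 + 306) = 3014
```

In a real system the withholding step would consult jurisdiction-specific tables and year-to-date caps; the point here is only that net pay is gross pay minus the sum of per-tax withholdings.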
## Global Contractor Payments Pay contractors worldwide flexibly: **Multi-Currency**: Pay in USD, stablecoins (USDC, USDT), or cryptocurrencies. **Instant Payments**: Blockchain-based payments settle in seconds, not days. **Lower Costs**: Eliminate high international wire fees with crypto payments. **Contractor Portal**: Self-service portal for contractors to manage payment info. **1099 Management**: Automatic 1099 generation for U.S. contractors. **International Tax Forms**: Handle tax documentation for global contractors. **Batch Payments**: Pay multiple contractors in a single transaction. This flexibility allows contractors to receive payment in their preferred currency while maintaining your compliance. ## Vendor Invoice Management Streamline accounts payable: **Invoice Capture**: Digital invoice submission and automated data extraction. **Approval Workflows**: Route invoices through customizable approval chains. **Flexible Payment**: Pay vendors via ACH, wire, or cryptocurrency. **Recurring Payments**: Automate regular vendor payments. **Crypto-Native Vendors**: Pay Web3 service providers in their preferred crypto. **Payment Scheduling**: Schedule payments for specific dates. **Reconciliation**: Automatic matching of invoices to payments. Franklin makes it easy to manage both traditional and crypto-native vendors. ## Onchain Operations Franklin's blockchain-native features: **Token Grants**: Distribute token grants to employees with vesting schedules. **Equity in Tokens**: Issue equity compensation in native tokens. **Airdrop Management**: Execute token airdrops to community or customers. **Grant Vesting**: Automate vesting schedules and cliff periods. **Multi-Sig Integration**: Connect with Gnosis Safe and other multi-sig wallets. **Onchain Treasury**: Manage organizational crypto treasury. **DAO Payments**: Support for DAO contributor compensation. These features make Franklin ideal for crypto-native organizations managing token-based compensation. 
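The cliff-and-vesting mechanics mentioned above are straightforward to model. A minimal sketch, assuming a linear monthly schedule after a cliff (parameter names are illustrative, not Franklin's API):

```javascript
// Cliff + linear monthly vesting as a plain function.
// Before the cliff nothing is vested; after the final month everything is.
function vestedAmount(totalGrant, monthsElapsed, cliffMonths, totalMonths) {
  if (monthsElapsed < cliffMonths) return 0;
  if (monthsElapsed >= totalMonths) return totalGrant;
  return Math.floor((totalGrant * monthsElapsed) / totalMonths);
}

// 48,000-token grant, 12-month cliff, 48-month schedule:
console.log(vestedAmount(48000, 6, 12, 48));  // 0 (still inside the cliff)
console.log(vestedAmount(48000, 12, 12, 48)); // 12000 (cliff unlocks 12/48)
console.log(vestedAmount(48000, 48, 12, 48)); // 48000 (fully vested)
```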
## Stablecoin and Crypto Payments Franklin is built for the stablecoin era: **USDC Native**: First-class support for USDC on multiple chains. **Multi-Coin Support**: Accept and pay in various cryptocurrencies. **Automatic Conversion**: Convert crypto to fiat when needed for compliance. **Off-Ramping**: Built-in tools to convert crypto to USD for traditional payments. **Market Rate Payments**: Use real-time exchange rates for crypto compensation. **Tax Reporting**: Proper tax treatment of crypto-based compensation. **Compliance**: Maintain IRS compliance even with crypto payments. ## Avalanche Integration Franklin's platform supports Avalanche C-Chain: **AVAX Payments**: Pay contractors and vendors directly in AVAX. **USDC on Avalanche**: Leverage USDC on Avalanche for low-cost stablecoin payments. **Fast Settlement**: Benefit from Avalanche's sub-second finality. **Low Fees**: Avalanche's minimal transaction costs make frequent payments economical. **Token Distribution**: Distribute Avalanche-based tokens to recipients. **Treasury Management**: Manage AVAX and Avalanche assets in organizational treasury. This makes Franklin a practical option for teams building on Avalanche who need to handle payroll and payments in one place. ## Benefits Integration Franklin connects with major benefit providers: **Health Insurance**: Integration with health insurance providers and HSA/FSA. **Retirement Plans**: Connect with 401(k) providers for automatic contributions. **Commuter Benefits**: Manage pre-tax transportation benefits. **Life Insurance**: Integrate life and disability insurance deductions. **Custom Benefits**: Configure unique benefits and deductions. **Benefit Enrollment**: Digital enrollment during onboarding. **Benefit Costs**: Track and report benefit costs accurately. ## Tax Compliance Franklin handles tax compliance: **Federal Taxes**: Automatic calculation and filing of federal payroll taxes. **State Taxes**: Multi-state withholding and filing in all 50 states. 
**Local Taxes**: City and county tax handling where applicable. **Quarterly Filings**: Automatic 941, 940, and state quarterly filings. **Year-End Forms**: W-2, 1099, and other year-end tax document generation. **Tax Deposits**: Automatic payroll tax deposits to meet IRS deadlines. **Crypto Tax Reporting**: Proper reporting of cryptocurrency compensation. **Audit Support**: Documentation and support for payroll tax audits. ## Platform Integrations Franklin connects with essential tools: **Accounting**: QuickBooks, Xero, NetSuite for financial sync. **HR Systems**: BambooHR, Rippling, Gusto for employee data. **Wallets**: Gnosis Safe, MetaMask for crypto operations. **Banking**: Connect business bank accounts for fiat operations. **Time Tracking**: Harvest, Toggl for hourly employee time. **Expense Management**: Expensify, Ramp for reimbursements. **Communication**: Slack, Discord for team notifications. ## Use Cases Franklin serves diverse modern companies: **Web3 Startups**: Pay team in mix of USD and tokens while maintaining compliance. **Crypto Companies**: Manage payroll and payments for crypto-native businesses. **Remote-First Companies**: Pay distributed teams globally in their preferred currencies. **DAOs**: Enable decentralized organizations to handle contributor payments compliantly. **Traditional Companies Adopting Crypto**: Gradually introduce crypto payments while maintaining USD payroll. **Venture-Backed Startups**: Manage equity grants, token compensation, and cash payroll together. **Global Teams**: Pay U.S. employees and international contractors from one platform. ## Pricing Franklin offers transparent pricing: - **Payroll Pricing**: Per-employee per-month for U.S. 
payroll - **Contractor Payments**: Fees based on payment volume - **Platform Fee**: Monthly fee for access to all features - **Transaction Fees**: Small fees on crypto payments - **No Setup Fees**: Free to get started - **Custom Enterprise**: Tailored pricing for large organizations Contact Franklin for pricing based on your team size and payment volumes. ## Compliance and Security Franklin maintains high standards: - **SOC 2 Type II**: Independently audited security controls - **Bank-Level Security**: Multi-layer security protecting financial data - **IRS Compliance**: Full compliance with federal payroll tax requirements - **State Compliance**: Licensed in all required states for payroll - **Data Encryption**: End-to-end encryption for sensitive data - **Audit Trails**: Complete records for compliance and audits - **GDPR Compliant**: Privacy compliance for international contractors - **Insurance**: E&O insurance and cybersecurity coverage ## Competitive Advantages **Stablecoin Native**: Built from ground up for crypto and stablecoin payments. **Full-Stack Solution**: Payroll, contractor payments, AP, and onchain ops in one platform. **Compliance + Crypto**: Maintain compliance while embracing crypto payments. **Token Distribution**: Unique capabilities for grant and token distribution. **Modern UX**: Clean, intuitive interface unlike legacy payroll software. **Fast Implementation**: Get started in days, not weeks. **Flexible Payments**: Support USD, stablecoins, and crypto in one platform. **Blockchain-Native**: Deep integration with Avalanche and other networks. 
## Customer Support Franklin provides support, including: - **Responsive Support**: Fast response times via email, chat, and phone - **Onboarding Assistance**: Guided setup and migration support - **Tax Expertise**: Access to payroll tax specialists - **Knowledge Base**: Documentation and guides - **Training**: Platform training for finance and HR teams - **API Documentation**: Developer docs for custom integrations - **Status Updates**: Real-time platform status and maintenance notifications ## Why Choose Franklin **Modern Platform**: Built for 2024+, not retrofitted from legacy systems. **Crypto-Ready**: Native support for stablecoins and crypto, not an afterthought. **All-in-One**: Eliminate multiple tools for payroll, contractor payments, and AP. **Compliance**: Never compromise on tax and regulatory compliance. **Flexibility**: Support team preferences for USD or crypto payments. **Transparency**: Clear pricing with no hidden fees. **Forward-Thinking**: Platform evolves with the changing nature of work and money. # FrostyMetrics (/integrations/frostymetrics) --- title: FrostyMetrics category: Analytics & Data available: ["C-Chain"] description: "FrostyMetrics provides analytics APIs and SDKs for accessing real-time and historical Avalanche blockchain data." logo: /images/frostymetrics.jpeg developer: FrostyMetrics website: https://www.frostymetrics.com/ documentation: https://docs.frostymetrics.com/ --- ## Overview FrostyMetrics is an analytics platform built for Avalanche, providing developers with APIs and SDKs to access blockchain data. It supports real-time metrics, historical data, and custom analytics for applications on Avalanche. 
## Features - **Data Access**: - Real-time blockchain metrics - Historical data queries - Custom data endpoints - Aggregated statistics - **Integration Tools**: - REST API - WebSocket feeds - GraphQL endpoints - SDK integration - **Analytics Types**: - Transaction metrics - Address analytics - Token statistics - Protocol data - **Developer Tools**: - Query builder - Data filtering - Custom endpoints - Rate limiting options ## Getting Started 1. **Access API**: - Register for API key - Choose subscription plan - Set up authentication 2. **Implementation**:

```javascript
import { FrostyMetricsClient } from '@frostymetrics/sdk';

const client = new FrostyMetricsClient({ apiKey: 'your-api-key' });

// Fetch real-time metrics
const metrics = await client.getMetrics({
  type: 'transaction',
  timeframe: '24h'
});
```

## Documentation For more details, visit the [FrostyMetrics Documentation](https://docs.frostymetrics.com/). ## Use Cases - **DApp Analytics**: Integrate blockchain metrics into applications - **Market Analysis**: Access historical data and trends - **Protocol Monitoring**: Track protocol performance and usage - **User Analytics**: Analyze user behavior and transactions - **Custom Dashboards**: Build custom analytics displays # Galaxy (/integrations/galaxy) --- title: Galaxy category: Assets available: ["C-Chain"] description: "Galaxy is an institutional digital assets leader offering tokenized investment products including the Galaxy CLO 2025-1, a collateralized loan obligation tokenized on Avalanche." logo: /images/galaxy.svg developer: Galaxy Digital website: https://www.galaxy.com/ documentation: https://www.galaxy.com/ --- ## Overview Galaxy Digital is a global digital assets and infrastructure firm that tokenizes institutional-grade financial products on blockchain. 
Galaxy CLO 2025-1 is a tokenized collateralized loan obligation issued natively on Avalanche, giving qualified investors access to institutional private credit through tokenized debt tranches with on-chain transparency. ## Key Features - **Tokenized CLO**: Galaxy CLO 2025-1 brings collateralized loan obligations on-chain, a first for structured credit markets - **Avalanche Native**: Debt tranches issued and tokenized directly on Avalanche for efficient settlement and trading - **Institutional Infrastructure**: Partnership with Anchorage Digital Bank as trustee and custodian for compliant custody - **Real-Time Transparency**: Integration with Accountable for continuous verification of loan performance and collateralization - **Scalable Structure**: Initial $75M closing with potential to scale up to $200M ## Avalanche Integration Galaxy chose Avalanche as the blockchain infrastructure for its tokenized CLO due to the network's high throughput, low transaction costs, and institutional-grade capabilities. The CLO's debt tranches are tokenized on Avalanche, enabling: - Instant settlement of secondary market trades - Enhanced structural transparency through on-chain data - Improved liquidity potential for traditionally illiquid credit instruments - Greater collateral efficiency for institutional investors ## Getting Started Galaxy CLO 2025-1 tokens are expected to be listed on INX's ATS platform, providing qualified investors with regulated market access. The investment is available to institutional and qualified investors through Galaxy's platform. For more information about Galaxy's tokenized investment products, visit [galaxy.com](https://www.galaxy.com/). 
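The tranche structure of a CLO can be illustrated with a simple payment waterfall: collections are applied to tranches in order of seniority, so junior tranches absorb shortfalls first. The tranche names and figures below are hypothetical examples, not terms of Galaxy CLO 2025-1:

```javascript
// Sequential payment waterfall: each tranche is paid in full (if possible)
// before any cash reaches the next, more junior tranche.
function distribute(collections, tranches) {
  let remaining = collections;
  return tranches.map(({ name, due }) => {
    const paid = Math.min(due, remaining);
    remaining -= paid;
    return { name, due, paid, shortfall: due - paid };
  });
}

const result = distribute(100, [
  { name: 'Senior', due: 70 },    // paid first
  { name: 'Mezzanine', due: 40 }, // paid from what remains
  { name: 'Equity', due: 20 },    // absorbs any shortfall
]);
console.log(result.map((t) => `${t.name}: ${t.paid}`).join(', '));
// Senior: 70, Mezzanine: 30, Equity: 0
```

Tokenizing the tranches does not change this waterfall logic; it changes where the tranche ownership records live and how quickly positions can settle.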
## Use Cases - **Institutional Credit Exposure**: Access tokenized private credit through regulated infrastructure - **Portfolio Diversification**: Add blockchain-native structured credit to digital asset portfolios - **Enhanced Liquidity**: Trade traditionally illiquid CLO positions through tokenized secondary markets - **Real-Time Monitoring**: Track collateral performance and loan health through on-chain verification # Gelato (/integrations/gelato) --- title: Gelato category: Blockchain as a Service available: ["All Avalanche L1s"] description: Gelato Cloud now supports Avalanche L1s, delivering enterprise-grade infrastructure for building, deploying, and scaling custom Avalanche networks. logo: /images/gelato.png developer: Gelato Network website: https://gelato.network/ baas_platform: https://raas.gelato.network/ documentation: https://docs.gelato.network/ featured: true --- ## Overview Gelato Cloud is a Blockchain-as-a-Service (BaaS) platform that supports deploying and managing Avalanche L1s. It provides a fully managed environment for launching L1s, with built-in account abstraction, cross-chain interoperability, and integrations with 50+ third-party providers (Etherscan, LayerZero, MoonPay, and others). Gelato is used by projects including Open Campus (Animoca Brands), Kraken's Ink, and Fox News Verify. ## Features - **Managed Deployment**: Deploy and scale Avalanche L1s on globally distributed infrastructure with 99.99% uptime SLAs. - **Cross-Chain Interoperability**: Built-in support for cross-chain messaging and bridging to connect your L1 to other networks. - **Full-Stack Infrastructure**: Includes Account Abstraction, Oracles, VRF, Functions, Relayers, and Node Sale Infrastructure. - **Gelato Marketplace**: 50+ pre-integrated services -- block explorers, indexers, bridges, smart wallets, security tooling -- available out of the box. ## Getting Started 1. **Deploy a Testnet**: Launch a custom Avalanche L1 testnet starting at $99/month. 2. 
**Configure Infrastructure**: Set up Account Abstraction, Oracles, VRF, and third-party integrations for your L1. 3. **Monitor**: Use Gelato's dashboards to track performance, usage, and costs. 4. **Go to Mainnet**: Launch on mainnet with support for Node Sales, Native Yield integrations, and marketplace services. ## Documentation For guides on deploying and managing Avalanche L1s with Gelato BaaS, visit the [Gelato Documentation](https://docs.gelato.network/). # GMX (/integrations/gmx) --- title: GMX category: DeFi available: ["C-Chain"] description: "GMX is a decentralized perpetual exchange offering low fees, deep liquidity and minimal price impact." logo: /images/gmx.jpeg developer: GMX Team website: https://gmx.io/ documentation: https://docs.gmx.io/ --- ## Overview GMX is a decentralized perpetual exchange on Avalanche C-Chain (and other networks including Arbitrum). It supports spot and perpetual trading with up to 100x leverage, using a multi-asset liquidity pool that enables low-slippage trades. Fees generated by the platform are distributed to liquidity providers and GMX stakers. ## Features - **Multi-Asset Liquidity Pool**: Trades execute against a shared pool, reducing price impact on larger orders. - **Low Fees**: Competitive fee structure with fee sharing for liquidity providers. - **Leverage Trading**: Up to 100x leverage for long and short positions. - **Zero Price Impact Trades**: Unique pricing mechanism allows large trades without moving the market. - **Real Yield**: Stakers and LPs earn ETH/AVAX from actual trading fees (not token emissions). - **Multi-Asset Collateral**: Use several supported assets as trading collateral. ## Getting Started 1. Go to [GMX](https://gmx.io/) and connect your wallet (MetaMask, Core, etc.). 2. Deposit supported assets as collateral. 3. Select a trading pair, set your leverage (up to 100x), and open a position. 4. Monitor and manage open positions from the dashboard. 
## Documentation For detailed guides and technical docs, visit the [GMX Documentation](https://docs.gmx.io/). ## Use Cases - **Spot Trading**: Swap tokens with low slippage via the liquidity pool. - **Perpetual Trading**: Open leveraged long/short positions on supported pairs. - **Yield**: Provide liquidity or stake GMX to earn a share of trading fees. - **Hedging**: Use perpetual positions to hedge exposure in other DeFi positions. # Safe (formerly Gnosis Safe) (/integrations/gnosis-safe) --- title: Safe (formerly Gnosis Safe) category: Wallets and Account Abstraction available: ["C-Chain"] description: An open-source and modular account abstraction stack. It's the essential tooling and infrastructure for integrating the Safe Smart Account into any digital platform. logo: /images/gnosis-safe.png developer: Safe (formerly Gnosis) website: https://safe.global/ documentation: https://docs.safe.global/ --- ## Overview Safe (formerly Gnosis Safe) is an open-source smart account stack. It provides multi-signature wallets and modular account abstraction infrastructure that you can integrate into your own applications. Safe Smart Accounts support configurable signer thresholds, transaction guards, and module extensions. ## Features - **Modular Architecture**: Extend Safe functionality with custom modules and guards. - **Multi-Signature Support**: Require M-of-N signers to approve transactions. - **Open-Source**: Full source code available for audit, contribution, and self-hosting. - **SDK and APIs**: Libraries for creating Safes, proposing transactions, and managing signers from your application. - **Flexible**: Handles everything from simple multi-sig wallets to complex on-chain governance setups. ## Getting Started The easiest way to integrate the Safe\{Core\} stack into your Avalanche L1 is to use the [Ash Wallet](/integrations/ash). 1. Review the [Safe Documentation](https://docs.safe.global/) for SDK setup and API references. 2. 
Create and configure a Safe Smart Account with your desired signer threshold. 3. Integrate the Safe SDK into your web or mobile application. 4. Test the full transaction flow (propose, confirm, execute) on a testnet before going to production. ## Documentation For SDK guides, API references, and deployment instructions, visit the [Safe Documentation](https://docs.safe.global/). ## Use Cases - **Treasury Management**: Multi-sig wallets for DAOs and project treasuries. - **DeFi Protocols**: Add multi-sig controls to protocol admin functions. - **Enterprise**: Manage on-chain access controls with configurable approval workflows. - **Wallet Products**: Build wallets with Smart Account features like social recovery and session keys. # GoldRush (/integrations/goldrush) --- title: GoldRush category: Indexers available: ["C-Chain", "All Avalanche L1s"] description: GoldRush (powered by Covalent) provides structured onchain data for easy app development. logo: /images/goldrush.png developer: Covalent website: https://goldrush.dev/ documentation: https://goldrush.dev/docs/chains/avalanche-c-chain --- ## GoldRush - powered by Covalent GoldRush provides Blockchain Data APIs for developers, analysts, and enterprises. The APIs offer fast, accurate access to on-chain data for DeFi dashboards, wallets, trading bots, AI agents, and compliance platforms. GoldRush consists of the following self-serve products that can be used independently or together to power your application:

| **Product Name** | **Description** | **Key Data Feeds** | **Use Cases** |
| --- | --- | --- | --- |
| **Foundational API** | Access structured historical blockchain data across 100+ chains via REST APIs | Token balances (spot & historical)<br/>Token transfers<br/>Token holders (spot & historical)<br/>Token prices (onchain)<br/>Wallet transactions<br/>Get logs | Wallets<br/>Portfolio trackers<br/>Crypto accounting & tax tools<br/>DeFi dashboards<br/>Activity feeds |
| **Streaming API** | Subscribe to real-time blockchain events with sub-second latency using GraphQL over WebSockets | OHLCV tokens & pairs<br/>New & updated DEX pairs<br/>Wallet activity<br/>Token balances | Trading dashboards<br/>Sniper bots<br/>Gaming<br/>Agentic workflows |

The **[GoldRush TypeScript SDK](https://www.npmjs.com/package/@covalenthq/client-sdk)** is the fastest way to integrate the GoldRush APIs. Install with:

```bash
npm install @covalenthq/client-sdk
```

Learn more about GoldRush's integration with Avalanche C-Chain [here](https://goldrush.dev/docs/chains/avalanche-c-chain?utm_source=avalanche-c-chain&utm_medium=partner-docs). # Gyve (/integrations/gyve) --- title: Gyve category: Indexers available: ["C-Chain", "All Avalanche L1s"] description: "Gyve provides blockchain data indexing services enabling efficient access to on-chain data for Web3 applications." logo: /images/gyve.png developer: Gyve website: https://www.gyve.io/ --- ## Overview > **Warning:** The Gyve website (gyve.io) no longer points to a blockchain service. The domain now redirects to an unrelated company. This integration may no longer be active. Gyve was listed as a blockchain data indexing service that provides access to on-chain data for Web3 applications. ## Features - **Data Indexing**: Full blockchain data indexing - **API Access**: Clean APIs for querying indexed data - **Event Processing**: Real-time event indexing and processing - **Custom Schemas**: Support for custom data schemas - **Historical Data**: Access to historical blockchain data - **Multi-Network**: Support for multiple blockchain networks ## Getting Started 1. **Sign Up**: Create account at [Gyve](https://www.gyve.io/) 2. **Configure Index**: Set up indexing for your data needs 3. **API Integration**: Integrate Gyve APIs into your application 4. **Query Data**: Access indexed blockchain data 5. 
**Monitor**: Track indexing status and performance ## Use Cases - **Application Data**: Power applications with indexed blockchain data - **Event Tracking**: Track and process on-chain events - **Historical Analysis**: Analyze historical blockchain activity - **Data Services**: Build data services on indexed information # Halliday (/integrations/halliday) --- title: Halliday category: Fiat On-Ramp available: ["C-Chain"] description: The commerce automation platform for modular chains. Empower your users with smart accounts and fiat onramps and offramps. logo: /images/halliday.png developer: Halliday website: https://halliday.xyz/ documentation: https://docs.halliday.xyz/ --- ## Overview Halliday is a commerce automation platform for modular chains, providing account abstraction with smart accounts and fiat onramps/offramps. It lets users manage, spend, and transact digital assets, connecting traditional finance with blockchain. ## Features - **Account Abstraction with Smart Accounts**: Users manage digital assets through smart accounts without dealing with raw blockchain interactions. - **Fiat Onramps and Offramps**: Convert fiat currency to digital assets and back, simplifying the transition between traditional finance and blockchain. - **User-Friendly Interface**: Intuitive platform for managing, spending, and transacting with digital assets. - **API Integration**: Incorporate Halliday’s smart accounts and fiat onramp/offramp features into your platform via APIs. - **Security and Compliance**: Secure transactions and compliance with industry standards for user asset and data protection. ## Getting Started 1. **Visit the Halliday Website**: Explore the [Halliday website](https://halliday.xyz/) to learn about the platform. 2. **Access the Documentation**: See the [Halliday Documentation](https://docs.halliday.xyz/) for integration instructions. 3. **Implement Smart Accounts**: Use Halliday's account abstraction features for user-facing asset management. 4. 
**Set Up Fiat Onramps/Offramps**: Enable conversion between fiat and digital assets. ## Documentation For more details, refer to the [Halliday Documentation](https://docs.halliday.xyz/). ## Use Cases - **E-commerce Platforms**: Streamline payment processes with smart accounts and fiat onramps/offramps. - **Video Games**: Implement in-game purchases and digital asset management using Halliday’s platform. - **Financial Services**: Provide clients with secure and compliant solutions for managing and converting digital assets. # Hardhat (/integrations/hardhat) --- title: Hardhat category: Developer Tooling available: ["C-Chain", "All Avalanche L1s"] description: Hardhat is a development environment for building, testing, and deploying smart contracts on Avalanche and other Ethereum-compatible networks. logo: /images/hardhat.png developer: Nomic Labs website: https://hardhat.org/ documentation: https://hardhat.org/getting-started/ --- ## Overview Hardhat is a development environment for building, testing, and deploying smart contracts. Developed by Nomic Labs, it supports Ethereum and Ethereum-compatible networks like Avalanche. It includes tools for local blockchain testing, debugging, and automated deployment. ## Features - **Local Blockchain Network**: Built-in local Ethereum network for rapid testing before deploying to public networks. - **Testing Framework**: Unit tests and integration tests with advanced debugging features. - **Extensible Plugins**: Plugin system for custom features and integrations. - **Avalanche Integration**: Configure Hardhat to deploy and interact with Avalanche smart contracts. - **Automated Deployment**: Tools for managing and automating deployment scripts. ## Getting Started 1. **Install Hardhat**: Visit the [Hardhat website](https://hardhat.org/) and install via npm with `npm install --save-dev hardhat`. 2. 
**Set Up a New Project**: Initialize a new Hardhat project by running `npx hardhat`, then follow the prompts to configure your project. 3. **Configure Avalanche Network**: Update your Hardhat configuration file (`hardhat.config.js`) to include settings for the Avalanche network. 4. **Write and Test Contracts**: Develop and test your smart contracts using Hardhat’s built-in testing framework. 5. **Deploy Contracts**: Use Hardhat’s deployment tools to deploy your contracts to Avalanche or other Ethereum-compatible networks. ## Documentation For more details, visit the [Hardhat Documentation](https://hardhat.org/getting-started/). ## Use Cases - **Smart Contract Development**: Develop, test, and deploy smart contracts. - **Decentralized Finance (DeFi)**: Build and deploy DeFi applications on Avalanche or other Ethereum-compatible networks. - **NFT Projects**: Create and manage NFT smart contracts with Hardhat’s tools. - **Blockchain Prototypes**: Rapidly prototype and test blockchain applications using Hardhat’s local network. # Hypha (/integrations/hypha) --- title: Hypha category: Validator Marketplace available: ["All Avalanche L1s"] description: "Hypha (formerly GoGoPool) enables permissionless Layer 1 validation and liquid staking with decentralized validator infrastructure." logo: /images/gogopool.png developer: Multisig Labs website: https://www.gogopool.com/ documentation: https://docs.gogopool.com/ --- ## Overview Hypha (formerly GoGoPool) is a decentralized validator marketplace and liquid staking protocol on Avalanche. It enables permissionless Layer 1 validation and provides infrastructure for validators, stakers, and L1 builders to participate in the Avalanche ecosystem with lower capital and technical barriers. ## Features - **Minipools & Liquid Staking**: Validate Avalanche with reduced requirements and earn yields through the ggAVAX liquid staking token. 
- **Layer 1 Launcher**: Deploy your own Layer 1 blockchain and attract validators through custom marketplaces. - **Layer 1 Marketplace**: Discover and engage with decentralized Layer 1s, participate in validation, staking, or delegation. - **ggAVAX Token**: Liquid staking token that allows users to earn yield while maintaining liquidity. - **Permissionless Validation**: Democratizes validator participation by lowering the technical and capital barriers. - **Validator Infrastructure**: Decentralized network of validators for security and reliability. - **DeFi Integrations**: ggAVAX is integrated with major DeFi protocols including DeltaPrime, Balancer, Wombat Exchange, and Uniswap. ## Documentation For more details, visit the [Hypha Documentation](https://docs.hypha.sh/). ## Use Cases - **Liquid Staking**: Stake AVAX and receive ggAVAX to maintain liquidity while earning staking rewards. - **Validator Operations**: Run validators with reduced capital requirements through the minipool system. - **Layer 1 Development**: Launch and manage custom Layer 1 blockchains with built-in validator infrastructure. - **Validator Marketplace**: Access a decentralized marketplace of validators for new Layer 1s. - **DeFi Yield**: Utilize ggAVAX across multiple DeFi protocols to maximize yield opportunities. ## Security Hypha prioritizes security with multiple independent audits: - Code4rena audit (December 2022 - January 2023) - Zellic audit (November 2022) - Kudelski Security audit (October 2022) # idOS (/integrations/idos) --- title: idOS category: KYC / Identity Verification available: ["C-Chain"] description: "idOS provides decentralized identity infrastructure enabling privacy-preserving credential management for Web3 applications." logo: /images/idos.jpg developer: idOS website: https://idos.network/ documentation: https://docs.idos.network/ --- ## Overview idOS is a decentralized identity operating system for privacy-preserving credential management in Web3. 
Users store and manage identity credentials while maintaining full control over their personal data and how it's shared with applications. ## Features - **Decentralized Infrastructure**: Identity data stored in a decentralized manner across the network. - **User-Controlled Data**: Users maintain full sovereignty over their identity information. - **Privacy-Preserving**: Share verification status without exposing underlying personal data. - **Credential Storage**: Secure storage for various types of identity credentials and verifications. - **Interoperable Standards**: Built on open standards for cross-platform compatibility. - **Access Control**: Granular control over which applications can access specific credentials. - **Multi-Chain Support**: Compatible with various blockchain networks including Avalanche. ## Documentation For more information, visit the [idOS Documentation](https://docs.idos.network/). ## Use Cases - **Credential Management**: Secure storage and management of identity credentials. - **Privacy-Preserving Verification**: Share verification status while maintaining privacy. - **Cross-Platform Identity**: Enable identity portability across Web3 applications. - **Self-Sovereign Identity**: Give users full control over their identity data. # IndexingCo (/integrations/indexingco) --- title: IndexingCo category: Indexers available: ["C-Chain", "All Avalanche L1s"] description: "IndexingCo provides managed blockchain indexing services enabling developers to access on-chain data through simple APIs." logo: /images/indexingco.png developer: IndexingCo website: https://www.indexing.co/ documentation: https://docs.indexing.co/ --- ## Overview IndexingCo is a managed blockchain indexing service that handles on-chain data indexing so developers can focus on building applications. It exposes blockchain data through straightforward APIs. 
## Features - **Managed Service**: Fully managed indexing infrastructure - **Simple APIs**: Easy-to-use APIs for data access - **Real-Time Updates**: Live data updates as blocks are produced - **Historical Data**: Full historical blockchain data access - **Custom Indexing**: Support for custom indexing requirements - **Reliability**: High-availability indexing infrastructure ## Getting Started 1. **Create Account**: Sign up at [IndexingCo](https://www.indexing.co/) 2. **Select Network**: Choose Avalanche network to index 3. **Configure**: Set up indexing parameters 4. **API Access**: Get API credentials for data access 5. **Integrate**: Use APIs in your application ## Documentation For API documentation, visit [IndexingCo Docs](https://docs.indexing.co/). ## Use Cases - **Data Access**: Simple access to blockchain data - **Application Backend**: Power application backends with indexed data - **Analytics**: Build analytics on blockchain activity - **Monitoring**: Monitor on-chain activity # Infura (/integrations/infura) --- title: Infura category: RPC Endpoints available: ["C-Chain"] description: "Infura provides reliable, scalable blockchain infrastructure and RPC services for Web3 applications, powered by Consensys." logo: /images/infura.jpg developer: Consensys website: https://infura.io/ documentation: https://docs.infura.io/ --- ## Overview Infura is a blockchain infrastructure provider offering RPC services and APIs for Web3 applications. Part of Consensys, Infura provides enterprise-grade node infrastructure for accessing multiple blockchain networks including Avalanche, so developers can build and scale dApps without running their own nodes. ## Features - **Enterprise Infrastructure**: Highly reliable, scalable node infrastructure for mission-critical applications. - **Multi-Chain Support**: Access to Ethereum, Layer 2 networks, IPFS, and other blockchain networks. - **High Availability**: Industry-leading uptime with redundant infrastructure. 
- **WebSocket Support**: Real-time blockchain data through WebSocket connections. - **Archive Nodes**: Complete historical blockchain data access. - **Dashboard Analytics**: Monitor API usage, performance, and application metrics. - **Rate Limiting**: Flexible rate limits for different application needs. - **Global Infrastructure**: Distributed nodes across multiple regions for low latency. ## Documentation For more details, visit the [Infura Documentation](https://docs.infura.io/). ## Use Cases - **DApp Development**: Reliable infrastructure for decentralized applications of any scale. - **Wallet Services**: Backend infrastructure for cryptocurrency wallets. - **NFT Platforms**: IPFS integration and blockchain access for NFT applications. - **DeFi Applications**: High-performance RPC for DeFi protocols and platforms. - **Enterprise Solutions**: Enterprise-grade infrastructure for business applications. # Intain (/integrations/intain) --- title: Intain category: Tokenization Platforms available: ["C-Chain", "All Avalanche L1s"] description: "Intain provides blockchain infrastructure for structured finance, enabling tokenization and management of asset-backed securities." logo: /images/intain.png developer: Intain website: https://www.intainft.com/ documentation: https://www.intainft.com/resource --- ## Overview Intain is a blockchain-based structured finance platform for tokenizing and managing asset-backed securities. Its proprietary IntainMARKETS technology provides transparency and automation for structured finance, letting issuers and investors participate in tokenized credit markets. 
## Features - **Structured Finance**: Specialized infrastructure for asset-backed securities - **IntainMARKETS**: Proprietary marketplace for tokenized credit products - **Transparency**: Real-time visibility into underlying asset performance - **Automation**: Automated payment waterfalls and reporting - **Compliance**: Regulatory compliance for securities offerings - **Secondary Trading**: Support for secondary market liquidity ## Getting Started 1. **Contact Intain**: Reach out through [Intain](https://www.intainft.com/) 2. **Product Design**: Work with Intain to structure your offering 3. **Platform Onboarding**: Complete platform setup and compliance 4. **Token Issuance**: Issue tokenized asset-backed securities 5. **Ongoing Management**: Manage securities through the platform ## Documentation For more information, visit the [Intain website](https://www.intainft.com/resource). ## Use Cases - **ABS Tokenization**: Tokenize asset-backed securities - **CLO Markets**: Create and manage collateralized loan obligations - **Credit Products**: Issue tokenized credit products - **Structured Finance**: Modern infrastructure for structured finance # Janus Henderson (/integrations/janus-henderson) --- title: Janus Henderson category: Assets available: ["C-Chain"] description: "Janus Henderson is a global asset manager offering tokenized funds including JAAA and JTRSY, integrating blockchain technology into traditional investment management." logo: /images/janus-henderson.png developer: Janus Henderson website: https://www.janushenderson.com/ documentation: https://www.janushenderson.com/ --- ## Overview Janus Henderson is a global asset management firm with a track record in active investment management. Through tokenized funds like JAAA and JTRSY, Janus Henderson offers blockchain-native investment products that combine its active management expertise with the efficiency and transparency of distributed ledger technology. 
# JAW (/integrations/jaw) --- title: JAW category: Wallets and Account Abstraction available: ["C-Chain"] description: "Identity-centric smart account infrastructure with passkey authentication, gasless transactions, and programmable permissions for Avalanche applications." logo: /images/jaw.png developer: JAW website: https://jaw.id/ documentation: https://docs.jaw.id/ --- ## Overview [JAW](https://jaw.id/) provides smart account infrastructure for Avalanche applications. Give users passkey-secured accounts with ENS identity, sponsor their gas, and enable programmable permissions for subscriptions and delegated agent execution. ## Features - **Passkey Authentication**: Phishing-resistant, synced across devices via iCloud/Google (works as transport layer) - **ERC-4337 Smart Accounts**: Gasless, batchable, and programmable - **EOA Upgrade**: Upgrade existing externally owned accounts to smart accounts without migrating assets - **EIP-1193 Compatible**: Drop-in replacement for MetaMask or any injected wallet - **Delegated Permissions**: Let contracts or agents perform scoped actions (ERC-7715) - **ENS Subname Issuance**: Assign human-readable identities on onboarding - **Headless / Server-Side Support**: AI agent wallets or backend-triggered transactions ## Getting Started Get an API key from the [JAW Dashboard](https://dashboard.jaw.id) and add your domain to the allowed list. Use `localhost` for local development, your production domain for production. JAW offers two SDKs: - **`@jaw.id/wagmi`**: React connector with wagmi hooks. Use this for React and Next.js apps. - **`@jaw.id/core`**: Framework-agnostic EIP-1193 provider. Use this for vanilla JS, server-side, or headless environments. ### wagmi (React / Next.js) #### Installation ```bash npm install @jaw.id/wagmi wagmi @tanstack/react-query ``` #### Configuration Create your wagmi config with the JAW connector, then wrap your app with `WagmiProvider` and `QueryClientProvider`. Both are required. 
```typescript
// config.ts
import { createConfig, http } from 'wagmi';
import { avalanche } from 'wagmi/chains';
import { jaw } from '@jaw.id/wagmi';

export const config = createConfig({
  chains: [avalanche],
  connectors: [
    jaw({
      apiKey: process.env.NEXT_PUBLIC_JAW_API_KEY!,
      appName: 'My Avalanche App',
      defaultChainId: avalanche.id, // 43114
    }),
  ],
  transports: {
    [avalanche.id]: http(),
  },
});
```

```tsx
// App.tsx
import { WagmiProvider } from 'wagmi';
import { QueryClient, QueryClientProvider } from '@tanstack/react-query';
import { config } from './config';

const queryClient = new QueryClient();

export function App({ children }: { children: React.ReactNode }) {
  return (
    <WagmiProvider config={config}>
      <QueryClientProvider client={queryClient}>
        {children}
      </QueryClientProvider>
    </WagmiProvider>
  );
}
```

For testnet (Avalanche Fuji), enable `showTestnets` in the connector:

```typescript
jaw({
  apiKey: process.env.NEXT_PUBLIC_JAW_API_KEY!,
  defaultChainId: 43113, // Avalanche Fuji Testnet
  preference: { showTestnets: true },
})
```

#### Connect a Wallet

Use `useConnect` from `@jaw.id/wagmi` (not from `wagmi`) to support JAW-specific capabilities like SIWE and subname issuance during connection.

```tsx
import { useConnect } from '@jaw.id/wagmi';
import { useAccount } from 'wagmi';

export function ConnectButton() {
  const { connect, connectors } = useConnect();
  const { address, isConnected } = useAccount();

  if (isConnected) return <div>Connected: {address}</div>;

  return (
    <button onClick={() => connect({ connector: connectors[0] })}>
      Connect
    </button>
  );
}
```

In CrossPlatform mode, clicking the button opens a `keys.jaw.id` popup where the user registers or authenticates with their passkey. In AppSpecific mode, the `uiHandler` renders the UI inside your app instead.

#### Send a Transaction

```tsx
import { useSendCalls } from 'wagmi';
import { parseEther } from 'viem';

export function SendAVAX() {
  const { sendCalls } = useSendCalls();

  return (
    <button
      onClick={() =>
        sendCalls({
          calls: [
            {
              to: '0x...', // recipient address
              value: parseEther('0.01'),
            },
          ],
        })
      }
    >
      Send 0.01 AVAX
    </button>
  );
}
```

#### Gasless Transactions

JAW supports two approaches to removing gas friction for users:

- **Sponsored (gasless)**: A paymaster covers gas fees entirely, so users pay nothing. Requires a paymaster URL in your config.
- **Stablecoin gas payments**: Users pay gas fees in stablecoins (e.g. USDC) instead of AVAX. This works natively with JAW; no additional configuration is needed.

```typescript
// config.ts
import { createConfig, http } from 'wagmi';
import { avalanche } from 'wagmi/chains';
import { jaw } from '@jaw.id/wagmi';

export const config = createConfig({
  chains: [avalanche],
  connectors: [
    jaw({
      apiKey: process.env.NEXT_PUBLIC_JAW_API_KEY!,
      appName: 'My Avalanche App',
      defaultChainId: avalanche.id,
      paymasters: {
        [avalanche.id]: { url: 'https://your-paymaster-url/rpc' },
      },
    }),
  ],
  transports: {
    [avalanche.id]: http(),
  },
});
```

Once configured, transactions via `useSendCalls` and `useWriteContract` are automatically sponsored. No changes to your transaction code are needed. See the [Gas Sponsoring guide](https://docs.jaw.id/guides/gas-sponsoring) for paymaster setup and stablecoin gas configuration.

#### Delegated Permissions

Permissions (ERC-7715) let you grant a spender (a backend wallet, contract, or AI agent) the ability to perform scoped actions on behalf of the user. Permissions define exactly which contracts can be called, how much can be spent, and for how long. Use them for subscription payments, recurring charges, and autonomous agent wallets. 
```tsx
import { useGrantPermissions } from '@jaw.id/wagmi';
import { parseUnits } from 'viem';

const AVAX_NATIVE = '0xEeeeeEeeeEeEeeEeEeEeeEEEeeeeEeeeeeeeEEeE';

function GrantPermission() {
  const { mutate: grant, isPending } = useGrantPermissions();

  return (
    <button
      disabled={isPending}
      onClick={() =>
        grant({
          // The request shape below follows ERC-7715 conventions; check the
          // JAW permissions docs for the exact schema expected by the SDK.
          expiry: Math.floor(Date.now() / 1000) + 30 * 24 * 60 * 60, // 30 days
          permissions: [
            {
              type: 'native-token-transfer',
              data: {
                address: AVAX_NATIVE,
                allowance: parseUnits('1', 18), // up to 1 AVAX
              },
            },
          ],
        })
      }
    >
      Grant Permission
    </button>
  );
}
```

Use `usePermissions` to query active permissions and `useRevokePermissions` to revoke them, both from `@jaw.id/wagmi`.

### core (Framework-Agnostic)

JAW also provides `@jaw.id/core`, a framework-agnostic EIP-1193 provider for vanilla JS, server-side, or headless environments. Use it when you don't need React or wagmi. See the [Core SDK documentation](https://docs.jaw.id/core) for setup and usage.

### Headless / Server-Side

For AI agent wallets and backend-triggered transactions, JAW supports headless smart accounts via `Account.fromLocalAccount()`. Wrap any viem `LocalAccount` (private key, Turnkey, Fireblocks, or other KMS) into a JAW smart account without any browser or passkey. See the [Account API documentation](https://docs.jaw.id/account) for full details. To use JAW as a smart account layer on top of an existing KMS provider, see the [KMS examples](https://github.com/JustaName-id/jaw-examples/tree/main/examples/kms).

## Authentication Modes

- **CrossPlatform** (default): Passkey operations run on `keys.jaw.id` via a popup, giving users a portable wallet that works across any JAW-powered app.
- **AppSpecific**: Passkey operations run inside your app via a `uiHandler`, giving you full UI control at the cost of wallet portability.

See the [JAW configuration docs](https://docs.jaw.id/configuration) for setup details and UIHandler implementation. 
## Use Cases - **User Onboarding**: Passkey-based sign-in with no browser extension or seed phrase needed - **Gasless DeFi**: Sponsor gas so users can interact with DeFi protocols without holding AVAX - **Subscription Payments**: Recurring charges via delegated permissions (ERC-7715) - **AI Agent Wallets**: Headless smart accounts for autonomous on-chain agents - **Enterprise Apps**: Backend-triggered transactions with server-side `Account.fromLocalAccount()` ## Documentation For full integration guides, API reference, and examples: - [Quickstart Guide](https://docs.jaw.id/guides/quickstart) - [Account API](https://docs.jaw.id/account) - [Gas Sponsoring](https://docs.jaw.id/guides/gas-sponsoring) - [Sign-In With Ethereum](https://docs.jaw.id/guides/sign-in-with-ethereum) - [Subscription Payments](https://docs.jaw.id/guides/subscription-payments) - [Onchain Identity](https://docs.jaw.id/guides/onchain-identity) - [Skills](https://docs.jaw.id/skills) - [Configuration Reference](https://docs.jaw.id/configuration) - [Source Code](https://github.com/JustaName-id/jaw-mono) - [Example Integrations](https://github.com/JustaName-id/jaw-examples) # JPYC (/integrations/jpyc) --- title: JPYC category: Assets available: ["C-Chain"] description: "JPYC provides a Japanese Yen stablecoin, offering a regulated tokenized representation of the Japanese Yen on blockchain." logo: /images/jpyc.png developer: JPYC website: https://corporate.jpyc.co.jp/en documentation: https://corporate.jpyc.co.jp/en --- ## Overview JPYC is a Japanese Yen-backed stablecoin for users and businesses in Japan and globally. With regulatory compliance and transparent reserve management, JPYC supports payments, remittances, and digital asset transactions pegged to the Japanese Yen. 
# Jumio (/integrations/jumio) --- title: Jumio category: KYC / Identity Verification available: ["C-Chain"] description: Jumio provides AI-powered identity verification and KYC/AML compliance solutions that integrate with txAllowlist precompiles for regulatory-compliant blockchain applications. logo: /images/jumio.png developer: Jumio website: https://www.jumio.com/ documentation: https://www.jumio.com/resources/ --- ## Overview Jumio is an identity verification and KYC/AML compliance platform powered by AI and machine learning. It combines ID verification, biometric authentication, and AML screening so that only verified users can access restricted blockchain applications. For projects using txAllowlist precompiles, Jumio handles the identity layer while keeping the verification flow straightforward for end users. ## Features - **AI-Powered Identity Verification**: Verifies government-issued IDs with OCR and authenticity checks across 5,000+ ID types from 200+ countries and territories. - **Biometric Authentication**: Confirms the person is physically present and matches their ID using liveness detection and facial recognition. - **Deepfake Detection**: Detects and blocks synthetic identity fraud attempts. - **AML Screening**: Automatically screens users against global watchlists for sanctions, politically exposed persons (PEPs), and adverse media. - **Risk Signals**: Provides additional verification through address checks, phone verification, email validation, and government database checks. - **Customizable Risk Scoring**: Determine verification requirements based on transaction risk levels with configurable workflows. - **Self-Service Rules Editor**: Adjust verification rules in real-time to respond to emerging fraud patterns. - **Omnichannel Support**: Verify users across web, mobile, and API integrations with consistent security standards. - **Continuous Monitoring**: Ongoing screening of user profiles against watchlists to maintain compliance. 
## Getting Started To integrate Jumio with your Avalanche-based application using txAllowlist precompiles, follow these steps: 1. **Contact Jumio**: Reach out to Jumio to discuss your specific requirements and establish an account. 2. **Integration Planning**: Choose the appropriate integration method based on your application's needs (API, SDK, or iframe). 3. **Configuration**: Set up your verification workflows, risk rules, and compliance requirements in the Jumio dashboard. 4. **Implementation**: Use Jumio's [resources](https://www.jumio.com/resources/) to integrate their verification services into your application. 5. **Testing**: Test the integration in Jumio's sandbox environment to ensure proper functionality. 6. **Allowlist Automation**: Connect verification results to your txAllowlist management, automatically adding verified users and removing those who fail checks. ## Integration with txAllowlist Jumio's identity verification platform works with Avalanche's txAllowlist precompile in several ways: 1. **Verified Transaction Authorization**: Only allow transactions from users who have completed Jumio's identity verification. 2. **Risk-Based Transaction Permissions**: Apply different verification levels based on transaction types or amounts, controlling access through the allowlist. 3. **Automated Compliance Management**: Automatically update the allowlist based on ongoing monitoring of verification status and AML screening results. 4. **API-Driven Architecture**: Jumio's API allows programmatic management of the txAllowlist based on verification outcomes. 5. **Fraud Prevention**: Reduce the risk of malicious actors accessing your application by layering identity verification checks. ## Documentation Jumio provides [resources](https://www.jumio.com/resources/) including whitepapers, case studies, implementation guides, and best practices for identity verification in blockchain applications. 
## Use Cases Jumio's identity verification fits well with these Avalanche-based applications: - **Regulatory-Compliant DeFi**: Create lending, borrowing, or trading platforms that meet KYC/AML requirements. - **Private Blockchain Networks**: Ensure that only verified participants can access permissioned networks. - **Tokenized Securities**: Verify investor accreditation and identity for compliant security token offerings. - **High-Value Asset Trading**: Implement enhanced verification for platforms dealing with high-value digital assets. - **Cross-Border Payment Solutions**: Address compliance requirements for international money transfers on blockchain. - **Enterprise Applications**: Provide secure identity verification for B2B blockchain solutions. # Juno (/integrations/juno) --- title: Juno category: Assets available: ["C-Chain"] description: "Juno provides stablecoins including MXNB (Mexican Peso stablecoin), offering tokenized fiat currencies for Mexican and Latin American markets." logo: /images/juno.png developer: Juno website: https://buildwithjuno.com/ documentation: https://docs.bitso.com/juno/ --- ## Overview Juno (a Bitso subsidiary) is a stablecoin issuer focused on Latin American markets, providing MXNB (Mexican Peso stablecoin) backed 1:1 by fiat reserves. Juno enables businesses and developers to integrate stablecoin payments, cross-border transactions, and fiat on/off-ramps for the Mexican and Latin American markets. # Kaleido (/integrations/kaleido) --- title: Kaleido category: Blockchain as a Service available: ["C-Chain", "All Avalanche L1s"] description: Kaleido is an enterprise blockchain platform offering instant Avalanche node deployment, digital asset management, and 500+ pre-built Web3 services with enterprise-grade security. 
logo: /images/kaleido.png developer: Kaleido website: https://www.kaleido.io/ documentation: https://docs.kaleido.io/ --- ## Overview Kaleido is an enterprise blockchain platform with support for [Avalanche nodes and subnets](https://www.kaleido.io/avalanche). It provides 500+ pluggable Web3 tools for building dApps with fast finality and high throughput. Rated #1 for Asset Tokenization and Blockchain-as-a-Service on G2, Kaleido is used by organizations including Swift, Unionbank of the Philippines, and the Reserve Bank of Australia. ## Features - **Instant Avalanche Deployment**: - Launch connections to Avalanche Mainnet in minutes - Support for Full and Archive nodes - Dedicated or elastic node configurations - Fuji testnet support for development and testing - **Enterprise-Grade Security**: - SOC 2 Type 2 certified - ISO 27001, 27017, and 27018 certified - HSM-based key management - Built-in high availability and disaster recovery - **Complete Web3 Stack**: - 400+ blockchain APIs - Smart contract management and API generation - Token factory for any asset type - Event streams for real-time data - Block explorer and transaction monitoring - **Developer Tools**: - Click-button tokens, wallets, and storage - REST API Gateway - Document exchange and App2App messaging - Identity and access management - **Flexible Infrastructure**: - Multi-cloud deployment (AWS, Azure, On-Premise) - No crypto management required - SLAs and 24/7 support - Multi-party control plane ## Getting Started 1. **Create an Account**: - Visit the [Kaleido platform](https://www.kaleido.io/) and sign up - Choose your deployment configuration - Select Avalanche as your blockchain protocol 2. **Launch Your Network**: - Stand up Avalanche nodes in minutes - Configure Full or Archive nodes based on your needs - Connect to Mainnet or Fuji testnet 3. 
**Build with Pre-Built Services**: - Deploy smart contracts and convert them to APIs - Use the token factory to mint and manage digital assets - Set up event streams to connect on-chain and off-chain systems 4. **Integrate and Scale**: - Use block explorers to monitor transactions and gas usage - Implement identity management and secure key storage - Scale with elastic node configurations as you grow ## Documentation For more details, visit the [Kaleido Documentation](https://docs.kaleido.io/). ## Use Cases - **Digital Asset Platforms**: Launch institutional-grade asset platforms on public or private chains with enterprise tokenization capabilities - **DeFi Applications**: Build high-throughput applications with rapid finality and low transaction fees - **Tokenization Projects**: Tokenize real-world assets to power digital strategies and monetize globally - **NFT Platforms**: Use Avalanche's speed and security to manage NFTs at scale - **Web3 Games**: Build new gaming experiences with in-game assets and revenue generation - **CBDCs**: Develop central bank digital currency solutions with secure systems for the global economy - **Supply Chain**: Unlock new efficiencies and traceability with blockchain-based supply chain solutions - **Capital Markets**: Build transformative solutions for asset management, FMIs, and securities ## Key Differentiators - **1 billion+ blocks** mined on Kaleido chains - **10,000+ developers** across 30+ countries - **1,000+ digital transformation** projects completed - Ranked **#1 Asset Tokenization Platform** on G2 - Ranked **#1 Blockchain-as-a-Service Platform** on G2 - NSF **Blockchain Interoperability Grant Winner** # Keyring (/integrations/keyring) --- title: Keyring category: KYC / Identity Verification available: ["C-Chain"] description: Keyring provides privacy-preserving identity verification using zero-knowledge technology, enabling compliance for blockchain applications implementing txAllowlist precompiles. 
logo: /images/keyring.png developer: Keyring Network website: https://www.keyring.network/ documentation: https://docs.keyring.network/docs --- ## Overview Keyring is an identity verification platform that uses zero-knowledge (ZK) technology to handle KYC for blockchain applications without exposing personal data. Through Keyring Pro and Keyring Connect, users verify their identity once and reuse credentials across multiple platforms. For projects using txAllowlist precompiles, Keyring lets you enforce compliance while preserving user privacy -- verified users can transact on permissioned networks without sharing more data than necessary. ## Features - **Zero-Knowledge Verification**: Utilizes zkTLS technology to verify user identity without exposing personal data, allowing proofs to be submitted on-chain. - **Privacy-First Approach**: Users maintain control of their personal information while still satisfying verification requirements. - **Instant ZK-KYC**: Enables users to extract information from established platforms (like Binance, Revolut, or Coinbase) with proof of authenticity in minutes. - **Multi-Chain Support**: Available on multiple networks including Ethereum, Base, Arbitrum, Optimism, and Avalanche. - **Customizable Compliance Policies**: Define specific rules and verification requirements for your platform's risk profile. - **On-Chain Credential Issuance**: Creates verifiable credentials that can be queried by smart contracts for transaction approval. - **Quick Developer Integration**: Simple implementation that can be completed in under 3 hours. - **Automates Identity Verification**: Replaces manual KYC workflows with automated credential verification. ## Getting Started To integrate Keyring into your Avalanche-based application with txAllowlist precompiles, follow these steps: 1. **Contact Keyring**: Reach out to Keyring through their [contact page](https://www.keyring.network/contact) to discuss your specific compliance requirements. 2. 
**Define Compliance Policy**: Work with Keyring to establish the rules and data sources you'll accept for verification. 3. **SDK Integration**: Add a button or link to your frontend using Keyring's SDK to initiate the verification process. 4. **Smart Contract Integration**: Implement the provided modifiers in your smart contracts to check credential validity before allowing transactions. 5. **Testing**: Thoroughly test the integration to ensure proper credential verification and allowlist management. 6. **Launch**: Deploy your enhanced application with integrated txAllowlist and Keyring verification. ## Integration with txAllowlist Keyring works well with Avalanche's txAllowlist precompile: 1. **Privacy-Preserving Compliance**: Users prove they've passed KYC without revealing personal details, maintaining privacy while enforcing transaction restrictions. 2. **On-Chain Credential Verification**: Smart contracts verify credentials before approving transactions, acting as the allowlist enforcement layer. 3. **Dynamic Access Control**: Transaction permissions are automatically granted or revoked based on credential status -- no manual allowlist management needed. 4. **Verify-Once UX**: Users verify once and interact with multiple applications using the same credentials. 5. **Risk-Based Approach**: Verification can be calibrated to the risk level of different transaction types or user profiles. ## Documentation Keyring's developer docs are available at [docs.keyring.network/docs](https://docs.keyring.network/docs). ## Use Cases Keyring fits well with these Avalanche-based applications: - **Permissioned DeFi Pools**: Create lending pools or trading platforms that only allow KYC'd users from non-sanctioned countries. - **Compliant DEXs**: Enable decentralized exchanges to implement regulatory-compliant trading without compromising on decentralization principles. - **Cross-Chain Ecosystems**: Maintain consistent identity verification across multiple chains or subnets. 
- **Token Offerings**: Ensure participants in token sales meet jurisdictional requirements. - **Enterprise Applications**: Build permissioned applications that require verified identity while maintaining privacy. - **Travel Rule Compliance**: Implement solutions compatible with the FATF Travel Rule for VASPs without excessive data sharing. # Keystone (/integrations/keystone) --- title: Keystone category: Hardware Wallets available: ["C-Chain"] description: Integrate Keystone hardware wallet support into your applications using Keystone's SDK. logo: /images/keystone.png developer: Keystone website: https://keyst.one/ documentation: https://docs.keyst.one/ --- ## Overview Keystone provides SDK support for integrating air-gapped hardware wallet functionality into applications built on Avalanche. With recent grant support, Keystone is expanding its capabilities to include X-Chain and P-Chain support. ## SDK Components - **[Firmware](https://github.com/KeystoneHQ/keystone3-firmware)**: Open-source firmware for secure transaction signing - **Integration SDK**: Coming soon with X-Chain and P-Chain support - **QR Code Protocol**: Specifications for air-gapped transaction signing ## Integration Features - **Air-Gapped Security**: QR code-based transaction signing - **Multi-Chain Support**: Current C-Chain with upcoming X-Chain and P-Chain support - **Cross-Platform**: Support for both desktop and mobile applications - **Hardware Security**: Secure element protection for private keys ## Coming Soon Integration documentation and SDKs for Avalanche chains will be available in the coming months, enabling developers to: - Initialize QR code scanning for air-gapped communication - Generate and manage wallet addresses - Implement secure transaction signing - Handle key management within secure hardware ## Use Cases - **Wallet Applications**: Add air-gapped hardware wallet support - **Mobile Applications**: Implement QR-based transaction signing - **DeFi Applications**: Enable 
secure hardware wallet transactions - **Cross-Chain Applications**: Support multi-chain operations ## Documentation Full SDK documentation and integration guides will be released alongside the upcoming X-Chain and P-Chain support. # KKR (/integrations/kkr) --- title: KKR category: Assets available: ["C-Chain"] description: "KKR offers tokenized investment products including the Health Care Growth Fund, bringing private equity and alternative investments to blockchain." logo: /images/kkr.png developer: KKR website: https://www.kkr.com/ documentation: https://www.kkr.com/ --- ## Overview KKR is a global investment firm managing alternative asset classes including private equity, credit, and real assets. Through tokenized products like the Health Care Growth Fund, KKR offers qualified investors access to institutional-grade private market investments on blockchain, combining its investment expertise with the efficiency and transparency of distributed ledger technology. # L1Beat (/integrations/l1beat) --- title: L1Beat category: Analytics & Data available: ["C-Chain", "All Avalanche L1s"] description: "L1Beat provides analytics and comparison data for Avalanche L1s, tracking network performance, adoption metrics, and ecosystem growth." logo: /images/l1beat.svg developer: L1Beat website: https://l1beat.io/ documentation: https://l1beat.io/ --- ## Overview L1Beat is an analytics platform for tracking and comparing Avalanche L1 networks. It provides data on network performance, adoption metrics, and ecosystem development across Avalanche L1s. 
## Features - **L1 Network Analytics**: - Real-time network metrics - Performance benchmarking - Transaction volume tracking - Network activity monitoring - **Comparative Analysis**: - Cross-L1 comparisons - Performance rankings - Adoption metrics - Growth trends - **Data Visualization**: - Interactive dashboards - Historical charts - Network comparisons - Trend analysis - **Ecosystem Insights**: - Developer activity - Protocol deployment - User adoption - Network statistics ## Getting Started 1. **Access Platform**: - Visit [L1Beat](https://l1beat.io/) - Explore L1 network dashboards - View comparative metrics 2. **Analyze Networks**: - Compare L1 performance - Track network growth - Monitor activity metrics 3. **Research**: - Access historical data - Identify trends - Evaluate network health ## Documentation For more information, visit the [L1Beat website](https://l1beat.io/). ## Use Cases - **L1 Development**: Research and compare L1 networks for development - **Network Analysis**: Track and analyze L1 performance and adoption - **Investment Research**: Evaluate L1 networks for investment decisions - **Competitive Analysis**: Compare network metrics and growth - **Ecosystem Monitoring**: Track the overall Avalanche L1 ecosystem # LayerZero (/integrations/layerzero) --- title: LayerZero category: Crosschain Solutions available: ["C-Chain"] description: LayerZero is an omnichain interoperability protocol for secure, censorship-resistant, and permissionless messaging across blockchains with immutable smart contracts powering cross-chain dApps, tokens, and data transfer. logo: /images/layerzero.png developer: LayerZero Labs website: https://layerzero.network/ documentation: https://docs.layerzero.network/ --- ## Overview LayerZero is an omnichain interoperability protocol that enables secure, censorship-resistant, and permissionless cross-chain messaging across 50+ blockchain networks. 
Unlike traditional bridges that rely on wrapped tokens and trusted intermediaries, LayerZero provides a low-level messaging primitive that lets smart contracts on different blockchains communicate directly. The protocol has facilitated over $6 billion in cross-chain value and powers major DeFi protocols, NFT platforms, and blockchain applications. Its immutable smart contracts, configurable security model, and developer-friendly design make it a foundation for building applications that work natively across connected blockchains. ## Features - **Omnichain Messaging**: Send messages between any connected blockchains. - **50+ Supported Chains**: Connect Ethereum, Avalanche, BNB Chain, Arbitrum, Optimism, Polygon, Solana, and 40+ more. - **Immutable Smart Contracts**: Core protocol contracts are immutable, ensuring long-term security. - **Permissionless**: Anyone can build omnichain applications without permission. - **Configurable Security**: Applications choose their own security configuration. - **Censorship Resistant**: No centralized control over message delivery. - **Omnichain Fungible Tokens (OFTs)**: Native token standard for tokens that exist across all chains. - **Omnichain NFTs (ONFTs)**: NFTs that can move between blockchains. - **Low Latency**: Fast message delivery between chains. - **Gas Efficiency**: Optimized for minimal cross-chain gas costs. - **Developer SDKs**: Development tools and libraries for Solidity and TypeScript. - **Stargate Finance**: Largest omnichain application built on LayerZero for liquidity bridging. ## Core Technology ### Omnichain Messaging LayerZero's fundamental capability is enabling smart contracts to send messages to contracts on other chains: **Direct Contract Calls**: Smart contracts on one chain can call functions on contracts on other chains. **Arbitrary Payloads**: Send any data structure between chains, not limited to token transfers. **Guaranteed Delivery**: Messages are guaranteed to be delivered to destination chains. 
**Ordered Execution**: Optional ordering guarantees for message sequences. **Atomic Transactions**: Build atomic cross-chain transactions. This primitive opens up new application architectures beyond simple token bridges. ### Omnichain Fungible Tokens (OFTs) LayerZero's native token standard: **Single Native Token**: One native token across all chains, not wrapped versions. **Unified Liquidity**: Liquidity is unified across all chains. **No Fragmentation**: Eliminates the fragmentation of wrapped token standards. **Burn and Mint**: Tokens are burned on the source chain and minted on the destination. **Custom Logic**: Token creators can add custom cross-chain logic. **ERC-20 Compatible**: Fully compatible with existing ERC-20 infrastructure. OFTs eliminate the fragmentation problems that come with wrapped token approaches. ### Configurable Security LayerZero's unique security model: **Application-Controlled**: Each application configures its own security. **Oracle + Relayer**: Uses independent Oracle and Relayer for message verification. **Choose Your Stack**: Applications select their preferred Oracle and Relayer. **Multiple Options**: Use Chainlink, Google Cloud, or custom infrastructure. **No Single Point of Failure**: Separation of concerns between verification and delivery. **Immutable Core**: Core protocol contracts cannot be upgraded. This separation of concerns gives applications control over their own security tradeoffs. ## Avalanche Integration LayerZero has deep Avalanche integration: **Native Support**: First-class Avalanche C-Chain integration. **AVAX Bridging**: Bridge AVAX across all LayerZero-connected chains. **Avalanche DeFi**: Connect Avalanche DeFi to 50+ other chains. **Fast Finality**: Avalanche's sub-second finality enables quick bridging. **Low Costs**: Avalanche's low transaction fees keep bridging affordable. **Growing Ecosystem**: Increasing number of omnichain apps on Avalanche. LayerZero connects Avalanche to the broader multi-chain ecosystem. 
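The burn-and-mint supply model described above can be sketched with a toy accounting example (illustrative names and numbers only, not the LayerZero contracts):

```javascript
// Toy model of OFT burn-and-mint accounting (illustrative only; not the
// actual LayerZero contracts). Each chain tracks a local supply; a cross-chain
// transfer burns on the source chain and mints on the destination, so the
// global supply stays constant and no wrapped token is ever created.
const supply = { avalanche: 1000n, ethereum: 500n };

const globalSupply = () =>
  Object.values(supply).reduce((acc, s) => acc + s, 0n);

function transfer(from, to, amount) {
  if (supply[from] < amount) throw new Error('insufficient supply on source chain');
  supply[from] -= amount; // burn on the source chain
  supply[to] += amount;   // mint on the destination chain
}

const before = globalSupply();
transfer('avalanche', 'ethereum', 200n);
// Local balances shift, but the global supply is conserved.
```

Contrast this with a lock-and-mint bridge, where each destination chain accumulates its own wrapped representation and liquidity fragments across chains.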
## Use Cases LayerZero enables diverse omnichain applications: **Omnichain DeFi**: DeFi protocols operating natively across all chains simultaneously. **Unified Liquidity**: Aggregate liquidity from multiple chains into single pools. **Omnichain NFTs**: NFTs that can be moved between any blockchain. **Cross-Chain Governance**: DAO governance spanning multiple blockchains. **Omnichain Gaming**: Games where assets work across multiple chains. **Cross-Chain Yield**: Yield strategies that work across multiple chains. **Unified Identity**: Identity and reputation systems working across all chains. **Chain-Abstracted UX**: Users don't need to know which chain they're on. ## Stargate Finance LayerZero's flagship application: **Largest Omnichain DEX**: Over $4B in Total Value Locked. **Unified Liquidity Pools**: Single liquidity pool across all chains. **Instant Guaranteed Finality**: Transactions finalized instantly. **Native Assets**: Transfer native assets without wrapped intermediaries. **Capital Efficient**: No liquidity fragmentation across chains. **Built on LayerZero**: Demonstrates the power of LayerZero messaging. Stargate serves as proof of concept for what's possible with LayerZero. ## Developer Experience LayerZero provides developer tools including: **Solidity SDK**: Easy-to-use Solidity contracts for building omnichain apps. **TypeScript SDK**: Full-featured TypeScript library for off-chain integrations. **Documentation**: Extensive guides, tutorials, and API references. **Testnet**: Complete testnet environment across all supported chains. **Example Apps**: Open-source example applications and templates. **LayerZero Scan**: Explorer for tracking cross-chain messages. **Developer Discord**: Active community with core team support. **Grants Program**: Funding available for ecosystem projects. ## Security Model LayerZero's security approach: **Immutable Contracts**: Core protocol contracts are immutable and cannot be upgraded. 
**Independent Verification**: Oracle and Relayer are independent entities. **Application Control**: Each app chooses its security configuration. **Multiple Audits**: Audited by leading security firms including Trail of Bits. **Bug Bounty**: Active bug bounty program through Immunefi. **Battle-Tested**: Billions in value transferred through the protocol. **Monitoring**: Continuous monitoring of cross-chain activity. ## Ecosystem Major projects building on LayerZero: **DeFi**: Stargate, Radiant Capital, Sushiswap, and dozens of DeFi protocols. **NFTs**: Pudgy Penguins, Gh0stly Gh0sts, and major NFT collections. **Gaming**: Blockchain games with omnichain assets. **Infrastructure**: Chainlink, Google Cloud providing Oracle services. **Bridges**: Multiple bridge applications powered by LayerZero. These projects demonstrate LayerZero's production readiness across different use cases. ## Getting Started To build with LayerZero: 1. **Read Documentation**: Start at [docs.layerzero.network](https://docs.layerzero.network/) for guides and API references. 2. **Install SDK**:

```bash
npm install @layerzerolabs/solidity-examples
```

3. **Inherit LayerZero Contracts**: Have your contracts inherit from the LayerZero base contracts. 4. **Implement Cross-Chain Logic**: Define what happens when messages are received. 5. **Deploy to Testnet**: Test your omnichain app on LayerZero testnet. 6. **Configure Security**: Choose Oracle and Relayer for your application. 7. **Launch**: Deploy your omnichain application to mainnet. ## Architecture LayerZero's architecture consists of: **Endpoint Contracts**: Deployed on each chain, handle message sending/receiving. **Oracle**: Independent entity verifying block headers between chains. **Relayer**: Independent entity delivering message proofs. **Applications**: Your smart contracts building on LayerZero. **Off-Chain Infrastructure**: APIs and services supporting the network. This modular design keeps verification and delivery independent of each other. 
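The Endpoint/Oracle/Relayer split described above can be illustrated with a small simulation (heavily simplified and hypothetical; the real protocol verifies block headers and message proofs on-chain): a message only executes on the destination once both independent parties have reported it.

```javascript
// Toy sketch of LayerZero's verification/delivery separation (not the real
// protocol). The Oracle attests to source-chain state; the Relayer delivers
// the message payload. The destination endpoint executes a message only when
// both independent inputs are present, so neither party alone can forge one.
function makeEndpoint() {
  const attested = new Set();  // message ids the Oracle has verified
  const delivered = new Map(); // message id -> payload from the Relayer
  const done = new Set();
  const executed = [];
  const tryExecute = (id) => {
    if (!done.has(id) && attested.has(id) && delivered.has(id)) {
      done.add(id);
      executed.push(delivered.get(id));
    }
  };
  return {
    oracleAttest(id) { attested.add(id); tryExecute(id); },
    relayerDeliver(id, payload) { delivered.set(id, payload); tryExecute(id); },
    executed,
  };
}

const dst = makeEndpoint();
dst.relayerDeliver('msg-1', 'hello from Avalanche');
// Not executed yet: the Oracle has not attested to the source block.
dst.oracleAttest('msg-1');
// Now both independent parties agree, so the message executes.
```

The design point this sketch captures is that verification and delivery are separate roles: compromising either the Oracle or the Relayer alone is not enough to inject a message.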
## LayerZero V2 The next evolution of the protocol: **Enhanced Security**: Improved security model with more configuration options. **Better Gas Efficiency**: Reduced costs for cross-chain messaging. **More Chains**: Expanded chain support including non-EVM chains. **Improved UX**: Better developer and user experience. **Backwards Compatible**: Existing V1 apps continue working without changes. ## Pricing LayerZero costs: - **Message Fees**: Small fees for cross-chain messages (typically $1-5) - **Gas Costs**: Native gas on source and destination chains - **Oracle Fees**: Fees paid to Oracle for block header verification - **Relayer Fees**: Fees paid to Relayer for message delivery - **No Platform Fees**: No fees to LayerZero Labs All fees are transparent and paid per message. ## Competitive Advantages **Omnichain Native**: Built from the ground up for omnichain applications, not retrofitted. **Configurable Security**: Applications control their own security model. **Immutable Core**: Core contracts cannot be upgraded, ensuring long-term stability. **Proven at Scale**: Over $6B in value transferred, battle-tested. **50+ Chains**: Wide chain coverage including EVM and non-EVM networks. **Developer-Friendly**: Excellent documentation and tooling. **Active Ecosystem**: Growing ecosystem of applications and protocols. **Native Tokens**: OFT standard eliminates wrapped token fragmentation. ## Community and Governance LayerZero community features: **Discord**: Active community with 100,000+ members. **Twitter**: Regular updates and ecosystem highlights. **Documentation**: Continuously updated guides and tutorials. **Hackathons**: Regular hackathons with prizes. **Grants**: Funding for ecosystem projects. **Governance**: Community input on protocol direction. 
## Audits and Security LayerZero security measures: - Audited by Trail of Bits, Zellic, and other top firms - Continuous security assessments - Active bug bounty program - Immutable core contracts - Independent Oracle and Relayer verification - Real-time monitoring systems ## Support LayerZero provides support through: - **Technical Documentation**: Extensive docs and guides - **Discord Support**: Active community and core team - **Developer Workshops**: Regular educational sessions - **Office Hours**: Weekly community calls - **GitHub**: Open-source code and examples - **LayerZero Scan**: Transaction explorer and debugging tools # Least Authority (/integrations/leastauthority) --- title: Least Authority category: Security Audits available: ["C-Chain", "All Avalanche L1s"] description: "Least Authority delivers top-tier security audits for blockchain systems with specialized expertise in cryptography and distributed systems." logo: /images/leastauthority.jpg developer: Least Authority website: https://leastauthority.com/ --- ## Overview Least Authority is a security firm specializing in cryptography, distributed systems, and privacy-focused technologies. Their team provides security audits for blockchain projects building on Avalanche, with particular expertise in cryptographic protocols and distributed consensus systems. Founded on principles of privacy and security, Least Authority brings deep technical knowledge and rigorous methodology to their assessments. ## Features - **Smart Contract Audits**: Thorough code reviews of Solidity and other smart contract implementations. - **Cryptographic Protocol Analysis**: Specialized review of cryptographic implementations and protocols. - **Distributed Systems Security**: Expert evaluation of consensus mechanisms and network protocols. - **Privacy-Focused Security**: Specialized assessments for privacy-enhancing technologies. - **Formal Security Models**: Development of formal security models and threat assessments. 
- **Open Source Expertise**: Deep experience with open source security principles. ## Getting Started 1. **Initial Inquiry**: Contact Least Authority through their website to discuss your project's needs. 2. **Project Scoping**: Define the scope, objectives, and timeline for the security assessment. 3. **Audit Process**: - In-depth code review by specialized security researchers - Analysis of protocol design and implementation - Identification of vulnerabilities and security concerns - Detailed remediation recommendations 4. **Final Report**: Delivery of an audit report documenting findings and recommendations. 5. **Remediation Review**: Optional review of implemented fixes to verify security improvements. ## Use Cases - **Privacy-Focused Projects**: Applications requiring strong privacy guarantees. - **Cryptographic Protocols**: Systems implementing novel cryptographic approaches. - **Consensus Mechanisms**: Custom consensus implementations for Avalanche L1s. - **Distributed Systems**: Complex distributed systems with multiple interaction points. - **Zero-Knowledge Applications**: Projects implementing zero-knowledge proof systems. # Ledger (/integrations/ledger) --- title: Ledger category: Hardware Wallets available: ["C-Chain", "All Avalanche L1s"] description: Integrate Ledger hardware wallet support into your applications using official Ledger SDKs for Avalanche. logo: /images/ledger.png developer: Ledger website: https://www.ledger.com/ documentation: https://developers.ledger.com/ --- ## Overview Ledger provides multiple SDKs that enable wallet applications and dApps to integrate hardware wallet functionality for Avalanche chains. The SDKs are available in Go and JavaScript, making it easy to add Ledger support to both desktop and web applications. 
## Available SDKs - **[Avalanche Ledger App](https://github.com/ava-labs/ledger-avalanche)**: Core application running on Ledger devices - **[Go SDK](https://github.com/ava-labs/ledger-avalanche-go)**: Native Go implementation for backend and desktop applications - **[JavaScript SDK](https://github.com/ava-labs/ledger-avalanche)**: Browser and Node.js support for web applications ## Integration Features - **Multi-Chain Support**: Enable transaction signing across C-Chain, X-Chain, and P-Chain - **Hardware-Based Security**: Leverage Ledger's secure element for key protection - **Cross-Platform**: Support for both desktop and web applications - **Transaction Signing**: Secure signing for transfers, staking, and smart contracts ## Getting Started ### Go SDK Integration

```go
import ledger "github.com/ava-labs/ledger-avalanche-go"

// Initialize connection
ledgerApp, err := ledger.NewLedgerApp()
if err != nil {
	// Handle connection error
}
```

### JavaScript SDK Integration

```javascript
import Ledger from '@avalabs/ledger-avalanche'

const ledger = new Ledger()
await ledger.connect()
```

## Use Cases - **Wallet Applications**: Add hardware wallet support to cryptocurrency wallets - **DeFi Applications**: Enable secure transaction signing for DeFi operations - **Validator Tools**: Secure validator key management and staking operations - **Cross-Chain Applications**: Support multi-chain operations with single device ## Documentation - [Ledger Developer Portal](https://developers.ledger.com/) - [Go SDK Documentation](https://pkg.go.dev/github.com/ava-labs/ledger-avalanche-go) - [JavaScript SDK Documentation](https://github.com/ava-labs/ledger-avalanche) # LFJ (/integrations/lfj) --- title: LFJ category: DeFi available: ["C-Chain"] description: "LFJ is a leading decentralized exchange (DEX) on Avalanche, offering trading, lending, and yield farming capabilities." 
logo: /images/traderjoe.jpeg developer: LFJ Team website: https://lfj.gg/ documentation: https://lfj.gg/ --- ## Overview LFJ is a decentralized trading platform native to the Avalanche ecosystem. It provides trading, lending, and yield farming on Avalanche's C-Chain. The platform has deep liquidity pools and uses a Liquidity Book AMM model. ## Features - **Liquidity Book (LB)**: An AMM model that offers concentrated liquidity with customizable price ranges and multiple fee tiers. - **Token Swaps**: Efficient token exchanges with competitive rates and minimal slippage. - **Yield Farming**: Opportunities to earn rewards by providing liquidity to trading pairs. - **Lending Markets**: Users can lend and borrow assets through isolated lending markets. - **Real Yield**: Protocol revenue is distributed to token stakers and liquidity providers. - **Cross-Chain Trading**: Access to trading across multiple chains through their v2.1 deployment. ## Getting Started 1. **Connect Wallet**: Visit [LFJ](https://lfj.gg/) and connect your Web3 wallet (MetaMask, Core, etc.). 2. **Fund Your Wallet**: Ensure you have AVAX in your wallet for transaction fees. 3. **Start Trading**: - Select the tokens you want to trade - Review the transaction details - Confirm the swap in your wallet 4. **Provide Liquidity**: Optionally, provide liquidity to earn trading fees and rewards. ## Documentation For more details, visit the [LFJ Documentation](https://lfj.gg/). Key resources include: - Integration guides for developers - Smart contract addresses and audits - Liquidity provision tutorials - API documentation ## Use Cases - **Token Swaps**: Quick and efficient token exchanges on Avalanche. - **Liquidity Provision**: Earn yields by providing liquidity to trading pairs. - **Yield Farming**: Participate in incentivized liquidity mining programs. - **Asset Management**: Access to Avalanche-based tokens. - **Cross-Chain Trading**: Execute trades across multiple blockchain networks. 
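The Liquidity Book model described above concentrates liquidity into discrete price bins. As a rough sketch (floating-point approximation; the on-chain contracts use fixed-point arithmetic, and the exact formula is specified in LFJ's Liquidity Book documentation), each bin's price is derived from the bin step, in basis points, compounded by the bin's offset from a reference id:

```javascript
// Approximate Liquidity Book bin pricing (illustrative; the production
// implementation uses fixed-point arithmetic, not doubles). binStepBps is
// the pool's bin step in basis points; ids are offset from 2**23 so the
// reference bin has a price of exactly 1.
const REFERENCE_ID = 2 ** 23; // 8388608

function binPrice(id, binStepBps) {
  return (1 + binStepBps / 10000) ** (id - REFERENCE_ID);
}

// With a 25 bps bin step, each step up multiplies the price by 1.0025.
const atReference = binPrice(REFERENCE_ID, 25);  // 1
const oneBinUp = binPrice(REFERENCE_ID + 1, 25); // 1.0025
```

Because each bin has a fixed price, liquidity providers can shape their position across a chosen range of bins rather than along a single continuous curve.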
# Libeara/Wellington Management (/integrations/libeara-wellington) --- title: Libeara/Wellington Management category: Assets available: ["C-Chain"] description: "Libeara, in partnership with Wellington Management, offers tokenized investment products including ULTRA, bringing institutional asset management to blockchain." logo: /images/libeara.jpeg developer: Libeara website: https://libeara.com/ documentation: https://libeara.com/ --- ## Overview Libeara, in partnership with Wellington Management, tokenizes institutional investment products on blockchain. Through offerings like ULTRA, Libeara brings Wellington Management's asset management expertise to the blockchain ecosystem, giving investors access to institutional investment strategies through tokenized products. # Liquify (/integrations/liquify) --- title: Liquify category: Blockchain as a Service available: ["All Avalanche L1s"] description: Liquify is a bare-metal infrastructure provider offering high-performance, customizable solutions for launching and managing Avalanche L1s. logo: /images/liquify_primary_icon.svg developer: Liquify website: https://www.liquify.com/ --- ## Overview Liquify is a bare-metal infrastructure provider that supports Avalanche L1 deployment with high-performance, customizable solutions for launching and managing custom L1s. Using its own dedicated servers, Liquify delivers deployments that prioritize security, scalability, and full customization without relying on shared cloud resources. ## Features - **Bare-Metal Performance**: Maximum control, global coverage, low latency, and high throughput, free from the limitations of virtualized environments. - **Full Customization**: Customizable deployments with load-balanced RPC endpoints, custom indexing solutions, monitoring dashboards, and third-party tool integration. - **Security and Reliability**: Advanced security protocols, 99.9% uptime SLAs, and fault-tolerant architecture for production-ready L1s. 
- **Scalability and Monitoring**: Built-in tools for real-time monitoring, analytics, and automated maintenance. ## Getting Started 1. **Fill in the form**: Complete the deployment request form at [Liquify Avalanche L1 Deployment](https://www.liquify.com/avalanche-l1-deployment). 2. **Consultation**: Liquify will reach out to go over the setup and tooling options tailored to your needs. 3. **Scale and optimize**: Work with Liquify's expert team to scale and optimize your infrastructure as needed. ## Learn More - **Website**: [https://www.liquify.com/](https://www.liquify.com/) - **Twitter/X**: [https://x.com/liquify_ltd](https://x.com/liquify_ltd) # Luganodes (/integrations/luganodes) --- title: Luganodes category: Validator Infrastructure available: ["C-Chain", "All Avalanche L1s"] description: "Luganodes is an institutional-grade blockchain infrastructure provider offering validator services and staking solutions." logo: /images/luganodes.png developer: Luganodes website: https://www.luganodes.com/ documentation: https://docs.luganodes.com/ --- ## Overview Luganodes is a Swiss-based institutional blockchain infrastructure provider offering professional validator services and staking solutions. With a focus on security, reliability, and compliance, Luganodes serves institutional clients including exchanges, custodians, and asset managers with enterprise-grade Avalanche validation and staking infrastructure. ## Features - **Swiss Quality**: Swiss-based operations with high standards for security and compliance - **Institutional Focus**: Services designed for exchanges, custodians, and funds - **Multi-Chain Validators**: Professional validation across 40+ blockchain networks - **High Availability**: 99.9%+ uptime guarantee for validator operations - **Security Practices**: HSM key management and related security measures - **White-Label Solutions**: Customizable staking solutions for enterprise clients ## Getting Started 1. 
**Contact Luganodes**: Reach out through [Luganodes](https://www.luganodes.com/) for institutional services 2. **Requirements Analysis**: Discuss your specific infrastructure needs 3. **Solution Design**: Design custom validator or staking solutions 4. **Deployment**: Deploy infrastructure with Luganodes' support 5. **Operations**: Ongoing management and monitoring ## Documentation For technical information, visit [Luganodes Documentation](https://docs.luganodes.com/). ## Use Cases - **Exchange Staking**: Provide staking services for cryptocurrency exchanges - **Custodian Infrastructure**: Support custodial staking operations - **Fund Validation**: Run validators for crypto investment funds - **White-Label Staking**: Launch branded staking products # M^0 (/integrations/m0) --- title: M^0 category: Stablecoins as a Service available: ["C-Chain", "All Avalanche L1s"] description: "M^0 is a decentralized stablecoin infrastructure protocol enabling permissionless minting of digital dollars backed by high-quality collateral." logo: /images/m0.png developer: M^0 Foundation website: https://www.m0.org/ documentation: https://docs.m0.org/ --- ## Overview M^0 (M Zero) is a decentralized infrastructure protocol for creating and managing stablecoins. The protocol enables qualified participants to mint M stablecoins backed by eligible collateral, providing a permissionless yet secure framework for stablecoin issuance. M^0 combines decentralized governance with institutional-quality reserve management. 
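For application developers, using M on Avalanche is largely a standard ERC-20 integration. As a minimal, dependency-free sketch, this builds the `eth_call` payload needed to read a holder's M balance on the C-Chain; the token address in the usage comment is a placeholder, not the real M deployment (check the M^0 documentation for live contract addresses):

```javascript
// Sketch: build the eth_call request to read an ERC-20 balance (e.g. the
// M token) on the Avalanche C-Chain, without any libraries.
function balanceOfCalldata(holder) {
  const selector = "70a08231"; // first 4 bytes of keccak256("balanceOf(address)")
  const padded = holder.toLowerCase().replace(/^0x/, "").padStart(64, "0");
  return "0x" + selector + padded;
}

function ethCallPayload(token, holder) {
  return {
    jsonrpc: "2.0",
    id: 1,
    method: "eth_call",
    params: [{ to: token, data: balanceOfCalldata(holder) }, "latest"],
  };
}

// Usage (network call shown, not executed here; replace the placeholder
// token address with the real M deployment from the M^0 docs):
// fetch("https://api.avax.network/ext/bc/C/rpc", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(ethCallPayload("0xTOKEN_PLACEHOLDER", "0xHOLDER")),
// }).then(r => r.json()).then(({ result }) => console.log(BigInt(result)));
```

The same two helpers work for any ERC-20, so they can be reused across other tokens listed on this page.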
## Features - **Decentralized Infrastructure**: Permissionless protocol for stablecoin minting and management - **Collateral Flexibility**: Support for multiple types of high-quality collateral - **Transparent Reserves**: On-chain verification of collateral backing - **Institutional Use**: Designed for enterprise and institutional use cases - **Governance Token**: Decentralized governance through the M^0 token - **Interoperability**: Built to work across multiple blockchain networks including Avalanche ## Getting Started 1. **Review Documentation**: Study the [M^0 documentation](https://docs.m0.org/) to understand the protocol mechanics 2. **Understand Eligibility**: Review requirements for becoming a minter or validator in the M^0 ecosystem 3. **Integration**: Use M stablecoins in your Avalanche applications or explore minter opportunities ## Documentation For more details, visit the [M^0 Documentation](https://docs.m0.org/). ## Use Cases - **Stablecoin Minting**: Qualified institutions can mint M stablecoins against eligible collateral - **DeFi Integration**: Use M stablecoins across Avalanche DeFi protocols - **Payment Infrastructure**: Build payment solutions using M^0's stable digital currency - **Treasury Management**: Enterprise treasury solutions with transparent, backed digital dollars # Magic (/integrations/magic) --- title: Magic category: Wallets and Account Abstraction available: ["C-Chain"] description: "Enterprise-grade wallet infrastructure with proven reliability. Create wallets, trigger onchain actions, and stay compliant with one powerful API." logo: /images/magic.jpg developer: Magic Labs website: https://magic.link/ documentation: https://magic.link/docs --- ## Overview Magic provides wallet infrastructure with over 7 years of reliability. The platform lets developers provision wallets on demand for users and agents via API. 
It is SOC 2 Type 2, ISO 27001:2022, HIPAA, CCPA, and GDPR compliant, with sub-second latency (50-100ms) for wallet creation and transaction signing, flexible authentication options, whitelabel UI customization, and non-custodial architecture where users keep full control of their assets. The infrastructure can execute millions of signatures in minutes and uses multi-layered resilience with keys distributed and encrypted across isolated services using trusted execution environments (TEEs). # Mercuryo (/integrations/mercuryo) --- title: Mercuryo category: Fiat On-Ramp available: ["C-Chain"] description: "Mercuryo is a fiat-to-crypto payment gateway enabling easy purchase of cryptocurrencies with credit cards and bank transfers." logo: /images/mercuryo.png developer: Mercuryo website: https://mercuryo.io/ documentation: https://developers.mercuryo.io/ --- ## Overview Mercuryo is a global fiat-to-crypto payment gateway that enables users to easily purchase cryptocurrencies using credit cards, debit cards, and bank transfers. With support for multiple fiat currencies and a streamlined checkout experience, Mercuryo helps Web3 applications onboard users who want to acquire crypto assets quickly and securely. ## Features - **Multiple Payment Methods**: Credit cards, debit cards, bank transfers - **Wide Currency Support**: 30+ fiat currencies accepted - **Fast Processing**: Quick transaction completion - **Global Coverage**: Available in 100+ countries - **Widget Integration**: Easy-to-embed purchase widget - **API Access**: Full API for custom integrations ## Getting Started 1. **Sign Up**: Create merchant account at [Mercuryo](https://mercuryo.io/) 2. **Get API Keys**: Access credentials from merchant dashboard 3. **Widget or API**: Choose widget embed or API integration 4. **Configure**: Set up supported currencies and options 5. 
**Launch**: Enable fiat on-ramp for your users ## Documentation For integration guides, visit the [Mercuryo Developer Documentation](https://developers.mercuryo.io/). ## Use Cases - **dApp Onboarding**: Enable crypto purchase within dApps - **NFT Marketplaces**: Fiat payment for NFT purchases - **DeFi Entry**: Easy entry into DeFi with fiat - **Gaming**: In-game crypto purchases with fiat # Messari Avalanche (/integrations/messari) --- title: Messari Avalanche category: Analytics & Data available: ["C-Chain", "All Avalanche L1s"] description: "Messari provides research, analytics, and key updates for the Avalanche ecosystem, including network metrics, governance data, and market insights." logo: /images/messari.png developer: Messari website: https://avalanche.messari.io/key-updates documentation: https://messari.io/avalanche --- ## Overview Messari is a crypto research and data platform that provides analytics and insights for the Avalanche ecosystem. Their Avalanche hub offers real-time network metrics, governance updates, protocol analytics, and market intelligence. ## Features - **Network Analytics**: - Real-time protocol metrics - On-chain activity tracking - Network statistics - Performance indicators - **Research & Reports**: - In-depth ecosystem analysis - Quarterly reports - Market insights - Governance updates - **Key Updates**: - Protocol developments - Network upgrades - Ecosystem news - Governance proposals - **Data Access**: - API endpoints - Historical data - Custom metrics - Data exports ## Getting Started 1. **Platform Access**: - Visit [Messari Avalanche](https://avalanche.messari.io/key-updates) - Create a free account - Explore available metrics and reports 2. **Research**: - Read in-depth reports - Access key updates - Track network metrics 3. **API Integration**: - Register for API access - Integrate data into applications - Access historical metrics ## Documentation For more details, visit [Messari](https://messari.io/avalanche). 
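The API integration in step 3 can be as small as a single request. A sketch, assuming Messari's public v1 asset-metrics endpoint (verify the path and auth header against the current API reference; an API key raises rate limits but is optional for basic reads):

```javascript
// Sketch: fetch AVAX network metrics from Messari's REST API.
// Endpoint path and header name are assumptions based on Messari's
// public v1 API -- confirm against the current documentation.
function messariMetricsUrl(assetSlug) {
  return `https://data.messari.io/api/v1/assets/${assetSlug}/metrics`;
}

async function fetchAvaxMetrics(apiKey) {
  const res = await fetch(messariMetricsUrl("avax"), {
    headers: apiKey ? { "x-messari-api-key": apiKey } : {},
  });
  if (!res.ok) throw new Error(`Messari request failed: ${res.status}`);
  const { data } = await res.json();
  return data.market_data; // price, volume, and related fields
}
```

Swap the `avax` slug for other assets, or pass your API key once you have registered for API access.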
## Use Cases - **Investment Research**: Access data for investment decisions - **Ecosystem Monitoring**: Track Avalanche ecosystem developments and growth - **Protocol Analysis**: Analyze protocol metrics and performance - **Market Intelligence**: Stay informed with real-time market insights - **Development Planning**: Use data insights for development decisions # Messiah (/integrations/messiah) --- title: Messiah category: Development Infrastructure available: ["C-Chain"] description: Messiah is a decentralized infrastructure platform that simplifies Web3 participation through one-click node deployment, fractional ownership, and AI-powered yield optimization. logo: /images/messiah.png developer: Messiah Network website: https://messiah.network documentation: https://docs.messiah.network/ --- ## Overview Messiah is building **accessible Web3 infrastructure for everyone**. With Messiah’s flagship platform, **NodeHub**, users can plug in, fractionalize nodes, deploy dApps, or even become a miner -- all with a single click. Messiah removes the complexity and high costs typically associated with Web3 infrastructure by offering automated tools for deploying and managing **nodes, smart contracts, decentralized applications, and mining operations**. The mission is clear: **Build, Deploy, Earn** — enabling mass adoption of Web3 by making participation frictionless, censorship-resistant, and rewarding. ### Messiah Offers: * **One-Click Node Deployment** * Instantly run validator and RPC nodes across multiple chains. * No coding or server setup required. * **Fractionalized Node Ownership** * Split expensive node ownership into tradable shares. * Co-own and earn rewards from high-yield nodes. * **Messiah Yield Engine (MYE)** * AI-driven optimization to track real-time APYs. * Dynamically reallocates stake to maximize risk-adjusted yield. * **Messiah Agent** * AI assistant for node deployment and monitoring. * Deploy nodes or request insights using natural text prompts. 
* **Cross-Chain Infrastructure** * Support for 100+ blockchain networks planned. * Community-driven onboarding of new chains. * **Sustainable Compute Layer** * Carbon-neutral operations with tokenized green credits. * ESG validator rewards for environmentally friendly practices. * **Full Decentralization** * From censorship-resistant infrastructure to community-driven governance. * Powered by the Messiah Token for staking, gas, and governance. In short, Messiah empowers anyone — from casual Web3 enthusiasts to institutional players — to participate in decentralized infrastructure without the usual financial and technical barriers. ## Getting Started To get started with Messiah, follow these steps: 1. **Visit NodeHub**: Go to [NodeHub](https://app.messiah.network) to explore available nodes and features. 2. **Create an Account**: Sign up with your wallet or social login for instant access. 3. **Explore Fractionalized Nodes**: Buy shares in high-yield validator nodes without needing full capital. 4. **Check the Messiah Agent**: Use natural language prompts to deploy or manage nodes effortlessly. 5. **Review Documentation**: Read the [Messiah Documentation](https://docs.messiah.network/) for detailed guides and technical insights. 6. **Join the Community**: Connect on [Telegram](https://t.me/messiahnetwork), [X](https://x.com/messiah_network), and [LinkedIn](https://www.linkedin.com/company/messiahnetwork) for updates and governance participation. 7. **Earn Rewards**: Start earning yield from validator and compute nodes, powered by Messiah’s automated infrastructure. ## Documentation For an in-depth look at Messiah’s architecture, tokenomics, and roadmap, check the [Messiah Documentation](https://docs.messiah.network/). ## Use Cases Messiah unlocks new opportunities across the Web3 ecosystem: **Fractional Node Ownership** * Democratizes access to high-yield validators. * Enables anyone to earn rewards without running full infrastructure. 
**DeFi Applications** * Integrate reliable validator nodes and data oracles into DeFi platforms. * Automate yield allocation through MYE. **AI Agents** * Deploy autonomous AI agents with the Messiah Agent and LLM-powered infrastructure. * Perfect for real-time monitoring, portfolio optimization, and automated decision-making. **Developers & Enterprises** * Use NodeHub’s one-click deployment for RPC nodes, testing environments, or CI/CD pipelines. * Scale infrastructure with DevHub and MessiahCompute. **Sustainable Web3 Infrastructure** * Validators and node operators can earn ESG rewards for verifiable green practices. * Every compute cycle is audited for carbon neutrality. **Academia & Research** * Messiah’s Nodes-as-a-Service model supports labs and funds with reliable compute and validator infrastructure. * Tailored for collaborative projects and secure data management. ## Roadmap **Phase 1: The Birth (MVP)** * NodeHub MVP launched with one-click node setup. * RPC nodes supported. * Pre-seed round closed. * Token and contracts audited by Hacken. * Strategic partnerships onboarded. * Fractionalized nodes & APY rewards live. * Messiah Agent MVP released. **Phase 2: Extended Capabilities (Beta)** * Messiah Token launch & smart contract deployment. * Multi-chain support expansion. * Bulk node management & real-time APY tracking. * MYE predictive models for yield optimization. * MessiahCompute MVP with ZK support. * Premium education, knowledge base & priority support. **Phase 3: Scaling Intelligence and Infrastructure** * CEX listing of Messiah Token. * 100+ chains supported in NodeHub. * Node Portfolio Manager with AI-powered insights. * Passive yield strategies & mentorship programs. * DevHub Beta for developers. * Expanded token utility for governance, fees, and priority access. **Phase 4: Owning the Edge (Messiah L1 & Autonomous Infrastructure)** * Messiah Layer 1 chain launch (testnet & devnet). * Validator staking, gas payments, and AI agent credits. 
* Distributed LLM agent network. * Automated Node Launchpad for startups. * Fully automated deployment via natural language prompts. * Quant Infra for trading venues & dark pools. * ESG Validator Rewards & tokenized carbon credits. * Cross-platform oracles to power governance and smart contracts. # Metaco (/integrations/metaco) --- title: Metaco category: Custody available: ["C-Chain", "All Avalanche L1s"] description: "Metaco (Ripple Custody) provides institutional digital asset custody and orchestration infrastructure for financial institutions." logo: /images/ripple.png developer: Ripple website: https://ripple.com/solutions/digital-asset-custody/ documentation: https://ripple.com/demo/custody/ --- ## Overview > **Note:** Metaco was acquired by Ripple for $250M in May 2023 and has been fully absorbed into Ripple Custody. All Metaco services now operate under the Ripple brand. Metaco, now part of Ripple (Ripple Custody), is a leading provider of digital asset custody and orchestration infrastructure designed for banks and financial institutions. The platform offers secure custody solutions combined with workflow orchestration, enabling institutions to manage digital asset operations with bank-grade security and built-in compliance capabilities. ## Features - **Institutional Custody**: Bank-grade custody infrastructure for secure digital asset storage. - **Orchestration Platform**: Full workflow management for digital asset operations. - **Multi-Asset Support**: Support for a wide range of cryptocurrencies and digital assets. - **HSM Security**: Hardware security module integration for enhanced key protection. - **Governance Framework**: Advanced governance and approval workflows for institutional controls. - **API Infrastructure**: APIs for system integration and automation. - **Regulatory Compliance**: Built-in compliance tools meeting institutional requirements. - **White-Label Solutions**: Customizable infrastructure for institutional branding. 
## Documentation For more information, visit the [Ripple website](https://ripple.com/solutions/digital-asset-custody/). ## Use Cases Metaco serves various institutional infrastructure needs: - **Banking Infrastructure**: Enable banks to offer digital asset custody to clients. - **Asset Management**: Custody solutions for institutional asset managers and funds. - **Orchestration**: Streamline complex digital asset workflows and operations. - **Treasury Management**: Institutional-grade solutions for corporate digital asset management. - **Compliance**: Meet regulatory requirements with built-in governance tools. # Mobula (/integrations/mobula) --- title: Mobula category: Analytics & Data available: ["C-Chain", "All Avalanche L1s"] description: "Mobula provides low-latency, real-time onchain data through a unified API to power any Web3 app across multiple chains." logo: /images/mobula.svg developer: Mobula website: https://mobula.io/ documentation: https://docs.mobula.io/introduction --- ## Overview Mobula is a low-latency, real-time onchain data provider for developers building any kind of Web3 application. Through a unified API, Mobula delivers stable and accurate onchain, wallet, and trading data across multiple chains. ## Features - **Low-latency, real-time data**: Built for experiences where data freshness matters. - **Unified API**: One consistent interface for token, market, pool/pair, trade, and wallet data. - **Stable and accurate results**: Designed for production use cases that need reliable data at scale. - **Multi-chain coverage**: Supports more than 100 blockchains, including Avalanche C-Chain and Avalanche L1 ecosystems. - **Built for builders**: Clear docs, guides, and a developer-first API surface. ## Getting Started 1. **Explore the documentation**: Visit the [docs](https://docs.mobula.io/introduction) to review available endpoints and step-by-step guides. 2.
**Create a free account**: Generate a free API key to test and use Mobula endpoints. 3. **Make your first request**: Choose the endpoint you need, fill in the required parameters, and pull the data you want. 4. **Try the Mobula Demo UI**: Use Mobula’s dedicated demo interface to experiment with endpoints and responses quickly. ## Documentation - [Mobula Documentation Page](https://docs.mobula.io/introduction) ## Use Cases - **Real time token data**: Power token pages and analytics with real time prices, liquidity, pools, trades, and more. - **Portfolio and wallet apps**: Build multi-chain portfolio views with balances, positions, history, and activity. - **Trading apps and infrastructure**: Support trading experiences with low latency market and pool data that updates in real time. - **Any Web3 app that needs low latency data**: Ideal for products that require stable, accurate, real time onchain data. # Moca ID (/integrations/moca-id) --- title: Moca ID category: KYC / Identity Verification available: ["C-Chain"] description: "Moca ID provides decentralized identity (DID) solutions for Web3 applications with user-controlled credentials." logo: /images/moca.png developer: Moca ID website: https://moca.network/ documentation: https://moca.network/ --- ## Overview Moca ID is a decentralized identity (DID) platform designed for Web3 applications, enabling users to manage their digital identity and credentials in a self-sovereign manner. The platform provides infrastructure for verifiable credentials that can be used across multiple blockchain applications and services. ## Features - **Decentralized Identity**: Self-sovereign identity management giving users full control. - **Verifiable Credentials**: Issue and verify credentials using decentralized infrastructure. - **Multi-Chain Support**: Compatible with various blockchain networks including Avalanche. - **User-Controlled Data**: Personal information remains under user control. 
- **Reputation System**: Build portable reputation across Web3 applications. - **Privacy-Focused**: Minimize data exposure while maintaining verification capabilities. - **Interoperability**: Standards-based approach for cross-platform compatibility. ## Documentation For more information, visit the [Moca Network website](https://moca.network/). ## Use Cases - **Web3 Identity**: Enable decentralized identity for blockchain applications. - **Credential Management**: Issue and verify credentials in a decentralized manner. - **Reputation Portability**: Build and maintain reputation across different platforms. - **Privacy-Preserving Verification**: Verify attributes without compromising privacy. # MoonPay (/integrations/moonpay) --- title: MoonPay category: Fiat On-Ramp available: ["C-Chain", "All Avalanche L1s"] description: MoonPay is a financial technology company that provides payment infrastructure for cryptocurrencies, enabling easy fiat-to-crypto on-ramp and off-ramp solutions. logo: /images/moonpay.png developer: MoonPay website: https://www.moonpay.com/ documentation: https://dev.moonpay.com/docs/on-ramp-overview --- ## Overview MoonPay is a global payment infrastructure for cryptocurrency purchases using traditional payment methods. It provides fiat-to-crypto and crypto-to-fiat solutions that can be integrated into any application, supporting credit cards, debit cards, bank transfers, Apple Pay, Google Pay, and other local payment methods. MoonPay handles the entire payment process, including compliance, fraud prevention, and liquidity. ## Features - **Global Reach**: Available in 160+ countries with support for 80+ fiat currencies. - **Extensive Crypto Support**: Support for 100+ cryptocurrencies, including custom tokens for L1 networks. - **Multiple Payment Methods**: Credit/debit cards, bank transfers, Apple Pay, Google Pay, and region-specific payment options. - **Complete On/Off-Ramp Solution**: Both buy (on-ramp) and sell (off-ramp) functionality. 
- **Crypto Swaps**: Allow users to exchange one cryptocurrency for another with competitive rates.
- **Customizable Widget**: White-label solution that can be styled to match your application's branding.
- **NFT Checkout**: Direct purchase of NFTs with fiat currencies.
- **Compliance**: Built-in KYC and AML processes with multiple verification levels.
- **Fraud Prevention**: Risk management system to prevent fraudulent transactions.
- **Multi-Platform SDK Support**: Native SDKs for web, iOS, Android, and React Native applications.
- **Security**: SOC 2 Type 2 certified infrastructure.

## Getting Started

1. **Register for an Account**: Sign up on the [MoonPay Developer Dashboard](https://dashboard.moonpay.com/) to get your API credentials.
2. **Choose Integration Method**: MoonPay offers several integration options:

   - **Widget Integration**: The simplest approach, using a pre-built widget:

     ```javascript
     const moonpayUrl = new URL('https://buy.moonpay.com');
     moonpayUrl.searchParams.append('apiKey', 'YOUR_API_KEY');
     moonpayUrl.searchParams.append('currencyCode', 'avax');
     moonpayUrl.searchParams.append('walletAddress', userWalletAddress);
     window.open(moonpayUrl.href, '_blank');
     ```

   - **Web SDK Integration**: For more control and a seamless experience:

     ```bash
     npm install @moonpay/moonpay-web-sdk
     ```

     ```javascript
     import { MoonPayWebSdk } from '@moonpay/moonpay-web-sdk';

     const moonpay = new MoonPayWebSdk({
       flow: 'buy',
       environment: 'production', // or 'sandbox' for testing
       variant: 'overlay', // or 'embedded'
       params: {
         apiKey: 'YOUR_API_KEY',
         currencyCode: 'avax',
         walletAddress: userWalletAddress,
         baseCurrencyCode: 'usd',
         baseCurrencyAmount: 100,
         externalTransactionId: 'YOUR_TRANSACTION_ID', // Optional
         theme: 'light', // or 'dark'
         showWalletAddressForm: true,
         lockAmount: false
       }
     });

     // Open the widget
     moonpay.show();

     // Listen to events
     moonpay.on('onClose', () => {
       console.log('Widget closed');
     });
     moonpay.on('onTransactionSuccess', (data) => {
       console.log('Transaction created:', data);
     });
     ```

   - **React SDK**:

     ```bash
     npm install @moonpay/moonpay-react-sdk
     ```

     ```jsx
     import { MoonPayProvider, useMoonPaySdk } from '@moonpay/moonpay-react-sdk';

     function BuyButton() {
       const { showWidget } = useMoonPaySdk();

       const handleClick = () => {
         showWidget({
           flow: 'buy',
           params: {
             apiKey: 'YOUR_API_KEY',
             currencyCode: 'avax',
             walletAddress: userWalletAddress
           }
         });
       };

       return <button onClick={handleClick}>Buy AVAX</button>;
     }

     function App() {
       return (
         <MoonPayProvider apiKey="YOUR_API_KEY">
           <BuyButton />
         </MoonPayProvider>
       );
     }
     ```

3. **Implement Webhooks**: Set up webhooks to receive transaction status updates:

   ```javascript
   // Example server-side webhook handler (Express.js).
   // Verify the webhook signature per MoonPay's docs before trusting the payload.
   app.post('/moonpay-webhook', (req, res) => {
     const event = req.body;

     if (event.type === 'transaction_complete') {
       // Handle completed transaction
     }

     res.sendStatus(200);
   });
   ```

4. **Test in Sandbox**: Use the sandbox environment to test your integration before going live.

## Documentation

For more details, visit the [MoonPay Developer Documentation](https://dev.moonpay.com/docs/on-ramp-overview).

## Use Cases

**Cryptocurrency Wallets**: Enable direct crypto purchases within your wallet application.

**NFT Marketplaces**: Allow users to purchase NFTs directly with credit cards and other payment methods.

**DeFi Platforms**: Provide an on-ramp for users to acquire tokens needed for DeFi services.

**Web3 Applications**: Reduce friction in user onboarding by integrating a native fiat entry point.

**Blockchain Games**: Enable players to purchase in-game assets without navigating external exchanges.

## Pricing

MoonPay operates on a transaction fee model:

- **Fee Range**: Typically 1% to 4.5% per transaction, depending on payment method and region
- **Custom Fee Structure**: Enterprise solutions with customizable fee arrangements
- **White-Label Solutions**: Premium pricing for fully branded solutions
- **Volume Discounts**: Available for high-volume partners

For detailed pricing information and enterprise solutions, contact MoonPay's sales team.
# Moralis (/integrations/moralis) --- title: Moralis category: RPC Endpoints available: ["C-Chain"] description: Moralis is a leading crypto data provider that helps companies build great user experiences and drive engagement, growth, and revenue in their applications through Moralis’ suite of APIs and Moralis Nodes. logo: /images/moralis.png developer: Moralis website: https://moralis.io/ documentation: https://docs.moralis.io/ --- ## Overview Moralis is a crypto data provider that helps companies build applications through its suite of APIs and nodes. Its cross-chain data products -- NFT API, Wallet API, Token API, Price API, Blockchain API, Moralis Streams, and Moralis Nodes -- bridge the development gap between Web2 and Web3. ## Features - **[NFT API](https://moralis.io/api/nft/)**: Access NFT data across multiple blockchains, including metadata, ownership details, and transaction history. - **[Wallet API](https://moralis.io/api/wallet/)**: Retrieve real-time wallet data including balances, transaction history, and token holdings across different blockchains. - **[Token API](https://moralis.io/api/token/)**: Fetch detailed information on any token, including prices, transfers, and historical data across multiple blockchains. - **[Price API](https://moralis.io/api/price/)**: Get real-time and historical price data for various cryptocurrencies. - **[Blockchain API](https://moralis.io/api/blockchain/)**: Access blockchain data such as blocks, transactions, and smart contract events. - **[Moralis Streams](https://moralis.io/streams/)**: Receive real-time notifications for transactions and events on your chosen blockchain. - **[Moralis Nodes](https://moralis.io/nodes/)**: RPC nodes across many blockchains with 99.9% uptime, response times as low as 70 ms, and SOC 2 Type II certification. - **[SOC 2 Type II Compliance](https://moralis.io/Security/)**: SOC 2 Type II certified for data security and privacy. 
- **[Cross-Chain Compatibility](https://moralis.io/chains/)**: Supports integration across various blockchain protocols. - **[Scalability](https://moralis.io/scale/)**: Infrastructure built to handle high transaction volumes. Companies using Moralis have saved up to 87% on time-to-market and $86.4M in engineering costs, with 24/7 global support. ## Getting Started 1. **Sign Up**: Create a free account on the [Moralis website](https://moralis.io/). 2. **Access the Dashboard**: Log in to your Moralis account to access the admin dashboard. 3. **Get Your API Key**: Navigate to the settings section to find your API key, which is essential for making requests to Moralis services. 4. **Integrate with Your Project**: Use the API key to call Moralis APIs through the provided SDKs or direct HTTP requests. 5. **Explore Moralis APIs**: Review available APIs (Wallet API, NFT API, Token API, etc.) in the [Moralis Documentation](https://docs.moralis.io/). ## Documentation For getting-started guides, tutorials, and API references, visit the [Moralis Documentation](https://docs.moralis.io/). ## Use Cases - **[Crypto Tax & Accounting](https://moralis.io/solutions/crypto-tax/)**: Build tax and accounting platforms using blockchain data from APIs and nodes. - **[Wallet Portfolio Management](https://moralis.io/solutions/wallets/)**: Show users their wallet balances, including native assets, ERC-20 tokens, NFTs, and DeFi holdings. - **[DeFi Applications](https://moralis.io/solutions/defi/)**: Power DeFi dapps with fast, reliable on-chain data. - **[Efficient Data Retrieval](https://moralis.io/api/)**: Use enriched APIs to minimize API calls for data-heavy applications. - **[Multi-Chain Dapps](https://moralis.io/chains/)**: Build dapps with cross-chain capabilities supporting multiple blockchain networks. - **[NFT Marketplaces](https://moralis.io/api/nft/)**: Build NFT marketplaces with real-time data on ownership, metadata, and transactions.
- **[GameFi Development](https://moralis.io/api/)**: Create blockchain-based games integrating NFTs, tokens, and smart contracts. - **[Analytics & Insights](https://moralis.io/api/)**: Build analytics platforms with transaction data and market trends. - **[Real-Time Notifications](https://moralis.io/streams/)**: Implement instant updates for blockchain transactions and events with Moralis Streams. # Mt Pelerin (/integrations/mtpelerin) --- title: Mt Pelerin category: Fiat On-Ramp available: ["C-Chain"] description: Mt Pelerin is a Swiss-regulated crypto gateway offering easy and secure buying, selling, and swapping of cryptocurrencies in 163 countries with their Bridge Wallet app. logo: /images/mtpelerin.svg developer: Mt Pelerin Group website: https://www.mtpelerin.com/ documentation: https://developers.mtpelerin.com/ --- ## Overview Mt Pelerin is a Swiss-regulated fintech company that provides a crypto-fiat gateway for buying, selling, swapping, and managing cryptocurrencies. Based in Switzerland, it supports 163 countries. Their flagship product, Bridge Wallet, is a mobile app for investing in and managing crypto assets with full self-custody. Mt Pelerin supports Avalanche along with major cryptocurrencies and stablecoins, providing on-ramp, off-ramp, and swap functionality. It offers both consumer-facing solutions and developer APIs for integrating crypto services into applications. ## Features - **Fiat On-Ramp**: Buy cryptocurrencies including AVAX with bank transfers, credit cards, and other payment methods. - **Fiat Off-Ramp**: Cash out crypto back to bank accounts in 163 countries with competitive rates. - **Crypto Swaps**: Exchange between different cryptocurrencies and across multiple blockchain networks. - **Cross-Chain Bridge**: Move assets between different blockchain networks. - **Bridge Wallet App**: Mobile app for managing crypto investments with full self-custody. 
- **Multiple Payment Methods**: Bank transfers, credit cards, debit cards, and local payment options. - **Global Coverage**: Support for 163 countries with localized payment methods and currencies. - **Swiss Regulation**: Fully regulated by Swiss financial authorities providing trust and security. - **Low Fees**: Competitive fee structure with zero fees for users holding 50+ MPS tokens. - **No KYC Up to Limits**: Certain transaction limits available without identity verification. - **Lightning Network**: Support for Bitcoin Lightning Network for instant, low-cost transactions. - **Multi-Chain Support**: Support for Bitcoin, Ethereum, Avalanche, and other major blockchain networks. - **Developer APIs**: Integration APIs for embedding Mt Pelerin's services into applications and platforms. - **White-Label Solutions**: Customizable integration options for businesses. - **Customer Service**: Responsive customer support. ## Getting Started 1. **Visit Mt Pelerin's Developer Portal**: Access the developer documentation at [developers.mtpelerin.com](https://developers.mtpelerin.com/). 2. **Request Integration Key**: Contact Mt Pelerin at [email protected] to request a unique integration key. Provide the URL where you plan to integrate the service. 3. **Choose Integration Type**: - **Widget Integration**: Embed Mt Pelerin's on-ramp widget directly into your application - **API Integration**: Build custom solutions using Mt Pelerin's REST APIs - **Bridge Protocol**: Integrate with the Bridge Protocol for on/off-ramp services 4. **Configure Services**: By default, integration keys activate bank transfer on-ramps, off-ramps, and crypto swaps. 5. **Enable Card Payments** (Optional): To activate purchases via credit/debit cards: - Complete Know Your Business (KYB) verification process - Sign the integration agreement with Mt Pelerin - Card payment functionality will be enabled after approval 6. 
**Test Integration**: Use Mt Pelerin's testing environment to verify your integration works correctly. 7. **Set Up Webhooks**: Configure webhook endpoints to receive notifications about transaction status changes. 8. **Go Live**: After testing, activate your production integration and start offering crypto services to your users. ## Avalanche Support Mt Pelerin supports AVAX and USDC on the Avalanche C-Chain, enabling users to buy, sell, and swap Avalanche-based assets easily. Users can purchase AVAX with fiat currencies through bank transfers or cards, swap between AVAX and other cryptocurrencies, and cash out AVAX back to their bank accounts in 163 countries. ## Documentation For more details, visit: - [Mt Pelerin Developer Portal](https://developers.mtpelerin.com/) - [Integration Getting Started Guide](https://developers.mtpelerin.com/integration-guides/getting-started) - [Bridge Protocol Documentation](https://developers.mtpelerin.com/bridge-protocol) - [API Reference](https://developers.mtpelerin.com/api-reference) ## Bridge Wallet App Mt Pelerin's Bridge Wallet is a mobile application that provides: - **Self-Custody**: Full control over your crypto with self-custodial wallet architecture - **Buy and Sell**: Integrated on-ramp and off-ramp directly within the app - **Multi-Chain Support**: Manage assets across Bitcoin, Ethereum, Avalanche, and other networks - **Swap Functionality**: Exchange cryptocurrencies within the app - **Lightning Network**: Send and receive Bitcoin via Lightning for instant transfers - **Portfolio Management**: Track all your crypto investments in one place - **Security Features**: Biometric authentication, secure key storage, and backup options Available on iOS and Android app stores. ## Use Cases on Avalanche **Cryptocurrency Wallets**: Integrate Mt Pelerin's on-ramp and off-ramp to allow users to buy AVAX and cash out directly from wallet apps. 
**DeFi Platforms**: Provide users with easy fiat entry points to acquire AVAX and stablecoins for DeFi protocols on Avalanche. **NFT Marketplaces**: Enable direct purchase of AVAX for buying NFTs with fiat payment methods. **dApp Integrations**: Add fiat on-ramp functionality to decentralized applications, reducing friction for new users. **Exchange Platforms**: Use Mt Pelerin as a fiat gateway for your exchange to enable crypto purchases and sales. **GameFi Applications**: Allow players to purchase in-game tokens on Avalanche using bank transfers or cards. **Payment Solutions**: Build payment applications that leverage Mt Pelerin's global off-ramp capabilities for 163 countries. ## Pricing Mt Pelerin offers transparent and competitive pricing: - **Transaction Fees**: Competitive percentage-based fees on buy, sell, and swap transactions - **Zero Fees**: Users holding 50+ MPS (Mt Pelerin Shares) tokens enjoy zero transaction fees - **Payment Method Fees**: Variable fees based on payment method (bank transfer, credit card, etc.) - **No Hidden Charges**: Transparent pricing with all fees displayed upfront - **Volume Discounts**: Reduced fees for high-volume users and integrations - **Integration Fees**: Custom pricing for business integrations and white-label solutions For detailed pricing and enterprise arrangements, contact Mt Pelerin's partnership team. 
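For developers pursuing the widget integration route described under Getting Started, embedding typically means pointing an iframe at a widget URL that carries your configuration. The sketch below builds such a URL; the parameter names (`type`, `tab`, `bdc`, `net`, `addr`) are illustrative assumptions, so confirm the exact query parameters, and how to pass your integration key, in the Mt Pelerin developer documentation before going live.

```javascript
// Sketch of a Mt Pelerin widget URL builder for an iframe embed.
// NOTE: the query parameter names below (type, tab, bdc, net, addr)
// are illustrative assumptions, not confirmed API; check the
// Mt Pelerin developer docs for the exact names.
const buildWidgetUrl = ({
  tab = "buy",                     // "buy", "sell", or "swap"
  crypto = "AVAX",                 // crypto asset to receive
  network = "avalanche_mainnet",   // target network
  address,                         // optional destination address to prefill
} = {}) => {
  const url = new URL("https://widget.mtpelerin.com/");
  url.searchParams.set("type", "web");
  url.searchParams.set("tab", tab);
  url.searchParams.set("bdc", crypto);
  url.searchParams.set("net", network);
  if (address) url.searchParams.set("addr", address);
  return url.toString();
};

console.log(buildWidgetUrl({ tab: "buy", crypto: "AVAX" }));
```

The returned URL can then be set as the `src` of an iframe in your application.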
## MPS Token Benefits Mt Pelerin Shares (MPS) is Mt Pelerin's utility token offering benefits to holders: - **Zero Transaction Fees**: Hold 50+ MPS tokens to enjoy zero fees on all transactions - **Loyalty Rewards**: Additional benefits for long-term token holders - **Governance**: Participate in platform governance and decision-making - **Revenue Sharing**: Token holders may receive a share of platform revenues ## Compliance and Security Mt Pelerin maintains strong compliance and security standards: - **Swiss Regulation**: Fully compliant with Swiss financial regulations and supervised by Swiss authorities - **AML/KYC Compliance**: Anti-money laundering and know-your-customer processes meet Swiss and international standards - **Data Protection**: GDPR compliant with strong data privacy protections - **Security Infrastructure**: Multi-layer security architecture protecting user funds and data - **Regular Audits**: Ongoing compliance audits and security assessments - **Transparent Operations**: Based in Switzerland with clear legal entity and regulatory oversight # Nansen (/integrations/nansen) --- title: Nansen category: Analytics & Data available: ["C-Chain"] description: "Nansen provides blockchain analytics with smart money tracking, wallet labeling, and on-chain insights for the Avalanche ecosystem." logo: /images/nansen.png developer: Nansen website: https://www.nansen.ai/ documentation: https://docs.nansen.ai/ --- ## Overview Nansen is a blockchain analytics platform that combines on-chain data with millions of wallet labels to provide actionable insights for the Avalanche ecosystem. It offers real-time analytics, smart money tracking, and dashboards for understanding market trends and identifying opportunities. 
# AuditAgent (/integrations/nethermind-auditagent)

---
title: AuditAgent
category: Developer Tooling
available: ["C-Chain", "All Avalanche L1s"]
description: AuditAgent is an AI-powered smart contract security agent that proactively detects vulnerabilities and provides continuous monitoring for Solidity projects.
logo: /images/auditagent.png
developer: Nethermind
website: https://auditagent.nethermind.io/
documentation: https://docs.auditagent.nethermind.io/
---

## Overview

AuditAgent is an autonomous, AI-driven security platform from **Nethermind** that helps developers discover and fix vulnerabilities in their smart contracts *before* they go live. It uses machine-learning models, symbolic execution, and a continuously updated knowledge base of exploits to deliver rapid, actionable insights.

## Features

- **AI-Driven Vulnerability Detection** – Combines static analysis, dynamic testing, and large-language-model reasoning to identify re-entrancy, arithmetic errors, access-control flaws, and more.
- **Continuous Monitoring** – Watches repositories and deployed addresses, rescanning automatically whenever code changes or new bytecode is detected.
- **Human-Readable Reports** – Generates detailed findings with severity classifications, PoC transactions, and clear remediation guidance.
- **CI/CD Integrations** – Native GitHub Actions workflow and REST API let teams fail builds on new critical issues and gate deployments behind security checks.
- **Multi-Chain Support** – Optimized for Avalanche's C-Chain and any EVM-compatible Layer 1.

## Getting Started

1. **Sign Up / Log In** – Visit the [AuditAgent dashboard](https://auditagent.nethermind.io/) and authenticate with GitHub, GitLab, or email.
2. **Create a Project** – Point AuditAgent at a public repo, upload Solidity sources, or paste an address to analyze deployed bytecode.
3. **Run Your First Scan** – Click *Start Scan* and wait a few minutes while AuditAgent performs AI-backed analysis of your codebase.
4. 
**Review Findings** – Examine the vulnerability list, severity breakdown, and remediation tips. Export the report as JSON, PDF, or SARIF.
5. **Automate** – Add AuditAgent to your pipeline using the provided GitHub Action or REST API for on-push security gates.

## Documentation

For full API reference, configuration options, and CI/CD examples, visit the [AuditAgent Docs](https://docs.auditagent.nethermind.io/).

## Use Cases

- **Pre-Audit Preparation** – Catch low-hanging issues early and reduce the cost and turnaround time of formal audits.
- **Ongoing Security Monitoring** – Continuously track contract changes post-deployment to guard against new risks introduced by upgrades or dependencies.
- **Developer Education** – Use detailed explanations and code snippets to upskill engineers on secure-coding best practices.
- **Compliance & Reporting** – Export machine-readable SARIF results for governance dashboards and regulatory submissions.

# Nethermind (/integrations/nethermind)

---
title: Nethermind
category: Security Audits
available: ["C-Chain", "All Avalanche L1s"]
description: "Nethermind offers high-quality security audits with deep Ethereum expertise and a 20% discount for Avalanche ecosystem projects."
logo: /images/nethermind.png
developer: Nethermind
website: https://nethermind.io/
---

## Overview

Nethermind is a blockchain development company offering security audits for projects building on Avalanche. With deep expertise in Ethereum and EVM-compatible chains, their team provides security assessments informed by their experience as both security researchers and blockchain developers. Avalanche ecosystem projects can get a 20% discount.

## Features

- **Smart Contract Audits**: Thorough code reviews and vulnerability assessments.
- **Protocol Security**: Analysis of protocol design and implementation.
- **20% Ecosystem Discount**: Referral discount for Avalanche ecosystem projects.
- **Developer Perspective**: Security insights informed by active blockchain development experience. - **EVM Expertise**: Deep understanding of EVM internals and security implications. - **Ethereum Background**: Extensive experience with Ethereum and EVM-compatible chains. - **Client Implementation Experience**: Unique insights from building Ethereum clients. ## Getting Started 1. **Initial Contact**: Reach out through their website to discuss your audit requirements. 2. **Mention Ecosystem**: Reference the Avalanche ecosystem for the available referral discount. 3. **Audit Process**: - Scope definition and planning - Code review - Vulnerability identification and classification - Detailed remediation recommendations 4. **Report Delivery**: Receive a detailed audit report with findings and security guidance. 5. **Implementation Support**: Optional assistance with addressing identified vulnerabilities. ## Use Cases - **EVM-Based Projects**: Benefit from their deep EVM expertise. - **Layer 2 Solutions**: Security assessment of layer 2 implementations. - **Cross-Chain Applications**: Review of bridge and cross-chain mechanisms. - **DeFi Protocols**: Thorough evaluation of financial smart contracts. - **Avalanche Ecosystem Projects**: Access to quality audits with ecosystem discount. # Nexera (/integrations/nexera) --- title: Nexera category: KYC / Identity Verification available: ["C-Chain"] description: "Nexera provides flexible accreditation and decentralized identity (DID) solutions for compliant blockchain applications." logo: /images/nexera.jpg developer: Nexera website: https://www.nexera.network/ documentation: https://docs.nexera.network/ --- ## Overview Nexera (formerly AllianceBlock) offers identity and compliance solutions for blockchain applications, including flexible accreditation verification and decentralized identity (DID) management. 
The platform enables projects to implement customizable compliance frameworks while maintaining user privacy through decentralized identity infrastructure. ## Features - **Decentralized Identity (DID)**: Self-sovereign identity management giving users control over their credentials. - **Flexible Accreditation**: Customizable verification frameworks for different investor types and compliance requirements. - **Credential Management**: Reusable verification credentials that can be shared across platforms. - **Privacy-Preserving**: Verify accreditation status without exposing sensitive personal information. - **On-Chain Verification**: Smart contract integration for automated compliance checks. - **Multi-Jurisdiction Support**: Adaptable to different regulatory frameworks globally. - **Modular Architecture**: Flexible implementation options for various use cases. ## Documentation For more information, visit the [Nexera Documentation](https://docs.nexera.network/). ## Use Cases - **Token Offerings**: Verify investor accreditation for compliant token sales. - **DeFi Compliance**: Enable permissioned DeFi with flexible accreditation requirements. - **Decentralized Identity**: Implement DID solutions for privacy-preserving verification. - **Institutional Onboarding**: Streamline verification for institutional participants. - **Cross-Platform Credentials**: Enable reusable identity across multiple applications. # Nonco (/integrations/nonco) --- title: Nonco category: Payments available: ["C-Chain"] description: Nonco is an institutional digital asset trading firm backed by VanEck that has launched an institutional-grade FX trading protocol on Avalanche, bringing real-world FX liquidity to stablecoin markets. 
logo: /images/nonco.png
developer: Nonco
website: https://www.nonco.com/
documentation: https://www.nonco.com/protocol
---

## Overview

Nonco is an institutional digital asset trading firm backed by VanEck that has built an institutional-grade foreign exchange (FX) trading protocol on Avalanche. The protocol brings real-world FX liquidity to stablecoin markets, starting with the USDMXN (US Dollar to Mexican Peso) pair and expanding to additional currency pairs over time.

Using a request-for-quote (RFQ) model, Nonco connects traditional FX liquidity providers with on-chain users, offering competitive pricing, atomic settlement, and integration with banks and stablecoin issuers.

## Features

- **Institutional-Grade FX Protocol**: Professional foreign exchange trading infrastructure built on blockchain.
- **Real-World Liquidity**: Access to deep liquidity from traditional FX market makers and banks.
- **Request-for-Quote (RFQ) Model**: Competitive pricing through quote requests to multiple liquidity providers.
- **Atomic Settlement**: Instant, trustless settlement of FX trades on-chain.
- **Bank Integration**: Direct connections to banking partners for fiat settlement.
- **Stablecoin Issuer Partnerships**: Integration with major stablecoin issuers for direct currency conversion.
- **USDMXN Focus**: Initial launch with Mexican Peso pairs, serving a high-demand corridor.
- **Multi-Currency Expansion**: Roadmap to support additional fiat currency pairs.
- **Competitive Pricing**: Institutional-quality pricing competitive with traditional FX markets.
- **Regulatory Compliance**: Built with regulatory requirements and institutional standards in mind.
- **Avalanche Native**: Purpose-built on Avalanche for speed, cost-efficiency, and institutional adoption.
- **API Access**: APIs for programmatic trading and integration.
- **Transparent Operations**: On-chain transparency of trades and settlements.
- **VanEck Backed**: Support from leading institutional asset manager VanEck.
## Getting Started

1. **Institutional Onboarding**: Contact Nonco to begin the institutional onboarding process.
2. **Compliance Verification**: Complete institutional KYC/AML and compliance requirements.
3. **Integration Options**: Choose how to access the protocol:
   - **Direct Protocol Access**: Integrate directly with Nonco's smart contracts
   - **API Integration**: Use Nonco's APIs for programmatic trading
   - **White-Label Solutions**: Embed FX functionality into your platform
   - **Liquidity Provider**: Become a liquidity provider on the protocol
4. **Testing**: Test FX trading in Nonco's sandbox environment on Avalanche testnet.
5. **Go Live**: Execute live FX trades with real liquidity on Avalanche mainnet.

## Request-for-Quote (RFQ) Model

Nonco's RFQ model provides institutional-quality execution:

**Quote Request**: Users or applications request quotes for specific FX pairs and amounts.

**Competitive Quotes**: Multiple liquidity providers respond with competitive pricing.

**Best Execution**: The user selects the best quote based on price, size, and settlement terms.

**Atomic Settlement**: The trade executes instantly on-chain, with atomic settlement guaranteeing execution.

**On-Chain Record**: Complete transparency, with all trades recorded on the Avalanche blockchain.

This model ensures users receive competitive institutional pricing while maintaining the efficiency and transparency of blockchain settlement.

## Avalanche Integration

Nonco chose Avalanche for their FX protocol for several reasons:

**High Performance**: Sub-second finality enables real-time FX trading.

**Low Costs**: Minimal transaction fees make frequent trading economically viable.

**Regulatory Compatibility**: Avalanche's architecture supports compliance and institutional requirements.

**Subnet Capability**: Potential for a dedicated FX subnet with customized parameters.

**Network Effect**: Other institutional applications are already deployed on Avalanche.
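The RFQ flow described above can be sketched in a few lines of JavaScript. Everything below is a hypothetical illustration: the provider objects, the `quote` interface, and the prices are invented for the example and do not reflect Nonco's actual protocol interfaces.

```javascript
// Hypothetical sketch of an RFQ round: collect quotes from several
// liquidity providers, then select the best one. All names, interfaces,
// and prices are invented for illustration; this is not Nonco's API.
const requestQuotes = (providers, pair, amount) =>
  providers.map((p) => ({
    provider: p.name,
    pair,
    amount,
    price: p.quote(pair, amount),
  }));

// Best execution for a USD seller: the highest MXN-per-USD price wins.
const bestQuote = (quotes) =>
  quotes.reduce((best, q) => (q.price > best.price ? q : best));

// Mock liquidity providers quoting USDMXN.
const providers = [
  { name: "bank-a", quote: () => 17.08 },
  { name: "market-maker-b", quote: () => 17.12 },
  { name: "issuer-c", quote: () => 17.05 },
];

const quotes = requestQuotes(providers, "USDMXN", 100_000);
const winner = bestQuote(quotes);
console.log(`Best quote: ${winner.provider} at ${winner.price}`);
// prints "Best quote: market-maker-b at 17.12"

// In the live protocol the selected quote would then settle atomically
// on-chain, exchanging both legs of the trade in a single transaction.
```

The key property the RFQ model adds over an order book is that pricing competition happens off-chain among professional providers, while only the final, selected trade settles on-chain.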
## Use Cases **Cross-Border Payments**: Convert between stablecoins and local currencies for international transfers. **Remittances**: Enable efficient remittance corridors with instant FX conversion. **Treasury Management**: Corporations managing multi-currency treasuries with stablecoins. **Exchanges**: Cryptocurrency exchanges offering fiat on/off-ramps with competitive FX rates. **Payment Processors**: Fintech platforms processing international payments. **Neobanks**: Digital banks offering multi-currency accounts and FX services. **DeFi Protocols**: DeFi applications needing access to real-world FX liquidity. **Market Makers**: Trading firms providing liquidity across fiat and stablecoin markets. ## USDMXN Focus Nonco's initial focus on the USDMXN pair is strategic: **High-Demand Corridor**: US-Mexico is one of the world's largest remittance corridors. **Underserved Market**: Traditional FX markets often have high spreads for MXN. **Growing Stablecoin Adoption**: Strong demand for stablecoin-based solutions in Latin America. **Regulatory Clarity**: Increasing regulatory clarity in both jurisdictions. **Market Opportunity**: Billions in annual transaction volume between USD and MXN. The success with USDMXN will pave the way for additional currency pairs. ## Liquidity Providers Nonco connects multiple types of liquidity providers: **Traditional Banks**: Banking partners providing fiat FX liquidity. **Market Makers**: Professional trading firms quoting competitive prices. **Stablecoin Issuers**: Direct integration with stablecoin issuers for minting/redemption. **Institutional Traders**: Hedge funds and prop trading firms providing liquidity. **Treasury Operations**: Corporate treasuries participating as liquidity providers. This diverse liquidity base ensures competitive pricing and deep markets. 
## Technology Infrastructure - **Smart Contracts**: Audited smart contracts on Avalanche for trustless execution - **RFQ Engine**: High-performance quote request and aggregation system - **Settlement Layer**: Atomic settlement ensuring simultaneous asset exchange - **Oracle Integration**: Price feeds for reference rates and validation - **API Gateway**: RESTful APIs for programmatic access - **WebSocket Feeds**: Real-time market data and quote streams - **Admin Dashboard**: Interface for managing trades and monitoring activity - **Compliance Tools**: Built-in tools for transaction monitoring and reporting ## Institutional Standards Nonco meets institutional requirements: - **Price Discovery**: Transparent, competitive price discovery through RFQ - **Best Execution**: Users can select from multiple quotes ensuring best execution - **Audit Trail**: Complete on-chain audit trail of all transactions - **Reporting**: Comprehensive reporting for compliance and accounting - **Custody Integration**: Compatible with institutional custody providers - **Risk Management**: Tools for managing counterparty and settlement risk - **SLAs**: Service level agreements for uptime and performance ## VanEck Partnership VanEck's backing provides: **Institutional Credibility**: VanEck's reputation adds trust with institutional clients. **Regulatory Expertise**: Access to VanEck's regulatory knowledge and relationships. **Network Access**: Connections to VanEck's institutional network. **Capital Support**: Financial backing for protocol development and growth. 
## Regulatory Approach

Nonco operates with a focus on compliance:

- **Institutional KYC**: Comprehensive know-your-customer for all participants
- **Transaction Monitoring**: Real-time monitoring for suspicious activity
- **Sanctions Screening**: Screening against global sanctions lists
- **Reporting**: Regulatory reporting capabilities for relevant jurisdictions
- **Licensing**: Operating with the necessary licenses for FX and digital asset operations
- **Bank Partnerships**: Collaborating with regulated banking partners

## Roadmap

Nonco's expansion plans include:

- **Additional Currency Pairs**: Expansion beyond USDMXN to other major pairs
- **Increased Liquidity**: Onboarding additional liquidity providers
- **Enhanced Features**: Advanced trading features for institutional users
- **Geographic Expansion**: Support for additional regional corridors
- **DeFi Integration**: Deeper integration with DeFi protocols
- **Derivative Products**: Potential FX derivatives and hedging instruments

## Pricing

Nonco offers institutional pricing:

- **Competitive Spreads**: Tight bid-ask spreads competitive with traditional FX
- **Transparent Fees**: Clear fee structure for all participants
- **Volume Discounts**: Reduced fees for high-volume traders
- **Liquidity Provider Incentives**: Rewards for liquidity provision
- **Enterprise Solutions**: Custom pricing for large institutional clients

Contact Nonco for specific pricing based on trading volume and requirements.

# Nuggets (/integrations/nuggets)

---
title: Nuggets
category: KYC / Identity Verification
available: ["C-Chain"]
description: "Nuggets offers privacy-focused KYC verification with self-sovereign identity management for blockchain applications."
logo: /images/nuggets.png
developer: Nuggets
website: https://nuggets.life/
documentation: https://nuggets.life/
---

## Overview

Nuggets is a self-sovereign identity and payment platform that provides privacy-focused KYC verification services.
The platform enables users to verify their identity once and securely store their personal data, which can then be shared with verified businesses without repeated verification processes. ## Features - **Self-Sovereign Identity**: Users maintain full control over their personal data and credentials. - **Privacy-Preserving**: Personal data is encrypted and stored securely on the user's device. - **Biometric Authentication**: Secure access using biometric verification. - **Reusable Credentials**: Verify once and share credentials with multiple platforms. - **Zero-Knowledge Proofs**: Share verification status without revealing underlying personal data. - **Payment Integration**: Combines identity verification with secure payment capabilities. - **GDPR Compliant**: Built with European data protection regulations in mind. ## Documentation For more information, visit the [Nuggets website](https://nuggets.life/). ## Use Cases - **Privacy-Preserving KYC**: Verify user identity while maintaining maximum privacy. - **Reusable Identity**: Enable users to access multiple platforms with a single verification. - **Compliance**: Meet regulatory requirements without compromising user privacy. - **Data Protection**: Ensure user data remains under user control. # Octane (/integrations/octane) --- title: Octane category: Security Tooling available: ["C-Chain", "All Avalanche L1s"] description: Octane's AI-powered security platform provides continuous vulnerability detection and deep codebase analysis integrated directly into developer workflows. logo: /images/octane.jpg developer: Octane website: https://octane.security/ documentation: https://docs.octane.security/introduction --- ## Overview Octane is an AI-powered continuous security platform for smart contracts. It integrates directly into developer workflows to analyze code, simulate attack paths, and detect deep logic vulnerabilities before deployment. 
Designed for high-velocity teams, Octane provides ongoing security coverage without disrupting existing development processes. ## Features - **AI Code Analysis**: Automatically detects high-impact vulnerabilities in code. - **Cross-Contract Expertise**: Analyzes interactions across multi-contract systems. - **Broad Coverage**: 50+ vulnerability classes, including reentrancy, MEV, and oracle risks. - **Exploit Scenarios**: Describes attacker–victim behavior and how the issue is exploitable. - **CI/CD Integration**: Automated analysis on every pull request via native GitHub App. - **Avalanche-Tuned Models**: Detection tuned using Avalanche-specific documentation. - **Octane Security Program**: Eligible teams may receive Octane security credits. ## Getting Started 1. Request access or schedule a demo [here](https://www.octane.security/schedule-demo) 2. Connect your repo with our [quickstart guide](https://www.octane.security/post/how-to-get-the-most-out-of-octane) 3. Run an initial scan to generate findings in minutes 4. Enable CI/CD integration to secure all future PRs automatically 5. Eligible Avalanche builders may receive Octane credits during onboarding ## Use Cases - **Continuous Security**: Ongoing analysis to maintain a secure codebase. - **CI/CD Pipeline Protection**: Automatically review every PR to catch issues in real time. - **Pre-Deployment Checks**: Run deep analysis before testnet or mainnet releases. - **Advanced Protocol Analysis**: Suitable for DeFi, stablecoins, gaming, and teams building high-value systems that require institutional-grade security. # OKX OS (/integrations/okxos) --- title: OKX OS category: Wallets and Account Abstraction available: ["C-Chain"] description: Onchain infrastructure suite for building and scaling applications across 100+ chains. 
logo: /images/okx.png developer: OKX website: https://www.okx.com/web3/build documentation: https://www.okx.com/web3/build/docs/waas/okx-waas-what-is-waas --- ## Overview OKX OS provides developers with tools, SDKs, and APIs to build and scale applications across over 100 chains. It uses the same technology that powers the OKX Wallet, serving millions of users and processing more than 400 million daily API calls. ## Features - **One-stop solution**: Tools and APIs for building onchain experiences across any chain — wallets, games, exchanges, and collections. - **Multi-chain support and liquidity aggregation**: Access to over 100 chains with aggregated liquidity across multiple networks, DEXs, and major marketplaces. - **Bitcoin-friendly**: Tools for Inscriptions, Ordinals, Runes, Fractal Bitcoin, and other emerging Bitcoin-based innovations. - **Security**: Uses OKX's audited security measures and processes. - **Proven scalability**: Battle-tested infrastructure serving millions of users and handling over 400 million daily API calls. ## Getting Started Developers can start using OKX OS for free by visiting the [OKX Build Portal](https://www.okx.com/web3/build). The portal has the tools, SDKs, and APIs needed to build and scale applications across multiple chains. ## Documentation For detailed documentation and guides, please visit the [OKX OS Documentation](https://www.okx.com/web3/build). ## Use Cases - Building multi-chain wallets with transaction management across chains. - Integrating cross-chain swaps and liquidity aggregation into dApps. - Creating NFT marketplaces with real-time data and marketplace integrations. - Developing blockchain games with in-game asset management across 100+ chains. - Pulling onchain data through APIs for analytics and dashboards. ### Building an On-Chain Data Dashboard for Avalanche C-Chain This guide walks you through setting up a dashboard to track wallet assets and transactions on the Avalanche C-Chain. 
You'll use OKX OS's Wallet API to fetch and display this data.

#### Prerequisites

- [Node.js](https://nodejs.org/) installed on your system
- Basic understanding of JavaScript and async/await
- An OKX Developer account

#### Setting Up Your Development Environment

1. **Sign Up on the Developer Portal**: Create an account on the [OKX Developer Portal](https://www.okx.com/web3/build/dev-portal).
2. **Create a New Project**: Click on the `Create new project` button and fill in the required details. Once the project is created, you will receive a `Project ID`. Keep it for future reference.
3. **Generate API Keys**: Once your project is created, click the `Manage` and then `Create API key` buttons to create a new API key. Fill in the required details and click `Create`. You will receive an `API Key` and `API Secret`. Keep your `API Key`, `API Secret`, and `Passphrase` for future use.

   > **Note**: Keep your Project ID, API Key, Secret, and Passphrase secure by storing them in environment variables or a secure storage solution. Never share these credentials publicly or commit them to your codebase.

4. **Initialize a New Project**: Run the following commands to create a new directory and initialize a Node.js project with default settings and required dependencies:

```bash
mkdir avalanche-dashboard
cd avalanche-dashboard
npm init -y
npm install crypto-js
```

Create three script files:

```bash
touch createAccount.js getAssets.js getTx.js
```

## Create Wallet Account

You'll start by creating an account to track your Avalanche addresses with a simple Node.js script that interacts with the OKX Wallet API.
In the `createAccount.js` file: ```javascript const CryptoJS = require("crypto-js"); const createWallet = async () => { // Generate timestamp in ISO format const timestamp = new Date().toISOString(); const method = "POST"; const path = "/api/v5/wallet/account/create-wallet-account"; // Prepare the body first as we need it for signature const body = { addresses: [ { chainIndex: "43114", address: "0x2eFB50e952580f4ff32D8d2122853432bbF2E204", }, // You can add more addresses and chain indexes // { // chainIndex: "1", // address: "0x2eFB50e952580f4ff32D8d2122853432bbF2E204", // }, // { // chainIndex: "43114", // address: "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045", // }, ], }; // Generate signature // timestamp + method + path + body const signString = timestamp + method + path + JSON.stringify(body); const signature = CryptoJS.enc.Base64.stringify( CryptoJS.HmacSHA256(signString, "YOUR API SECRET KEY"), ); const response = await fetch( "https://www.okx.com/api/v5/wallet/account/create-wallet-account", { method: "POST", headers: { "Content-Type": "application/json", "OK-ACCESS-PROJECT": "YOUR PROJECT ID", "OK-ACCESS-KEY": "YOUR API KEY", "OK-ACCESS-SIGN": signature, "OK-ACCESS-PASSPHRASE": "YOUR API PASSPHRASE", "OK-ACCESS-TIMESTAMP": timestamp, }, body: JSON.stringify(body), }, ); const data = await response.json(); return data; }; // Example usage: createWallet() .then((response) => console.log("Success:", response)) .catch((error) => console.error("Error:", error)); ``` Before running the script, replace these placeholder values with your actual credentials: ```javascript "YOUR API SECRET KEY" → Your API Secret "YOUR PROJECT ID" → Your Project ID "YOUR API KEY" → Your API Key "YOUR API PASSPHRASE" → Your Passphrase ``` Run your script: ```bash node createAccount.js ``` You should see a success message with the response data if the account is created successfully. 
For example,

```bash
Success: {
  code: '0',
  message: 'success',
  data: { accountId: 'Y7489xxxx-xxxx-xxxx-xxxx-xxxxxxaa652c' }
}
```

## Check Wallet Assets

Now that you have an account, you can fetch the token balances. This script will show you all tokens held by your tracked addresses.

Copy this code to your `getAssets.js` file:

```javascript
const CryptoJS = require("crypto-js");

const getRequestUrl = (baseUrl, path, params = null) => {
  const url = new URL(baseUrl + path);
  if (params) {
    Object.keys(params).forEach((key) =>
      url.searchParams.append(key, params[key]),
    );
  }
  return url.toString();
};

const apiBaseUrl = "https://www.okx.com";

const getAssetsParams = {
  accountId: "ACCOUNT ID FROM PREVIOUS STEP", // Replace with your accountId
};

const timestamp = new Date().toISOString();
const method = "GET";
const path = "/api/v5/wallet/asset/wallet-all-token-balances";
const queryString = `?accountId=${getAssetsParams.accountId}`;

// Generate signature
const signString = timestamp + method + path + queryString;
const signature = CryptoJS.enc.Base64.stringify(
  CryptoJS.HmacSHA256(signString, "YOUR API SECRET KEY"),
);

const headersParams = {
  "Content-Type": "application/json",
  "OK-ACCESS-PROJECT": "YOUR PROJECT ID",
  "OK-ACCESS-KEY": "YOUR API KEY",
  "OK-ACCESS-SIGN": signature,
  "OK-ACCESS-PASSPHRASE": "YOUR API PASSPHRASE",
  "OK-ACCESS-TIMESTAMP": timestamp,
};

const getAssetsData = async () => {
  const apiRequestUrl = getRequestUrl(apiBaseUrl, path, getAssetsParams);
  const response = await fetch(apiRequestUrl, {
    method: "GET",
    headers: headersParams,
  });
  return response.json();
};

// Fetch and display the assets
getAssetsData()
  .then(({ data }) => {
    console.log("\n=== Wallet Assets ===\n");
    data.forEach((wallet) => {
      // Convert timestamp to readable date
      const date = new Date(parseInt(wallet.timeStamp));
      console.log(`Last Updated: ${date.toLocaleString()}\n`);
      console.log("Token Assets:");
      wallet.tokenAssets.forEach((token) => {
        console.log(`  Token: ${token.symbol}
  Chain: 
${token.chainIndex} Balance: ${token.balance}
  -----------------------------`);
      });
    });
  })
  .catch((error) => console.error("Error:", error));
```

Make sure to:

- Update the accountId with the one you received in Step 1
- Replace the API credentials with yours

Run the asset checker:

```bash
node getAssets.js
```

If the request is successful, you should see the assets of the wallet account. For example:

```bash
=== Wallet Assets ===

Last Updated: 10/24/2024, 7:23:20 PM

Token Assets:
  Token: AVAX Chain: 43114 Balance: 882338.9729422927
  -----------------------------
  Token: Sword Chain: 43114 Balance: 100000
  -----------------------------
  Token: ERGC Chain: 43114 Balance: 100000
  -----------------------------
  Token: MILO Chain: 43114 Balance: 500000
  -----------------------------
```

## View Transaction Details

Finally, you can set up transaction viewing. This script provides detailed information about any transaction on the Avalanche C-Chain. In your `getTx.js` file:

```javascript
const CryptoJS = require("crypto-js");

const getRequestUrl = (baseUrl, path, params = null) => {
  const url = new URL(baseUrl + path);
  if (params) {
    Object.keys(params).forEach((key) =>
      url.searchParams.append(key, params[key]),
    );
  }
  return url.toString();
};

const apiBaseUrl = "https://www.okx.com";
const params = {
  txHash: '0xaf54d1cb2c21bed094095bc503ec76128f80c815db8631fd74c6e49781b94bd1',
  chainIndex: '43114'
};

const timestamp = new Date().toISOString();
const method = "GET";
const path = '/api/v5/wallet/post-transaction/transaction-detail-by-txhash';
const queryString = `?txHash=${params.txHash}&chainIndex=${params.chainIndex}`;

const signString = timestamp + method + path + queryString;
const signature = CryptoJS.enc.Base64.stringify(
  CryptoJS.HmacSHA256(signString, "YOUR API SECRET"),
);

const headersParams = {
  "Content-Type": "application/json",
  "OK-ACCESS-PROJECT": "YOUR PROJECT ID",
  "OK-ACCESS-KEY": "YOUR API KEY",
"OK-ACCESS-SIGN": signature, "OK-ACCESS-PASSPHRASE": "YOUR API PASSPHRASE", "OK-ACCESS-TIMESTAMP": timestamp, }; const getTransactionDetailData = async () => { const apiRequestUrl = getRequestUrl(apiBaseUrl, path, params); const response = await fetch(apiRequestUrl, { method: "GET", headers: headersParams, }); return response.json(); }; const formatDate = (timestamp) => { return new Date(parseInt(timestamp)).toLocaleString(); }; const formatGas = (gas) => { return parseFloat(gas).toLocaleString(); }; getTransactionDetailData() .then((response) => { console.log('\n=== Transaction Details ===\n'); if (response.code === "0" && response.data && response.data.length > 0) { const tx = response.data[0]; // Transaction Basic Info console.log('📝 Basic Information'); console.log('------------------'); console.log(`Hash: ${tx.txhash}`); console.log(`Status: ${tx.txStatus.toUpperCase()}`); console.log(`Block: ${formatGas(tx.height)}`); console.log(`Time: ${formatDate(tx.txTime)}`); console.log(`Method ID: ${tx.methodId}`); console.log(`Chain: ${tx.chainIndex} (${tx.symbol})`); // Gas Info console.log('\n⛽ Gas Information'); console.log('----------------'); console.log(`Gas Limit: ${formatGas(tx.gasLimit)}`); console.log(`Gas Used: ${formatGas(tx.gasUsed)}`); console.log(`Gas Price: ${formatGas(tx.gasPrice)} Wei`); console.log(`Nonce: ${tx.nonce}`); // From Address console.log('\n📤 From Address'); console.log('-------------'); tx.fromDetails.forEach(from => { console.log(`Address: ${from.address}`); console.log(`Type: ${from.isContract ? 'Contract' : 'Wallet'}`); }); // To Address console.log('\n📥 To Address'); console.log('-----------'); tx.toDetails.forEach(to => { console.log(`Address: ${to.address}`); console.log(`Type: ${to.isContract ? 
'Contract' : 'Wallet'}`); }); // Token Transfers if (tx.tokenTransferDetails && tx.tokenTransferDetails.length > 0) { console.log('\n🔄 Token Transfers'); console.log('---------------'); tx.tokenTransferDetails.forEach((transfer, index) => { console.log(`\nTransfer #${index + 1}:`); console.log(`Token: ${transfer.symbol}`); console.log(`Amount: ${transfer.amount}`); console.log(`From: ${transfer.from} ${transfer.isFromContract ? '(Contract)' : '(Wallet)'}`); console.log(`To: ${transfer.to} ${transfer.isToContract ? '(Contract)' : '(Wallet)'}`); console.log(`Contract: ${transfer.tokenContractAddress}`); }); } // Internal Transactions (if any) if (tx.internalTransactionDetails && tx.internalTransactionDetails.length > 0) { console.log('\n💱 Internal Transactions'); console.log('--------------------'); tx.internalTransactionDetails.forEach((internal, index) => { console.log(`\nInternal Transfer #${index + 1}:`); console.log(`From: ${internal.from}`); console.log(`To: ${internal.to}`); console.log(`Amount: ${internal.amount} ${tx.symbol}`); console.log(`Status: ${internal.state}`); }); } } else { console.log('Status:', response.code); console.log('Message:', response.msg); console.log('Data:', response.data); } }) .catch(error => console.error('Error:', error)); ``` Update the script with: - Your API credentials - Any transaction hash you want to investigate Check a transaction: ```bash node getTx.js ``` You'll see a detailed breakdown including: - Transaction basics - Gas info - Addresses involved - Token transfers - Internal transactions ## Related APIs The Wallet API is one of four pillars in OKX OS, alongside the [DEX API] for decentralized trading, the [Marketplace API] for NFT functionality, and the [Explorer API] for blockchain data access and analysis. 
[DEX API]: https://www.okx.com/web3/build/docs/waas/dex-introduction
[Marketplace API]: https://www.okx.com/web3/build/docs/waas/marketplace-introduction
[Explorer API]: https://www.oklink.com/docs/en/#introduction

# Olympix (/integrations/olympix)

---
title: "Olympix"
category: ["Security Tooling"]
available: ["C-Chain", "All Avalanche L1s"]
description: "Olympix is an institutional-grade proactive security suite for smart contract developers, providing automated vulnerability detection and testing integrated into developer workflows."
logo: /images/olympix.png
developer: "Olympix.ai"
website: https://olympix.ai
documentation: https://olympix.github.io
---

## Overview

[Olympix](https://www.olympix.ai) is a proactive security suite for Web3, built to identify and prevent vulnerabilities before audits or deployment. Its proprietary architecture (including a custom compiler, intermediate representation (IR), and symbolic execution engine) powers static analysis, unit test generation, mutation testing, and automated POC generation. By integrating directly into developer environments (VS Code, GitHub CI/CD), Olympix helps teams catch vulnerabilities early, strengthen audit readiness, and reduce the time and cost of achieving secure deployment on Avalanche and other EVM-compatible chains.

## Features

- Static Analyzer: Detects critical vulnerabilities early in development using advanced symbolic execution and proprietary detectors.
- Unit Test Generation: Automatically generates test suites to improve coverage and resilience of smart contracts.
- Mutation Testing: Measures the robustness of unit tests by simulating bad commits and highlighting which edge cases go undetected.
- Internal Audit Agent: Generates an audit-style report, complete with POCs to prove the exploitability of findings.
- Developer Integration: Embedded within IDEs and CI/CD for continuous, proactive security.
- Audit Readiness: Hardens test suites (beyond the scope of a typical audit), reduces audit findings by up to 60%, and shortens remediation cycles.
- Precision: 300% better benchmark performance versus open-source alternatives.

## Getting Started

1. Visit [Olympix](https://olympix.ai) to request access or schedule a demo.
2. Install the Olympix VS Code or GitHub CI/CD integration.
3. Run static analysis, unit test generation, mutation testing, and the internal audit agent on your Avalanche-based smart contracts.
4. Review findings and fix issues before audit and deployment.

## Use Cases

1. DeFi Protocols: Secure high-value contracts and mitigate risk before audits.
2. Bridges and Cross-Chain Projects: Identify logic and invariant vulnerabilities across EVM environments.
3. Enterprises and Financial Institutions: Embed compliance-grade security early in tokenization and on-chain deployments.
4. Developers on Avalanche: Use proactive tooling to minimize risk and accelerate secure releases.

## Contact Us

For more information, please visit [Olympix](https://olympix.ai) and [contact](https://www.olympix.ai/get-started-enterprise) us.

# Omniscia (/integrations/omniscia)

---
title: Omniscia
category: Security Audits
available: ["C-Chain", "All Avalanche L1s"]
description: "Omniscia provides quality security audits with efficient processes and competitive rates for projects on Avalanche."
logo: /images/omniscia.jpeg
developer: Omniscia
website: https://omniscia.io/
---

## Overview

Omniscia is a security audit provider for blockchain projects building on Avalanche. Known for efficient processes and competitive rates, Omniscia delivers thorough security reviews with relatively low lead times, making them suitable for projects with time-sensitive deployment schedules.

## Features

- **Smart Contract Audits**: Code reviews and vulnerability assessment.
- **Efficient Process**: Streamlined audit procedures with relatively low lead times.
- **Competitive Pricing**: Accessible rates with potential referral discounts.
- **20% Referral Discount**: A 20% discount is available for Avalanche ecosystem projects.
- **Protocol Security**: Analysis of protocol design and implementation.
- **Practical Remediation**: Actionable guidance for addressing security issues.
- **Verification Services**: Validation of security fixes after implementation.

## Getting Started

1. **Initial Contact**: Reach out through their website to discuss your security needs.
2. **Mention Referral**: Reference the Avalanche ecosystem for potential referral discounts.
3. **Audit Process**:
   - Scope definition and planning
   - Code review
   - Vulnerability identification and classification
   - Documentation of findings
4. **Report Delivery**: Receive a detailed audit report with security recommendations.
5. **Optional Follow-up**: Verification of implemented security improvements.

## Use Cases

- **Time-Sensitive Projects**: Benefit from efficient audit processes.
- **DeFi Applications**: Security assessment of financial smart contracts.
- **NFT Platforms**: Audit of NFT-related contract implementations.
- **Budget-Conscious Projects**: Access to quality audits with competitive pricing.
- **Avalanche Ecosystem Projects**: Potential referral discounts available.

# OnFinality (/integrations/onfinality)

---
title: OnFinality
category: RPC Endpoints
available: ["C-Chain","P-Chain","X-Chain"]
description: Blockchain Infrastructure Made Smarter. OnFinality empowers web3 developers with easy-to-use, reliable, and scalable blockchain infrastructure.
logo: /images/onfinality.png
developer: OnFinality
website: https://onfinality.io
---

## Overview

OnFinality provides web3 developers with reliable and scalable blockchain infrastructure. OnFinality supports many networks across RPC Nodes, Dedicated Nodes, Validators, Data Indexers, and AI Agents.
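In practice, a private OnFinality C-Chain endpoint behaves like any Ethereum JSON-RPC URL. As an illustrative sketch (the placeholder URL and `buildRpcRequest` helper are ours, not OnFinality API surface; copy the real endpoint from your OnFinality API App), querying the latest block height looks like:

```javascript
// Query the latest Avalanche C-Chain block height over JSON-RPC.
// Replace the placeholder with the private endpoint from your API App.
const rpcUrl = "YOUR_ONFINALITY_C_CHAIN_ENDPOINT";

// Build a standard JSON-RPC 2.0 request body
const buildRpcRequest = (method, params = [], id = 1) =>
  JSON.stringify({ jsonrpc: "2.0", id, method, params });

const getBlockNumber = async () => {
  const response = await fetch(rpcUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: buildRpcRequest("eth_blockNumber"),
  });
  const { result } = await response.json();
  return parseInt(result, 16); // result is a hex-encoded quantity, e.g. "0xff"
};

getBlockNumber()
  .then((height) => console.log(`C-Chain height: ${height}`))
  .catch((err) => console.error("Request failed:", err.message));
```

The same request shape works for any standard `eth_*` method, and for P-Chain or X-Chain endpoints with their respective APIs.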
## Features

- **Generous Free Tier**: Try for free with the [free plan](https://onfinality.io/en/pricing)
- **Avalanche Native**: Complete set of Avalanche endpoints including all chains, web sockets, and ETH + AVAX
- **Low Latency**: Avalanche RPC endpoints served by a global fleet of nodes for low latency
- **Enterprise Nodes**: Dedicated Avalanche nodes and [load balanced clusters](https://blog.onfinality.io/onfinality-enterprise-nodes-series-blockchain-load-balancer/) for the most demanding enterprise needs like bridges and exchanges.
- **Data Indexers**: Deploy Subgraph and SubQuery data indexers on [OnFinality Indexing](https://indexing.onfinality.io/login)
- **Endpoint Security**: Secure your private endpoint to only accept requests from specific origins or IP addresses.
- **Analytics and Insights**: Gain insights into how your application is performing and where your users are.

## Getting Started

1. **Sign Up**: Create an account with [OnFinality](https://app.onfinality.io/signup)
2. **Create your API App**: After signing in, create an API App and select the Avalanche network
3. **Copy your Private Avalanche Endpoint**: Copy your C-Chain, P-Chain, or X-Chain endpoints. Web Socket is also supported.
4. **Monitor and Optimize**: Utilize OnFinality's Network Insights to track your app's performance and make changes, with analytics down to the method.

## Documentation

[OnFinality Documentation](https://documentation.onfinality.io/support/api-service)

## Use Cases

- **Bridges** requiring high availability connections between Avalanche and other blockchain networks.
- **Explorers and Scanners** who need full archive access with high throughput for historical backfills.
- **Decentralized Applications (dApps)** serving global Avalanche users 24/7
- **Exchanges and Custodial Services** who know that downtime and delays cost their customers

# Openfort (/integrations/openfort)

---
title: Openfort
category: Wallets and Account Abstraction
available: ["C-Chain", "All Avalanche L1s"]
description: "Create secure embedded wallets with easy authentication flows and gas sponsorship capabilities."
logo: /images/openfort.png
developer: Openfort
website: https://openfort.io
documentation: https://www.openfort.io/docs
---

## Overview

[Openfort](https://openfort.io) is an open-source alternative to wallet infrastructure solutions. Its core offerings (Openfort Kit, Invisible Wallet, and Cross-app Wallet) enable rapid integration of wallet functionality, intuitive onboarding, and flexible user journeys for any application or ecosystem.

### [Openfort Kit](https://www.openfort.io/docs/products/kit/react/quickstart)

Openfort Kit is a developer toolkit that streamlines the integration of wallet authentication and connectivity into any web application. It provides:

- **Plug-and-play UI Components**: Prebuilt, customizable authentication and wallet connection flows that can be deployed in minutes, not weeks, with support for major authentication providers and wallet connectors.
- **Developer Experience**: TypeScript-ready, ecosystem-standard libraries (wagmi, viem), and easy integration with frameworks like React, Next.js, and Create React App.
- **Full Customization**: Predesigned themes or the ability to fully tailor the UI to match your brand.

### [Invisible Wallet](https://www.openfort.io/docs/products/embedded-wallet/javascript)

Invisible Wallet enables applications to onboard users without requiring them to interact directly with traditional wallet interfaces. Features include:

- **Embedded Non-custodial Signer**: Secure, self-custodied wallet creation and signing for users, with no need for browser extensions or external apps.
- **Funding Support**: Users can fund their newly created wallets via traditional onramp methods or by depositing crypto.
- **Key Export**: Users can always export private keys, allowing them to take the wallet with them.

### [Cross-app Wallet](https://www.openfort.io/docs/products/cross-app-wallet/setup)

The Cross-app Wallet empowers ecosystems and platforms to offer branded, interoperable wallets that work across multiple apps and services. Key capabilities:

- **Ecosystem SDK**: Build your own wallet SDK that can be integrated across your suite of apps, ensuring users have a consistent identity and asset management experience everywhere.
- **No App or Extension Required**: Users can create and use wallets instantly via iFrames or embedded flows, compatible with any EVM chain.
- **Modern Standards**: Supports the latest Ethereum standards (EIP-1193, 6963, 7702, 4337, and more) for broad compatibility and future-proofing.

## Getting Started

### 1. Installation

```bash
# Install dependencies
npm install @openfort/openfort-js @openfort/openfort-node
npm install ethers viem
```

### 2. Configuration

```typescript
// Initialize Openfort with client and server configurations

// Client side
const openfortClient = new Openfort({
  baseConfiguration: {
    publishableKey: "YOUR_OPENFORT_PUBLISHABLE_KEY",
  },
  shieldConfiguration: {
    shieldPublishableKey: "YOUR_SHIELD_PUBLISHABLE_KEY",
  },
});

// Server side
const openfortServer = new Openfort("YOUR_SECRET_KEY");
```

### 3.
Basic Implementation

```typescript
// Authentication
const authResponse = await openfortClient.logInWithEmailPassword({
  email: "user@example.com",
  password: "password123"
});

// Initialize Provider
// avalancheChain is your chain config object (e.g. `avalanche` from viem/chains)
const provider = await openfortClient.getProvider();
await provider.request({
  method: 'wallet_switchEthereumChain',
  params: [{ chainId: `0x${avalancheChain.id.toString(16)}` }]
});

// Create a gas sponsorship policy (server-side)
const policy = await openfortServer.policies.create({
  chainId: 43114, // Avalanche C-Chain
  name: "Gas Sponsorship Policy",
  strategy: {
    sponsorSchema: "pay_for_user"
  }
});

// Create transaction intent with sponsorship (server-side)
const transactionIntent = await openfortServer.transactionIntents.create({
  player: "PLAYER_ID",
  chainId: 43114,
  optimistic: true,
  policy: policy.id,
  interactions: [{
    contract: "CONTRACT_ADDRESS",
    functionName: "transfer",
    functionArgs: [recipientAddress, amount]
  }]
});

// Sign and send the sponsored transaction (client-side)
const response = await openfortClient.sendSignatureTransactionIntentRequest(
  transactionIntent.id,
  transactionIntent.nextAction.payload.userOperationHash
);
```

## Documentation

For more details, visit [Openfort Documentation](https://www.openfort.io/docs)

# OpenTrade (/integrations/opentrade)

---
title: OpenTrade
category: Tokenization Platforms
available: ["C-Chain"]
description: OpenTrade provides institutional-grade, asset-backed yield products for the stablecoin economy, offering exchanges and neobanks access to tokenized real-world assets like money market funds.
logo: /images/opentrade.webp
developer: OpenTrade
website: https://www.opentrade.io/
documentation: https://www.opentrade.io/
---

## Overview

OpenTrade was spun out of Circle to build asset-backed yield products for the stablecoin economy. Backed by Circle and a16z Crypto, OpenTrade gives cryptocurrency exchanges, neobanks, and fintech platforms access to yield from real-world assets such as money market funds, U.S.
Treasuries, and other short-duration fixed income products. The platform includes bank-grade legal structuring, off-chain asset management, and on-chain tokenization infrastructure. Platforms can integrate yield offerings into both front-end user experiences and back-end treasury operations while maintaining the compliance and liquidity characteristics needed for institutional adoption. ## Features - **Asset-Backed Yield Products**: Tokenized exposure to institutional-grade real-world assets generating stable returns. - **Money Market Funds**: Access to high-quality money market instruments with daily liquidity. - **U.S. Treasury Exposure**: Tokenized products backed by U.S. government securities. - **Institutional-Grade Assets**: All underlying assets meet institutional quality and compliance standards. - **Full-Service Platform**: Covers legal structuring, asset management, and tokenization. - **Bank-Grade Legal Structure**: Compliant securitization and fund structures developed with top-tier legal advisors. - **Off-Chain Asset Management**: Professional management of underlying real-world assets by regulated managers. - **On-Chain Tokenization**: Blockchain-native tokens representing ownership in underlying yield-generating assets. - **Easy Integration**: Simple APIs for exchanges and platforms to offer yield products to end users. - **Front-End and Back-End Use**: Support both customer-facing yield products and treasury management. - **Daily Liquidity**: Products designed for redemption flexibility matching crypto market expectations. - **Transparent Reporting**: Regular reporting on underlying asset composition and performance. - **Stablecoin Native**: Purpose-built for platforms operating in the stablecoin ecosystem. - **Regulatory Compliance**: Fully compliant structures meeting securities and banking regulations. ## Getting Started To integrate OpenTrade into your platform: 1. 
**Contact OpenTrade**: Reach out to OpenTrade's partnerships team to discuss your platform's needs. 2. **Define Use Case**: Determine whether you need: - Customer-facing yield products for end users - Treasury management solutions for corporate funds - Both retail and treasury applications 3. **Platform Assessment**: Work with OpenTrade to assess integration requirements: - User base and expected demand - Regulatory considerations for your jurisdiction - Technical integration approach - Front-end vs back-end implementation 4. **Legal and Compliance Setup**: OpenTrade handles legal structuring: - Establish compliant fund or securitization structure - Navigate securities regulations - Set up custody and administration arrangements 5. **Technical Integration**: Implement OpenTrade's platform: - Integrate APIs for subscription and redemption - Connect to on-chain token infrastructure - Implement necessary compliance checks - Test in sandbox environment 6. **Launch Yield Products**: Go live with OpenTrade-powered yield offerings: - Enable users to earn yield on idle stablecoins - Use treasury products to generate returns on corporate holdings - Provide transparent reporting to users ## Avalanche Support OpenTrade's tokenization infrastructure supports multiple blockchain networks. As an EVM-compatible chain, Avalanche C-Chain works for deploying OpenTrade's yield-generating tokenized products, letting platforms on Avalanche offer yield to their users with the low costs and speed of the Avalanche network. ## Product Offerings OpenTrade provides several types of asset-backed yield products: **Money Market Fund Tokens**: Tokenized exposure to institutional money market funds investing in high-quality, short-term debt instruments. **Treasury Tokens**: Digital assets backed by U.S. Treasury bills and bonds, providing government-backed yield. **Short-Duration Fixed Income**: Tokenized access to investment-grade corporate debt and other short-duration instruments. 
**Cash Management Products**: Yield solutions designed for institutional cash management with daily liquidity. All products are structured with institutional-grade underlying assets managed by regulated asset managers. ## Use Cases OpenTrade serves various platform types and use cases: **Cryptocurrency Exchanges**: Offer yield products to exchange users who want to earn returns on idle stablecoin balances without leaving the platform. **Neobanks**: Provide competitive interest rates to customers through asset-backed tokenized products. **Fintech Platforms**: Integrate yield-generating features into fintech apps serving retail or business customers. **Treasury Management**: Enable companies to generate returns on corporate stablecoin treasuries through institutional-grade products. **DeFi Protocols**: Bridge DeFi platforms to real-world asset yields through compliant tokenized products. **Payment Platforms**: Offer yield on payment platform balances, enhancing user value proposition. **Custodians**: Provide custody clients with access to yield on their held assets. ## Platform Benefits **Turnkey Solution**: OpenTrade handles legal structuring, asset management, and compliance—platforms just integrate. **Institutional Quality**: All underlying assets meet institutional standards for credit quality and liquidity. **Regulatory Compliance**: Bank-grade legal structures ensure compliance with securities and financial regulations. **Flexible Integration**: Support for both customer-facing and treasury management use cases. **Stable Yields**: Generate predictable returns from high-quality fixed income assets. **Daily Liquidity**: Redemption structures designed for crypto market expectations. **Transparent Operations**: Regular reporting and disclosure of underlying assets. **Proven Backing**: Supported by Circle and a16z Crypto with institutional credibility. 
## Legal and Compliance OpenTrade maintains the following compliance standards: - **Securities Compliance**: Registered offerings or appropriate exemptions for tokenized products - **Banking Regulations**: Structures that comply with banking and financial services rules - **Asset Management**: Underlying assets managed by regulated investment managers - **Custody**: Institutional-grade custody arrangements for underlying assets - **Reporting**: Regular financial reporting and audits - **KYC/AML**: Integrated compliance processes for investor verification - **Multi-Jurisdictional**: Structures adaptable to different regulatory environments ## Technology Infrastructure OpenTrade provides the following infrastructure: - **Subscription/Redemption APIs**: Simple integration for platforms to offer yield products - **On-Chain Tokens**: ERC-20 compatible tokens representing ownership in underlying assets - **Smart Contracts**: Audited smart contracts for token issuance and lifecycle management - **Off-Chain Asset Bridge**: Secure connection between blockchain tokens and traditional assets - **Reporting Dashboard**: Real-time portfolio composition and performance data - **Compliance Engine**: Automated compliance checks and regulatory requirements - **Multi-Chain Support**: Capability to deploy across multiple blockchain networks ## Asset Management OpenTrade partners with top-tier asset managers: - **Institutional Managers**: Regulated asset management firms with proven track records - **Quality Standards**: Strict criteria for underlying asset credit quality - **Diversification**: Portfolio construction across multiple high-quality issuers - **Liquidity Management**: Daily management to support redemptions - **Risk Management**: Conservative approach prioritizing capital preservation - **Transparent Reporting**: Regular disclosure of holdings and performance ## Circle and a16z Backing OpenTrade's backing provides significant advantages: **Circle Relationship**: Spun out 
of Circle, the issuer of USDC, providing deep stablecoin ecosystem ties. **a16z Crypto Investment**: Support from leading crypto venture firm with extensive network. **Industry Credibility**: Backing from respected names enhances trust with platforms and regulators. **Strategic Alignment**: Close integration with Circle's broader stablecoin infrastructure. **Network Access**: Connections to exchanges, platforms, and institutions through backers. ## Pricing OpenTrade offers institutional pricing: - **Management Fees**: Annual fees based on assets under management - **Performance Fees**: Potential performance-based compensation (product dependent) - **Platform Fees**: Integration and ongoing platform access fees - **Custom Structures**: Tailored pricing for large platforms or unique requirements - **Transparent Pricing**: Clear fee schedules with no hidden costs Contact OpenTrade for detailed pricing based on your platform's specific needs. ## Competitive Advantages **Purpose-Built for Stablecoins**: Designed specifically for the stablecoin economy, not retrofitted from TradFi. **Full-Service Solution**: Handles legal, compliance, asset management, and technology in one package. **Institutional Quality**: No compromise on underlying asset quality or regulatory compliance. **Proven Team**: Leadership with deep experience in both traditional finance and crypto. **Strategic Backing**: Support from Circle and a16z Crypto provides credibility and resources. **Easy Integration**: Simple APIs make offering yield products straightforward for platforms. **Flexible Deployment**: Support both retail yield products and institutional treasury management. # OpenZeppelin Audits (/integrations/openzeppelin-audit) --- title: OpenZeppelin Audits category: Audit Firms available: ["C-Chain"] description: OpenZeppelin provides expert smart contract audits, security tools, and the industry's most widely used smart contract libraries trusted by thousands of projects. 
logo: /images/openzeppelin.png developer: OpenZeppelin website: https://openzeppelin.com/ documentation: https://openzeppelin.com/security-audits --- ## Overview OpenZeppelin is widely trusted in smart contract security, known both for creating the industry-standard OpenZeppelin Contracts library used by thousands of projects and for providing security audit services. Founded by security researchers and Ethereum core contributors, OpenZeppelin has audited hundreds of high-profile projects including Ethereum Foundation, Coinbase, TheGraph, Aave, Compound, and many large DeFi protocols. OpenZeppelin's security team has both created security patterns used across the industry and audited critical blockchain infrastructure. Their combination of deep protocol knowledge, audit experience, and contributions to Ethereum security standards makes them a strong choice for projects requiring high security assurance. ## Services - **Smart Contract Audits**: Security audits by experienced experts. - **Protocol Security Reviews**: Architecture and design-level security assessment. - **Security Consulting**: Advisory services for security best practices and protocol design. - **Formal Verification**: Mathematical proofs of contract correctness for critical systems. - **Incident Response**: Emergency support and post-mortem analysis. - **Security Training**: Educational programs for development teams. - **OpenZeppelin Defender**: Automated security operations platform. - **Continuous Monitoring**: Ongoing security surveillance post-deployment. - **Upgrade Security**: Safe upgrade pattern implementation and review. - **Economic Security**: Tokenomics and game theory analysis. ## OpenZeppelin Contracts Beyond audits, OpenZeppelin maintains the industry-standard smart contract library: **OpenZeppelin Contracts**: Battle-tested Solidity library with implementations of ERC standards, access control, security utilities, and more. 
Used by thousands of projects as the secure foundation for their contracts. **Upgradeable Contracts**: Safe upgrade patterns and implementations. **Cairo Contracts**: Standard library for StarkNet smart contracts. The library represents years of security research and community contributions. ## Audit Methodology OpenZeppelin's audit process: 1. **Kickoff & Planning**: Deep dive into protocol design and threat model 2. **Automated Analysis**: Run security tools 3. **Manual Review**: Expert review by senior security researchers 4. **Architecture Analysis**: Assess system design and attack surfaces 5. **Economic Security**: Review incentive structures and game theory 6. **Integration Testing**: Test interactions with external protocols 7. **Formal Verification**: Prove critical invariants mathematically (when applicable) 8. **Report Compilation**: Detailed report with prioritized findings 9. **Review Call**: In-depth discussion of findings with team 10. **Remediation Support**: Ongoing support during fixes 11. **Re-Audit**: Thorough verification of all remediations ## OpenZeppelin Defender OpenZeppelin Defender provides ongoing security operations: **Operations**: Automate smart contract operations securely. **Monitoring**: Real-time alerts for suspicious transactions. **Incident Response**: Automated response to detected threats. **Access Control**: Secure management of contract permissions. **Upgrades**: Safely execute contract upgrades. This platform extends security beyond one-time audits into continuous protection. 
## Avalanche Expertise OpenZeppelin has experience securing protocols across all major blockchain networks including Avalanche: - Avalanche C-Chain smart contracts - Cross-chain bridge implementations - Subnet-specific security considerations - High-throughput protocol designs - Avalanche consensus and finality properties ## Access Through Areta Marketplace Avalanche projects can engage OpenZeppelin through the [Areta Audit Marketplace](https://areta.market/avalanche): - **Direct Connection**: Get matched with OpenZeppelin for your Avalanche project - **Competitive Process**: Compare proposals from multiple top-tier firms - **Transparent Pricing**: Clear costs without intermediaries - **Subsidy Eligibility**: Qualify for up to $10k in audit cashback - **Streamlined Engagement**: Faster than traditional direct outreach - **Ecosystem Support**: Marketplace built specifically for Avalanche ## Notable Audits OpenZeppelin has audited the most critical infrastructure in blockchain: - Ethereum Foundation (multiple projects) - Coinbase (various infrastructure) - Aave (multiple versions) - Compound - TheGraph - Gnosis Safe - Synthetix - MakerDAO - And hundreds of other leading projects ## Why Choose OpenZeppelin **Library Creators**: Built the security patterns the industry relies on. **Deep Expertise**: Team includes Ethereum core contributors and security researchers. **Formal Verification**: Capability to provide mathematical security proofs. **Ongoing Tools**: Defender platform provides continuous security. **Track Record**: Chosen by many of the highest-profile projects in blockchain. 
## Research and Standards OpenZeppelin actively shapes blockchain security: - EIP contributions and security standards - Security research and publications - Conference presentations and workshops - Open-source security tools and libraries - Community education and resources ## Pricing OpenZeppelin audits typically serve: - High-value protocols requiring maximum security assurance - Enterprise blockchain implementations - Infrastructure-level systems - Projects with significant funding and complexity Pricing reflects their premium positioning and unmatched expertise. Contact via Areta marketplace or directly for proposals. ## Getting Started 1. **Via Areta Marketplace** (Recommended for Avalanche): - Visit [areta.market/avalanche](https://areta.market/avalanche) - Submit your audit request - Receive proposal from OpenZeppelin - Access potential subsidies 2. **Direct Contact**: - Visit [openzeppelin.com/security-audits](https://openzeppelin.com/security-audits) - Submit audit inquiry - Schedule consultation - Receive detailed proposal ## Deliverables OpenZeppelin provides: - **Audit Report**: Full findings with detailed analysis - **Executive Summary**: High-level overview for stakeholders - **Architecture Recommendations**: System-level security improvements - **Code Review**: Line-by-line assessment and suggestions - **Formal Verification Report**: Mathematical proofs (when applicable) - **Re-Audit Report**: Verification of all fixes - **Defender Integration**: Optional ongoing monitoring setup ## Training and Resources OpenZeppelin provides extensive security resources: - OpenZeppelin Contracts documentation - Security guides and best practices - Video tutorials and workshops - Smart contract security blog - Community forums and support # OpenZeppelin (/integrations/openzeppelin) --- title: OpenZeppelin category: Security Audits available: ["C-Chain", "All Avalanche L1s"] description: "OpenZeppelin provides high-quality security audits for smart contracts with 
additional auditing hours for community contributions." logo: /images/openzeppelin.png developer: OpenZeppelin website: https://www.openzeppelin.com/security-audits documentation: https://docs.openzeppelin.com/ --- ## Overview OpenZeppelin is a security firm specializing in smart contract security, best known for their widely-used secure contract libraries and security tools. They provide audits for projects building on Avalanche, combining manual code review with automated analysis. OpenZeppelin's team includes smart contract experts experienced in identifying vulnerabilities and recommending secure development practices. ## Features - **Smart Contract Audits**: Review of smart contract code and architecture. - **Security Research**: Continuous research on smart contract vulnerabilities and security patterns. - **Architecture Review**: Assessment of system architecture and security design. - **Best Practice Guidance**: Recommendations aligned with industry security standards. - **Community Contributions**: Additional auditing hours available for community-focused projects. - **Library Expertise**: Deep knowledge of secure smart contract patterns and libraries. - **Custom Tooling**: Development and use of specialized security tools. ## Getting Started 1. **Request an Audit**: Contact OpenZeppelin through their website to initiate the process. 2. **Scope Definition**: Collaborate to define the audit scope, timeline, and objectives. 3. **Audit Process**: - Manual code review by security experts - Automated analysis using proprietary and open-source tools - Vulnerability identification and classification - Detailed remediation guidance 4. **Report Delivery**: Receive an audit report with detailed findings. 5. **Optional Follow-up**: Post-audit verification of implemented fixes. ## Use Cases - **DeFi Protocols**: Thorough validation of financial smart contracts. - **Open Source Projects**: Security reviews with potential for additional community-focused audit hours. 
- **Projects Using OpenZeppelin Libraries**: Specialized expertise in reviewing implementations that build on their libraries. - **EVM-Based Smart Contracts**: Deep expertise in EVM specifics and security implications. - **Governance Systems**: Review of DAO and governance contract implementations. # Orbital (/integrations/orbital) --- title: Orbital category: Payments available: ["C-Chain"] description: Orbital provides payment solutions including merchant acquiring, payment gateway, and transaction processing with support for traditional and digital currencies. logo: /images/orbital.png developer: Orbital website: https://getorbital.com/ documentation: https://docs.getorbital.com/ --- ## Overview Orbital is a payment solutions provider offering merchant acquiring, payment gateway services, and transaction processing for businesses of all sizes. It combines traditional payment processing with digital currency support, so merchants can accept payments across multiple channels and currencies. The platform handles payment infrastructure from checkout to settlement, letting businesses focus on their core operations. ## Features - **Merchant Acquiring**: Complete merchant services for payment acceptance. - **Payment Gateway**: Secure gateway for processing online and in-person payments. - **Multi-Currency**: Support for fiat currencies and cryptocurrencies. - **Multiple Payment Methods**: Credit cards, debit cards, ACH, crypto, and more. - **Transaction Processing**: Reliable processing with high uptime. - **Fraud Prevention**: Built-in fraud detection and prevention tools. - **PCI Compliance**: PCI DSS compliant payment infrastructure. - **API Integration**: Developer-friendly APIs for custom integration. - **Reporting**: Transaction reporting and analytics. - **Settlement**: Flexible settlement options and schedules. - **Multi-Channel**: Support for online, mobile, and in-person payments. ## Getting Started To integrate Orbital: 1. 
**Merchant Onboarding**: Apply for a merchant account with Orbital. 2. **Account Setup**: Configure your payment preferences: - Accepted payment methods - Settlement preferences - Fraud rules - Integration approach 3. **Integration**: Implement Orbital's payment gateway: - Integrate payment APIs - Add checkout flows - Configure webhooks - Test transactions 4. **Go Live**: Start processing customer payments. ## Avalanche Support Orbital's multi-currency payment infrastructure includes support for blockchain-based payments on networks like Avalanche, enabling merchants to accept AVAX and Avalanche stablecoins alongside traditional payment methods. ## Use Cases **E-Commerce**: Process online payments for retail businesses. **Omnichannel Retail**: Accept payments online, mobile, and in-store. **Subscription Services**: Process recurring payments automatically. **High-Volume Merchants**: Handle large transaction volumes reliably. **International Sales**: Accept payments from global customers. **Crypto Acceptance**: Integrate cryptocurrency payment options. # Paladin Security (/integrations/paladinsec) --- title: Paladin Security category: Audit Firms available: ["C-Chain"] description: Paladin Security provides smart contract audits and security assessments for DeFi, NFT, and infrastructure protocols. logo: /images/paladin.svg developer: Paladin Security website: https://paladinsec.co/ documentation: https://paladinsec.co/services --- ## Overview Paladin Security is a blockchain security firm specializing in smart contract audits and security assessments for Web3 protocols. They help development teams identify and fix vulnerabilities before launching on Avalanche and other networks. Their security researchers have deep knowledge of smart contract security patterns, attack vectors, and best practices. Paladin combines manual code review with automated security tools to cover potential security issues. 
Their auditors have experience across multiple blockchain ecosystems and protocol types, from DeFi to NFTs to infrastructure. ## Services - **Smart Contract Audits**: Full security audits of Solidity and other smart contract languages. - **Security Assessments**: Comprehensive evaluation of protocol architecture and design. - **Vulnerability Analysis**: Identification and classification of security issues by severity. - **Code Quality Review**: Assessment of code organization, documentation, and maintainability. - **Gas Optimization Analysis**: Recommendations for reducing transaction costs. - **Post-Audit Consultation**: Support during vulnerability remediation. - **Re-Audits**: Verification audits after fixes are implemented. - **Security Documentation**: Detailed audit reports with findings and recommendations. - **Ongoing Security Support**: Available for consultation on security matters. ## Audit Methodology Paladin follows a structured approach to security audits: 1. **Scope Definition**: Establish clear audit boundaries and objectives 2. **Documentation Review**: Understand protocol design and intended behavior 3. **Automated Testing**: Run security analysis tools on codebase 4. **Manual Code Review**: Line-by-line examination by experienced auditors 5. **Vulnerability Testing**: Test for known attack patterns and edge cases 6. **Reporting**: Compile comprehensive report with prioritized findings 7. **Team Collaboration**: Present findings and answer questions 8. **Remediation Support**: Assist with fixing identified issues 9. 
**Verification**: Re-audit to confirm all issues are resolved ## Avalanche Expertise Paladin has experience auditing protocols built on Avalanche C-Chain, understanding the platform-specific considerations including: - Avalanche smart contract patterns - EVM compatibility nuances - Cross-chain bridge security - Subnet-specific implementations - High-throughput protocol designs ## Access Through Areta Marketplace Avalanche builders can connect with Paladin Security through the [Areta Audit Marketplace](https://areta.market/avalanche): - **Quick Matching**: Submit request and receive quotes within 48 hours - **Multiple Proposals**: Compare Paladin's quote with other top auditors - **Transparent Pricing**: No hidden fees or intermediaries - **Subsidy Programs**: Potential eligibility for up to $10k in audit cashback - **Streamlined Process**: Simplified engagement compared to direct outreach - **Ecosystem-Specific**: Marketplace designed for Avalanche projects ## Audit Focus Areas **DeFi Protocols**: DEXs, lending platforms, staking, and yield optimization. **NFT Projects**: NFT minting, marketplaces, and gaming integrations. **Token Contracts**: ERC-20, ERC-721, and custom token implementations. **Governance Systems**: DAO contracts and voting mechanisms. **Bridge Protocols**: Cross-chain bridges and messaging systems. **Infrastructure**: Protocol-level infrastructure and system contracts. ## Why Choose Paladin **Thorough Analysis**: Comprehensive review combining automated and manual techniques. **Experienced Team**: Security researchers with extensive smart contract audit experience. **Clear Communication**: Detailed reports with actionable recommendations. **Reasonable Pricing**: Competitive rates for high-quality security audits. **Fast Turnaround**: Efficient processes without sacrificing thoroughness. **Avalanche Knowledge**: Experience with Avalanche ecosystem protocols. ## Getting Started To engage Paladin Security: 1. 
**Via Areta Marketplace** (Recommended for Avalanche): - Visit [areta.market/avalanche](https://areta.market/avalanche) - Submit your audit request with scope details - Receive competitive quote from Paladin - Choose based on pricing, timeline, and fit 2. **Direct Contact**: - Visit [paladinsec.co](https://paladinsec.co/) - Request audit consultation - Discuss project scope and timeline - Receive audit proposal ## Deliverables Paladin provides comprehensive audit deliverables: - **Audit Report**: Complete findings with severity classifications and recommendations - **Executive Summary**: Overview suitable for stakeholders and investors - **Code Suggestions**: Specific code improvements and optimizations - **Remediation Guidance**: Clear instructions for fixing identified issues - **Re-Audit Report**: Verification that all issues have been properly addressed - **Security Badge**: Post-audit badge for your documentation # Palmera (/integrations/palmera-infra-provider) --- title: Palmera category: Wallets and Account Abstraction available: ["All Avalanche L1s"] description: Palmera is a Safe multisig infrastructure provider powering secure smart-account deployments, white-label UIs, and backend services for Avalanche L1s. logo: /images/palmera.png developer: Palmera website: https://www.palmeradao.xyz/ documentation: https://docs.palmeradao.xyz/safe-multisig-deployment --- ## Overview [Palmera](https://www.palmeradao.xyz/) provides end-to-end Safe multisig infrastructure for Avalanche L1s, enabling teams to deploy and operate secure smart accounts using official Safe contracts. With over four years of experience running Safe infrastructure, Palmera ensures reliable, production-ready systems with ongoing maintenance, SLAs, and DevOps support. Palmera's infrastructure solution includes official Safe contract deployment, white-label Safe UI hosting, backend services, monitoring, and guaranteed uptime through SLAs. 
This lets Avalanche L1 teams focus on building their ecosystems while Palmera handles the infrastructure. ## Features Palmera provides a complete Safe multisig infrastructure solution with the following features: - **Official Safe Contract Deployment**: Deployment of Safe multisig contracts using canonical Safe releases, compatible with the official Safe ecosystem. - **White-Label Safe UI**: Customizable Safe interface hosted and maintained by Palmera for each Avalanche L1. - **Infrastructure & DevOps**: Backend services, monitoring, and guaranteed uptime via SLAs. - **Proposer & Nested Safes**: Advanced Safe features fully supported across the chain ecosystem for complex multi-signature workflows. - **Safe App Compatibility**: Access to the Safe Apps ecosystem for extended capabilities. - **Security-First Architecture**: Continuous updates and alignment with Safe's latest security requirements. ## Getting Started Avalanche L1 teams can integrate Safe multisig infrastructure by contacting Palmera. The onboarding and setup process includes: 1. **Chain data collection**: Palmera gathers necessary information about your Avalanche L1 chain. 2. **Official Safe contract deployment**: Deployment of canonical Safe contracts to your chain. 3. **Backend indexing and infrastructure setup**: Configuration of backend services and indexing infrastructure. 4. **UI deployment and customization**: Setup and customization of the white-label Safe UI. 5. **Testing, QA, and launch support**: Testing and quality assurance before production launch. Teams can also use **Palmera's Safe One-Click Deployment**, which already supports Avalanche L1s and provides instant access to official Safe contracts and infrastructure. To begin integration, visit [Palmera's website](https://www.palmeradao.xyz/). ## Documentation For more details, visit the [Palmera Documentation](https://docs.palmeradao.xyz/safe-multisig-deployment). 
# PancakeSwap (/integrations/pancakeswap) --- title: PancakeSwap category: DeFi available: ["C-Chain"] description: "PancakeSwap is a leading DEX offering token swaps, yield farming, and perpetual trading on Avalanche's C-Chain with gamified features." logo: /images/pancakeswap.jpeg developer: PancakeSwap website: https://pancakeswap.finance/ documentation: https://docs.pancakeswap.finance/ --- ## Overview PancakeSwap is a popular decentralized exchange that has expanded to Avalanche's C-Chain, offering token swaps, yield farming, and perpetual trading. Known for its user-friendly interface and gamified features, PancakeSwap pairs DeFi functionality with an engaging user experience. ## Features - **Smart Router**: Intelligent routing system for optimal trade execution and better rates. - **Perpetual Trading**: Up to 100x leverage trading with competitive fees. - **Stable Swaps**: Optimized pools for stablecoin trading with minimal slippage. - **Yield Farming**: Multiple opportunities to earn CAKE and other tokens. - **Fixed-Term Staking**: Flexible and fixed-term staking options for CAKE. - **NFT Ecosystem**: Integration of NFTs with trading and farming features. - **IFO (Initial Farm Offering)**: Launch platform for new projects. ## Getting Started To begin using PancakeSwap on Avalanche: 1. **Access Platform**: Visit [PancakeSwap](https://pancakeswap.finance/) and switch to Avalanche network. 2. **Connect Wallet**: Link your Web3 wallet and ensure you have AVAX for gas fees. 3. **Start Trading**: - Select tokens for swapping - Review exchange rate and slippage - Confirm transaction 4. **Explore Features**: Discover farming, staking, and perpetual trading options. ## Documentation For detailed guides and technical documentation, visit the [PancakeSwap Documentation](https://docs.pancakeswap.finance/). ## Use Cases PancakeSwap serves various DeFi needs: - **Token Swapping**: Efficient token exchanges with competitive rates. 
- **Perpetual Trading**: Access to leveraged trading with deep liquidity. - **Yield Generation**: Multiple farming and staking opportunities. - **Project Launches**: Platform for new token launches through IFO. - **Stablecoin Trading**: Optimized pools for stablecoin swaps. # Pangolin (/integrations/pangolin) --- title: Pangolin category: DeFi available: ["C-Chain"] description: "Pangolin is a community-driven decentralized exchange for Avalanche and Ethereum assets with fast settlement, low transaction fees, and a democratic distribution." logo: /images/pangolin.jpeg developer: Pangolin DAO website: https://pangolin.exchange/ documentation: https://docs.pangolin.exchange/ --- ## Overview Pangolin is a decentralized exchange (DEX) built on Avalanche, offering fast and cost-effective trading of Avalanche and Ethereum assets. As a community-driven platform, Pangolin emphasizes democratic governance and fair token distribution while providing essential DeFi services including swapping, yield farming, and liquidity provision. ## Features - **Fast Settlement**: Leverage Avalanche's quick finality for near-instant trade settlement. - **Low Transaction Fees**: Benefit from Avalanche's C-Chain efficiency for reduced trading costs. - **Cross-Chain Trading**: Trade both Avalanche-native and Ethereum assets. - **Yield Farming**: Earn rewards through liquidity provision and farming opportunities. - **Community Governance**: Participate in platform decisions through the PNG governance token. - **User-Friendly Interface**: Simple and intuitive trading interface for all user levels. ## Getting Started To begin using Pangolin: 1. **Connect Wallet**: Visit [Pangolin Exchange](https://pangolin.exchange/) and connect your Web3 wallet. 2. **Fund Your Wallet**: Ensure you have AVAX for transaction fees. 3. **Start Trading**: - Select tokens to trade - Review rates and slippage - Confirm transaction 4. 
**Provide Liquidity**: Optionally, add liquidity to earn trading fees and PNG rewards. ## Documentation For detailed guides and technical documentation, visit the [Pangolin Documentation](https://docs.pangolin.exchange/). ## Use Cases Pangolin serves various DeFi needs: - **Token Swaps**: Quick and efficient token exchanges on Avalanche. - **Liquidity Provision**: Earn yields by providing liquidity to trading pairs. - **Yield Farming**: Access additional rewards through farming programs. - **Cross-Chain Access**: Trade Ethereum-based assets on Avalanche. - **DAO Participation**: Engage in platform governance through PNG token. # ParaFi (/integrations/parafi) --- title: ParaFi category: Assets available: ["C-Chain"] description: "ParaFi Capital is a digital asset investment firm offering tokenized investment products and DeFi strategies." logo: /images/parafi.png developer: ParaFi Capital website: https://www.parafi.capital/ documentation: https://www.parafi.capital/ --- ## Overview ParaFi Capital is a digital asset investment firm that bridges traditional finance and decentralized finance. Through tokenized investment products and strategic DeFi positions, ParaFi provides institutional and qualified investors with access to digital asset opportunities. ## Features - **Tokenized Products**: Investment products available as tokenized assets - **DeFi Strategies**: Professional DeFi investment strategies - **Institutional Grade**: Investment processes meeting institutional standards - **Research-Driven**: Deep research into DeFi protocols and opportunities - **Portfolio Management**: Professional portfolio management services - **Regulatory Compliance**: Compliant investment structures ## Getting Started 1. **Visit ParaFi**: Explore [ParaFi Capital](https://www.parafi.capital/) 2. **Review Offerings**: Learn about available investment products 3. **Contact Team**: Reach out for investment inquiries 4. **Due Diligence**: Complete investor qualification process 5. 
**Invest**: Access ParaFi investment products ## Documentation For more information, visit the [ParaFi Capital website](https://www.parafi.capital/). ## Use Cases - **DeFi Exposure**: Gain professional DeFi exposure - **Institutional Investment**: Digital asset investment meeting institutional standards - **Tokenized Funds**: Access tokenized investment fund products - **Strategic Allocation**: Strategic allocation to digital assets # Parallel Markets (/integrations/parallel-markets) --- title: Parallel Markets category: KYC / Identity Verification available: ["C-Chain"] description: Parallel Markets provides KYC/AML solutions with a portable identity system, well suited for crypto platforms implementing txAllowlist precompiles. logo: /images/parallel-markets.png developer: Parallel Markets website: https://parallelmarkets.com/ documentation: https://developer.parallelmarkets.com/ --- ## Overview Parallel Markets offers identity verification and compliance solutions for financial institutions and blockchain platforms. Their KYC (Know Your Customer) and AML (Anti-Money Laundering) tools verify user identities with less friction than traditional onboarding. For blockchain applications implementing txAllowlist precompiles, Parallel Markets provides a portable identity solution — users verify once and can access multiple platforms while maintaining compliance. ## Features - **Portable Identity Verification**: Users can complete KYC verification once and reuse their verified credentials across platforms in the Parallel Markets ecosystem. - **Comprehensive KYC/AML Screening**: Automated verification against global sanctions lists, PEP (Politically Exposed Persons) databases, and adverse media sources. - **Business Verification (KYB)**: Verify corporate entities, map beneficial ownership structures, and perform due diligence checks for institutional users. 
- **No-Code Integration**: Implement identity verification with minimal development resources through their JavaScript SDK or API integration. - **Real-Time Monitoring**: Continuous monitoring of user profiles for changes in risk status or sanctions exposure. - **Customizable Verification Flows**: Design verification journeys specific to your platform's risk profile and regulatory requirements. - **Dashboard Analytics**: Access verification results and monitor compliance metrics through a unified dashboard. ## Getting Started To integrate Parallel Markets into your Avalanche-based application with txAllowlist precompiles, follow these steps: 1. **Account Setup**: Contact Parallel Markets to create an account and obtain API credentials. 2. **Integration Planning**: Choose between JavaScript SDK for frontend integration or direct API calls from your backend. 3. **Configure Verification Flow**: Set up your verification requirements based on your compliance needs. 4. **Implementation**: Follow the [documentation](https://developer.parallelmarkets.com/) to add the verification flow to your application. 5. **Testing**: Verify the integration in a sandbox environment before going live. 6. **Verification-to-Allowlist Mapping**: Connect user verification status to your txAllowlist management to ensure only verified users can transact. ## Integration with txAllowlist Parallel Markets is particularly well-suited for platforms implementing txAllowlist precompiles because: 1. **Streamlined Allowlisting**: When a user completes verification through Parallel Markets, your application can automatically add their wallet address to the txAllowlist. 2. **Verification Status Monitoring**: If a user's compliance status changes (e.g., they appear on a sanctions list), your application can receive webhook notifications and remove their address from the allowlist. 3. 
**Programmatic Control**: The API-driven approach allows complete automation of allowlist management based on identity verification results. 4. **Reduced Onboarding Friction**: Users who have already verified with another platform in the Parallel Markets ecosystem can be fast-tracked through your verification process. ## Documentation Integration guides and API documentation are available in the [Parallel Markets Developer Documentation](https://developer.parallelmarkets.com/). The documentation covers: - JavaScript SDK implementation - Server-side API integration - Webhook setup for status notifications - Testing and sandbox environments - Handling verification results ## Use Cases Parallel Markets' identity verification solutions are ideal for various Avalanche-based applications: - **Permissioned DeFi Protocols**: Ensure participants in lending or trading pools meet regulatory requirements. - **Tokenized Securities**: Verify investor accreditation status and identity for compliant security token offerings. - **Enterprise Blockchains**: Create permissioned networks with verified corporate and individual participants. - **Regulated Exchanges**: Build compliant DEXs that satisfy regulatory requirements while maintaining a good user experience. - **Cross-Chain Applications**: Apply consistent compliance standards across multiple blockchain networks. # Particle Network (/integrations/particle-network) --- title: Particle Network category: Chain Abstraction available: ["C-Chain"] description: Chain abstraction powered by Universal Accounts. One account, one balance, any chain. logo: /images/particle-network.png developer: Particle Network website: https://particle.network/ documentation: https://developers.particle.network/intro/introduction --- ## Overview Particle Network enables **chain abstraction** through its **Universal Accounts** infrastructure, giving users a unified account and balance. 
This allows them to interact with your Avalanche dApp with assets from any supported chain (EVM chains and Solana) without bridging. Universal Accounts also abstract away network switching and gas management, allowing users to pay gas with any token. Thanks to this, Particle Network delivers a truly chain-agnostic experience where liquidity can be deployed on any chain, independent of where the user holds funds. Within Universal Accounts, users' assets are automatically combined, allowing them to deposit tokens from multiple chains and spend them as if they were on the same chain. ## Features * **Universal Accounts**: One account and one balance across all supported chains—no need to bridge or switch networks. * **Chain Abstraction**: Handle multi-chain logic like transfers, payments, and smart contract interactions at the account level, not the chain level. * **Multi-Chain Support**: Works with nearly all EVM chains and Solana, with new integrations added frequently. * **Developer Flexibility**: Compatible with common tools like ethers, wagmi, and viem. ## Getting Started 1. **Initialize the Universal Accounts SDK** using your API key from the [Particle Dashboard](https://dashboard.particle.network/). 2. **Fetch the user's universal address and unified balance** – a single address and balance valid across all chains. 3. **Build and send UserOperations** using any supported network. 4. **Deploy your chain-agnostic app** – users can now transact across ecosystems with a single unified account. [Check out the Quickstart](https://developers.particle.network/universal-accounts/cha/web-quickstart) for more details. ## Use Cases * **DeFi Platforms**: Multi-chain deposits, cross-chain swaps, paying gas in any currency, and allowing users to combine their liquidity from different chains. * **Blockchain Games**: Web2-like onboarding via social logins, with full on-chain settlement and deposits/withdrawals to/from any chain. 
* **NFT Marketplaces**: Cross-chain spending (deposit assets from one chain, buy/mint in another) and combined ownership/listing of NFTs from multiple chains. * **Fintech & Payments**: Allowing users to pay gas in any token, simplified stablecoin-centric interfaces. * **Others**: Given Universal Accounts' multi-dApp ecosystem, users can use their UAs across a number of apps. ## Supported Chains Universal Accounts currently support most **EVM chains and Solana**, and will soon support all major Avalanche L1s. View the full list: [Network Coverage](https://developers.particle.network/universal-accounts/cha/chains) ## Community * **Universal Accounts on Avalanche**: [Retail-friendly UX: Accepting stablecoins from any chain on Avalanche dapps](https://blog.particle.network/retail-friendly-ux-accepting-stablecoins-from-any-chain-on-avalanche-dapps/) * **Slack**: [Join Particle's Slack](https://join.slack.com/t/particlenetworkhq/shared_invite/zt-3blxdzcd2-7skD8MNWUn_20eOrp9SICA) * **GitHub**: [Particle Network GitHub](https://github.com/particle-network) ## TL;DR for Developers If you want your users to: * Log in once * Use a single account and balance across all chains * Skip network switching, bridging, and gas management Then you're looking for **chain abstraction**. You're looking for **Universal Accounts**. Start building → [Universal Accounts SDK](https://developers.particle.network/universal-accounts/cha/overview) # PayAI (/integrations/payai) --- title: PayAI category: x402 available: ["C-Chain"] description: PayAI offers the x402 protocol, a payment infrastructure that enables monetization of AI agents and services on Avalanche. logo: /images/payai.svg developer: PayAI website: https://payai.network/ documentation: https://docs.payai.network/introduction --- ## Overview PayAI provides the x402 protocol, a payment infrastructure for AI agent monetization and service payments on Avalanche's C-Chain. 
The x402 protocol standardizes how AI agents and services accept payments, so developers can build monetized AI applications without building payment infrastructure from scratch. ## What is x402? The x402 protocol is a payment standard that facilitates transactions between: - **Merchants**: AI service providers and agents that offer paid services - **Clients**: Users or applications that consume AI services - **Facilitators**: Infrastructure that handles payment processing on Avalanche ## Key Features - **Avalanche Native**: Built for Avalanche's fast finality and low transaction costs - **Simple Integration**: SDKs available in TypeScript and Python for both merchants and clients - **Facilitator Network**: PayAI operates facilitators that handle payment routing on Avalanche - **AI-First Design**: Built specifically for AI agent monetization and service payments ## Getting Started ### For Merchants (AI Service Providers) If you're building an AI service or agent that needs to accept payments on Avalanche: 1. **Choose Your SDK**: PayAI provides SDKs in both TypeScript and Python 2. **Set Up Payment Endpoints**: Configure your service to accept x402 protocol payments on Avalanche 3. **Configure for Avalanche**: Use the `avalanche` network string for mainnet or `avalanche-fuji` for testnet 4. **Start Accepting Payments**: Your AI agent can now receive payments for services in AVAX ### For Clients (Service Consumers) If you're building an application that consumes paid AI services on Avalanche: 1. **Install the Client SDK**: Available in TypeScript and Python 2. **Configure for Avalanche**: Set up payments using Avalanche C-Chain 3. **Connect to Services**: Start paying for AI services through the x402 protocol with AVAX ## Use Cases ### AI Agent Monetization Enable your AI agents to charge for their services on a per-request or subscription basis. 
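The per-request model above boils down to a pay-or-402 decision per route: an unpaid request is answered with HTTP 402 and the price, and a paid one is served. A minimal sketch of that decision (illustrative only — the route names and shapes here are hypothetical, not the PayAI SDK; in a real deployment, payment verification is delegated to a facilitator):

```typescript
// Illustrative sketch of an x402-style per-request charge decision.
// Not the PayAI SDK API: `routes` and `handle` are hypothetical names.

type PriceTag = { price: string; network: "avalanche" | "avalanche-fuji" };

// Hypothetical route table mapping paid endpoints to their price tags.
const routes: Record<string, PriceTag> = {
  "GET /analyze": { price: "$0.01", network: "avalanche-fuji" },
};

// Decide how to answer one request. `hasValidPayment` stands in for the
// facilitator's verification result in a real deployment.
function handle(
  route: string,
  hasValidPayment: boolean,
): { status: number; body: string } {
  const tag = routes[route];
  if (!tag) return { status: 404, body: "unknown route" };
  if (!hasValidPayment)
    return {
      status: 402,
      body: `payment required: ${tag.price} on ${tag.network}`,
    };
  return { status: 200, body: "analysis result" };
}

console.log(handle("GET /analyze", false).status); // 402: client must pay first
console.log(handle("GET /analyze", true).status); // 200: paid request is served
```

In practice the middleware shown in the Integration Examples section performs this decision for you; the sketch only makes the control flow explicit.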
### Freelance AI Services Build marketplaces where AI agents can offer specialized services and receive payment automatically. ### Token-Gated AI Access Create premium AI services that require payment to access, with automated payment verification. ### CT (Crypto Twitter) Agent Monetization Monetize AI agents that provide crypto analysis, trading signals, or social media insights. ## Documentation For more details: - [PayAI Documentation](https://docs.payai.network/introduction) - [Supported Networks](https://docs.payai.network/x402/supported-networks) ## Integration Examples ### Express Server Example Set up an Express server that accepts x402 payments on Avalanche: **Environment Variables (.env)** ```bash FACILITATOR_URL=https://facilitator.payai.network NETWORK=avalanche-fuji # or avalanche for mainnet ADDRESS=0x... # wallet public address you want to receive payments to ``` **Server Code (index.ts)** ```typescript import { config } from "dotenv"; import express from "express"; import { paymentMiddleware, Resource } from "x402-express"; config(); const facilitatorUrl = process.env.FACILITATOR_URL as Resource; const payTo = process.env.ADDRESS as `0x${string}`; if (!facilitatorUrl || !payTo) { console.error("Missing required environment variables"); process.exit(1); } const app = express(); app.use( paymentMiddleware( payTo, { "GET /weather": { // USDC amount in dollars price: "$0.001", network: "avalanche-fuji", // or "avalanche" for mainnet }, "/premium/*": { // Define atomic amounts in any EIP-3009 token price: { amount: "100000", asset: { address: "0xabc", decimals: 18, eip712: { name: "WETH", version: "1", }, }, }, network: "avalanche-fuji", // or "avalanche" for mainnet }, }, { url: facilitatorUrl, }, ), ); app.get("/weather", (req, res) => { res.send({ report: { weather: "sunny", temperature: 70, }, }); }); app.get("/premium/content", (req, res) => { res.send({ content: "This is premium content", }); }); app.listen(4021, () => { console.log(`Server 
listening at http://localhost:4021`); }); ``` ## Avalanche C-Chain Integration When deploying x402 on Avalanche's C-Chain: - **Network String**: Use `avalanche` for mainnet or `avalanche-fuji` for testnet - **Native Token**: AVAX is used for gas fees - **Fast Finality**: Benefit from Avalanche's sub-second transaction finality - **Low Fees**: Enable micropayments for AI services with minimal transaction costs # PeckShield (/integrations/peckshield) --- title: PeckShield category: Audit Firms available: ["C-Chain"] description: PeckShield is a leading blockchain security company providing smart contract audits, security solutions, and real-time threat monitoring for protocols across multiple blockchain ecosystems. logo: /images/peckshield.jpg developer: PeckShield website: https://peckshield.com/ documentation: https://peckshield.com/services --- ## Overview PeckShield is a blockchain security company that has audited thousands of smart contracts and protocols across multiple chains. Founded by security researchers and blockchain experts, PeckShield provides smart contract audits, real-time security monitoring, and incident response. The firm is known for its large audit portfolio, quick turnaround times, and expertise across diverse protocol types. PeckShield's research team monitors the blockchain ecosystem for emerging threats and vulnerabilities, publishing regular security reports and advisories. They combine technical expertise, automated security tools, and rapid response capabilities. ## Services - **Smart Contract Audits**: Security audits for Solidity, Vyper, and other languages. - **Security Assessments**: Full protocol architecture and design review. - **Real-Time Monitoring**: Continuous security monitoring post-deployment. - **Incident Response**: Emergency support for security incidents and exploits. - **Vulnerability Research**: Proactive identification of emerging threats. - **Security Tools**: Automated analysis and detection systems.
- **Penetration Testing**: Adversarial testing of protocols and systems. - **Code Review**: Detailed examination of smart contract implementations. - **Consulting Services**: Security advisory for protocol design and architecture. - **Public Security Reports**: Regular threat intelligence and security updates. ## Audit Portfolio PeckShield has audited: - 1000+ blockchain projects - Major DeFi protocols with billions in TVL - Leading NFT marketplaces and platforms - Cross-chain bridges and infrastructure - Layer 1 and Layer 2 blockchain protocols - GameFi and metaverse projects This experience gives PeckShield broad knowledge of security patterns and vulnerabilities across protocol types. ## Audit Methodology PeckShield's audit approach: 1. **Preliminary Assessment**: Review scope, documentation, and architecture 2. **Automated Analysis**: Run multiple security analysis tools 3. **Manual Code Review**: Expert review by senior security researchers 4. **Vulnerability Detection**: Test for known attack vectors and edge cases 5. **Logic Analysis**: Verify business logic and economic mechanisms 6. **Documentation**: Compile detailed findings with severity ratings 7. **Presentation**: Discuss findings with development team 8. **Remediation Support**: Available for questions during fixes 9. **Re-Audit**: Verify fixes and issue final report ## PeckShield Security Platform Beyond audits, PeckShield offers monitoring tools: **CoinHolmes**: Blockchain transaction tracking and analysis platform. **AlertSystem**: Real-time monitoring for suspicious activities. **DeFiHub**: Analytics and security insights for DeFi protocols. **Security Scores**: Ongoing security ratings for protocols. These tools provide continuous security beyond one-time audits. 
## Avalanche Expertise PeckShield has experience auditing protocols across all major blockchain networks including Avalanche: - Avalanche C-Chain smart contracts - Subnet-specific implementations - Cross-chain bridge security - DeFi protocols on Avalanche - NFT and gaming projects - Infrastructure and tooling ## Access Through Areta Marketplace Avalanche projects can engage PeckShield through the [Areta Audit Marketplace](https://areta.market/avalanche): - **Rapid Response**: Submit request and get quotes within 48 hours - **Competitive Proposals**: Compare PeckShield with other leading firms - **Clear Pricing**: Transparent costs without hidden fees - **Subsidy Opportunities**: Eligible for up to $10k audit cashback - **Streamlined Process**: Faster than traditional direct engagement - **Avalanche-Focused**: Marketplace built for Avalanche ecosystem ## Audit Focus Areas **DeFi Protocols**: All DeFi categories including DEXs, lending, derivatives, and yield. **NFT & Gaming**: NFT marketplaces, game contracts, and metaverse infrastructure. **Bridges & Interoperability**: Cross-chain bridges and messaging protocols. **Infrastructure**: Layer 1/2 protocols, consensus mechanisms, and core infrastructure. **Token Economics**: Token contracts, vesting, and distribution mechanisms. **Governance**: DAO governance systems and voting mechanisms. ## Why Choose PeckShield **Extensive Experience**: 1000+ audits completed. **Fast Turnaround**: Known for efficient audit processes and quick delivery. **Broad Coverage**: Experience across all protocol types and blockchain networks. **Real-Time Monitoring**: Ongoing security monitoring beyond one-time audits. **Incident Response**: Rapid response capability for security emergencies. **Research Leadership**: Regular security research and threat intelligence. **Global Reach**: Serving projects worldwide with multilingual support. 
## Public Security Contributions PeckShield contributes to ecosystem security through: - Regular security blog posts and advisories - Public incident response and analysis - Vulnerability disclosures and reports - Security threat intelligence sharing - Community education on security best practices ## Pricing PeckShield offers competitive pricing: - Tiered pricing based on project size and complexity - Fast-track options for urgent audits - Volume discounts for multiple audits - Flexible engagement models Contact via Areta marketplace or directly for detailed proposals. ## Getting Started To engage PeckShield, choose one of the following paths: 1. **Via Areta Marketplace** (Recommended for Avalanche): - Visit [areta.market/avalanche](https://areta.market/avalanche) - Submit audit request with project details - Receive competitive quote from PeckShield - Access potential subsidies and streamlined process 2. **Direct Contact**: - Visit [peckshield.com](https://peckshield.com/) - Submit audit inquiry - Discuss scope and requirements - Receive audit proposal ## Deliverables PeckShield provides: - **Audit Report**: Detailed findings with severity classifications - **Executive Summary**: High-level overview for stakeholders - **Remediation Recommendations**: Specific guidance for fixing issues - **Re-Audit Report**: Verification of fixes - **Security Badge**: Post-audit security badge - **Optional Monitoring**: Ongoing security monitoring services # Pharaoh (/integrations/pharaoh) --- title: Pharaoh category: DeFi available: ["C-Chain"] description: "Pharaoh is a decentralized exchange offering advanced liquidity solutions and trading features on Avalanche." logo: /images/pharaoh.png developer: Pharaoh website: https://pharaoh.exchange documentation: https://pharaoh.exchange --- ## Overview Pharaoh is a decentralized exchange on Avalanche's C-Chain with concentrated liquidity and efficient trading mechanisms. 
It offers competitive rates and multiple earning options for liquidity providers in the Avalanche ecosystem. ## Features - **Token Swapping**: Optimized execution with competitive rates. - **Liquidity Management**: Flexible liquidity pools with configurable fee tiers. - **Yield Farming**: Earn rewards through liquidity provision and staking. - **Multi-Asset Support**: Trade tokens within the Avalanche ecosystem. - **Low-Cost Execution**: Uses Avalanche's fast transaction speeds and low fees. - **User-Friendly Interface**: Intuitive design for trading and LP management. ## Getting Started 1. **Access Platform**: Visit [Pharaoh](https://pharaoh.exchange). 2. **Connect Wallet**: Link your Web3 wallet and ensure you have AVAX for transaction fees. 3. **Start Trading**: - Choose tokens for swapping - Review pricing and slippage settings - Execute trades 4. **Explore Liquidity Options**: Provide liquidity to earn trading fees and additional rewards. ## Documentation For detailed information and guides, visit the [Pharaoh platform](https://pharaoh.exchange). ## Use Cases - **Token Trading**: Execute swaps with competitive rates and minimal slippage. - **Liquidity Provision**: Earn trading fees by providing liquidity to pools. - **Yield Strategies**: Participate in farming opportunities for additional returns. - **Portfolio Management**: Access diverse tokens for portfolio allocation. # Pocket Network (/integrations/pocket-network) --- title: Pocket Network category: RPC Endpoints available: ["C-Chain"] description: "Pocket Network is a decentralized RPC provider offering reliable, censorship-resistant access to blockchain networks." logo: /images/pocket.png developer: Pocket Network website: https://www.pokt.network/ documentation: https://docs.pokt.network/ --- ## Overview Pocket Network is a decentralized infrastructure protocol that provides censorship-resistant RPC services for blockchain networks. 
It uses a distributed network of node operators to deliver high availability, redundancy, and geographic diversity for accessing blockchain data. ## Features - **Decentralized Infrastructure**: Distributed network of thousands of independent node operators with no single point of failure. - **Multi-Chain Support**: Access to multiple blockchain networks including Ethereum, Avalanche, and many others. - **High Availability**: Redundant node infrastructure provides reliable uptime and performance. - **Censorship-Resistant**: Decentralized architecture prevents any single entity from controlling or censoring access. - **Cost-Effective**: Competitive pricing through a decentralized marketplace of node operators. - **Load Balancing**: Automatic routing to the best available nodes for optimal performance. - **Native Token Economics**: POKT token incentivizes node operators to provide quality service. ## Documentation For more details, visit the [Pocket Network Documentation](https://docs.pokt.network/). ## Use Cases - **DApp Development**: Reliable RPC access for decentralized applications. - **Infrastructure Redundancy**: Backup RPC provider for improved reliability. - **Decentralized Access**: Censorship-resistant blockchain data access. - **Multi-Chain Applications**: Single provider for accessing multiple blockchain networks. # Portable (/integrations/portable) --- title: Portable category: KYC / Identity Verification available: ["C-Chain", "All Avalanche L1s"] description: "Portable provides portable identity verification enabling users to carry verified credentials across Web3 applications." logo: /images/portable.png developer: Portable website: https://www.portable.io/ --- ## Overview > **Note:** The Portable.io website now appears to be a data integration (ELT) tool, not a KYC/identity verification platform. The Web3 identity verification product described below may no longer be available. Verify current offerings before integrating. 
Portable was described as an identity verification platform that lets users carry their verified identity credentials across multiple Web3 applications. By making KYC portable, it aimed to reduce verification friction for users while helping applications meet compliance requirements efficiently. ## Features - **Portable Credentials**: Carry verified identity across applications - **One-Time Verification**: Complete KYC once, use everywhere - **Compliance Framework**: Meet regulatory requirements efficiently - **User-Controlled Data**: Users maintain control of their credentials - **Easy Integration**: Simple APIs and SDKs for developers - **Privacy Options**: Selective disclosure of identity attributes ## Getting Started 1. **Sign Up**: Create account at [Portable](https://www.portable.io/) 2. **API Integration**: Access Portable APIs for verification 3. **Configure Flow**: Set up verification requirements 4. **User Experience**: Implement user-facing verification flow 5. **Verify Users**: Begin accepting portable credentials ## Use Cases - **Streamlined Onboarding**: Reduce KYC friction for new users - **Cross-Platform Identity**: Accept verified credentials from other platforms - **Regulatory Compliance**: Meet KYC requirements efficiently - **Repeat Users**: Smooth experience for verified returning users # Privy (/integrations/privy) --- title: Privy category: Wallets and Account Abstraction available: ["C-Chain", "All Avalanche L1s"] description: "Spin up embedded wallets and beautiful authentication flows for all users." logo: /images/privy.png developer: Privy website: https://privy.io/ documentation: https://docs.privy.io/ --- ## What is Privy? Privy is a tool for onboarding users to web3, regardless of whether they already have a wallet, across mobile and desktop. 
## Terminology | Term | Meaning | | -------- | ------- | | Social Login | A form of single sign-on (SSO) that lets users log in using credentials from Google, X (formerly Twitter), Apple, or other social platforms instead of creating a new username and password. | | Magic Link | A passwordless authentication method where users log in by clicking a unique, time-sensitive link sent to their email address, removing the need for a password. | | Next.js | A full-stack React framework. It’s versatile and lets you create React apps of any size, from a mostly static blog to a complex dynamic application. | ## Prerequisites and Recommended Knowledge - Privy account - Basic understanding of the EVM and transaction data - Basic understanding of frontend development ## Preparation ### Create Application and Retrieve API Keys - Log in to [Privy Dashboard](https://dashboard.privy.io/) - Click on `New App` on the Applications page. - Click on the Settings button on the left pane. Under the Basic tab, you'll find the API keys. ### Start a New React Project with Next.js To create a new Next.js project, run in your terminal: ```bash npx create-next-app@latest ``` ### Install Dependencies After the React project is created, we need to install a few dependencies: the Privy React SDK (`@privy-io/react-auth`), `viem` to define the custom chain and create an HTTP client, and `ethers` for formatting utilities. ```bash npm install @privy-io/react-auth@latest npm i viem@latest npm i ethers@latest ``` ### Define Your Avalanche L1 In this guide, we are using the `Echo L1` as an example Avalanche L1. However, you can use any Avalanche L1 that has a public RPC URL. If the L1 has an explorer page, it is easier to see what is happening, but one is not required.
```typescript import { defineChain } from 'viem'; export const echo = defineChain({ id: 173750, name: 'Echo L1', network: 'echo', nativeCurrency: { decimals: 18, name: 'Ech', symbol: 'ECH', }, rpcUrls: { default: { http: ['https://subnets.avax.network/echo/testnet/rpc'] }, }, blockExplorers: { default: {name: 'Explorer', url: 'https://subnets-test.avax.network/echo'}, }, }); ``` ### Import Privy into Your App After the React project is created, navigate to `page.tsx`. Inside the `Home` function, wrap your content with `PrivyProvider`, passing the app ID you retrieved from the dashboard. ```typescript title="page.tsx" import { PrivyProvider } from '@privy-io/react-auth'; export default function Home() { return ( <PrivyProvider appId="your-privy-app-id"> { content } </PrivyProvider> ); } ``` ## Walkthrough So far in the guide, we have installed the necessary dependencies, created our Privy application, and obtained our API key. Now, we are ready to use Privy. Here is our walkthrough: - We will create a simple login flow. - We will create a welcome page for users who have logged in, showing their embedded wallet address and balance. - We will trigger a test transfer of the `ECH` token via Privy. ### Login Flow To onboard your users into your application, you just need to know the following hooks: ```typescript import { usePrivy, useLogin } from '@privy-io/react-auth'; const { ready, authenticated } = usePrivy(); const { login } = useLogin(); ``` It's really that simple! - `ready`: Returns `true` once the `PrivyProvider` is ready to be used. - `authenticated`: Returns `true` if the user is authenticated, `false` otherwise. - `login`: Opens the Privy login modal and prompts the user to log in. ```typescript <button onClick={login}>Login via Privy</button> ``` We've only added a button to trigger the `login` function, and Privy handles the rest. When we click on the `Login via Privy` button, this modal appears. You can choose any login method to log in. We’ve already defined these options in the `PrivyProvider` when we wrapped our content. ### Welcome Page After checking whether the user is authenticated, we display the following information to the user who has logged in. As you can see, Privy has generated an embedded wallet for our user.
We’ve displayed the following properties on the screen: Privy ID, embedded wallet address, and embedded wallet balance. ```typescript <p>Privy ID: {user.id}</p> <p>Wallet: {user.wallet?.address}</p> <p>Balance: {balance} ECH</p> ``` We’ve used Privy's user object to retrieve the user ID and wallet address. To fetch the user’s balance, we need to create an HTTP client for our Avalanche L1, which we’ve already defined earlier. ```typescript import { createPublicClient, http } from 'viem'; import { ethers } from 'ethers'; const client = createPublicClient({ chain: echo, transport: http(), }); // get native asset balance client.getBalance({ address: address, }).then(balance => { setBalance(ethers.formatEther(balance)); }); ``` We can allow users to log out using the following hook: ```typescript const { logout } = useLogout(); ``` ### Fund the New Wallet Using the Faucet We’ve funded the new wallet that was generated for our user with some ECH tokens from the [Echo AWM Testnet Faucet](https://test.core.app/tools/testnet-faucet/?avalanche-l1=echo&token=echo). After the ECH tokens were sent, our balance updated accordingly. ### Send Test Transfer via Privy Now that we have a balance, we can send some ECH tokens to another recipient via Privy to test Privy's `sendTransaction` flow. Privy provides the following hook for this: ```typescript const { sendTransaction } = useSendTransaction(); ``` We’ve already added the following button to trigger the `transfer` function, which will, in turn, trigger the `sendTransaction` function provided by Privy. ```typescript <button onClick={transfer}>Send Test Transfer via Privy</button> ``` We have built a simple transaction that sends some ECH tokens to another recipient. ```typescript const transfer = () => { if (!authenticated || address === undefined) { return; } const receiver = "0x..."; // receiver address sendTransaction({ value: ethers.parseUnits("0.01", "ether"), to: receiver, // destination address from: address, // logged in user's embedded wallet address }).catch(() => { // handle err }); } ``` When we click on the `Send Test Transfer via Privy` button, this modal appears.
Users can see the following details related to the transaction. # Proof of Play vRNG (/integrations/proof-of-play) --- title: Proof of Play vRNG category: VRF available: ["C-Chain"] description: "A hyper-optimized, secure, onchain verified random number generator (vRNG). Battle-tested at scale, faster, more reliable, and more affordable than alternatives." logo: /images/proofofplay.jpg developer: Proof of Play website: https://proofofplay.com/ documentation: https://docs.proofofplay.com/ --- ## Overview Proof of Play vRNG (Verified Random Number Generator) is an optimized, secure, onchain random number generation solution tested at scale with over 485 million transactions. It is faster, more reliable, and more affordable than traditional VRF solutions. Developers can use it to build decentralized applications and games that need verifiable randomness -- from blockchain games and NFTs to random assignment systems and consensus mechanisms. Proof of Play offers engineering support and straightforward integration. # Pyth Network (/integrations/pyth) --- title: Pyth Network category: Oracles available: ["C-Chain"] description: Pyth Network is a decentralized oracle solution that provides high-fidelity data for DeFi applications. logo: /images/pyth.png developer: Pyth Network website: https://pyth.network/ documentation: https://docs.pyth.network/ --- ## Overview Pyth Network is a decentralized oracle that provides high-fidelity price data for DeFi applications. It aggregates data from a network of first-party providers and validators to deliver accurate real-world data for smart contract execution. ## Features - **High-Fidelity Data**: Aggregates and validates data from diverse sources for accuracy and reliability. - **Decentralized Network**: Operates as a decentralized oracle network, reducing reliance on any single data source. - **Real-Time Data**: Provides up-to-date feeds for various assets, including cryptocurrencies, commodities, and equities. 
- **DeFi Integration**: Integrates with DeFi applications, providing data for trading, lending, and other financial operations. ## Getting Started 1. **Visit the Pyth Network Website**: Explore the [Pyth Network website](https://pyth.network/) to understand its offerings. 2. **Access the Documentation**: Refer to the [Pyth Network Documentation](https://docs.pyth.network/) for guides on integration, data feeds, and API usage. 3. **Integrate Data Feeds**: Use the provided APIs to integrate Pyth Network’s data feeds into your smart contracts and DeFi applications. 4. **Test and Deploy**: Test the integration in a development environment before deploying to production. ## Documentation For more details, visit the [Pyth Network Documentation](https://docs.pyth.network/). ## Use Cases - **Decentralized Finance (DeFi)**: Price feeds for trading, lending, and other financial operations in DeFi platforms. - **Risk Management**: Real-time data for risk assessment and decision-making. - **Market Analytics**: Accurate market data for analysis and forecasting in financial applications. # QuickNode (/integrations/quicknode) --- title: QuickNode category: RPC Endpoints available: ["C-Chain"] description: QuickNode is a blockchain developer platform that provides a suite of APIs and tools for building and scaling blockchain applications. logo: /images/quicknode.png developer: QuickNode website: https://www.quicknode.com/ documentation: https://www.quicknode.com/docs --- ## Overview QuickNode is a blockchain developer platform that provides APIs and tools for building and scaling blockchain applications. It gives developers reliable, scalable infrastructure for interacting with blockchain networks. ## Features - **Scalable APIs**: Access a range of APIs designed to handle high-throughput and low-latency requirements for blockchain interactions. 
- **Multi-Chain Support**: Connect with various blockchain networks, including Ethereum, Binance Smart Chain, and others, through a unified platform. - **Developer Tools**: Utilize advanced tools and services for building, monitoring, and optimizing blockchain applications. - **Reliable Infrastructure**: High-performance infrastructure with consistent access to blockchain networks. - **Analytics and Monitoring**: Track and analyze blockchain interactions with built-in monitoring and analytics tools. ## Getting Started 1. **Visit the QuickNode Website**: Explore the [QuickNode website](https://www.quicknode.com/) to understand its features. 2. **Access the Documentation**: Refer to the [QuickNode Documentation](https://www.quicknode.com/docs) for setup and API guides. 3. **Create an Account**: Sign up for a QuickNode account to get API keys. 4. **Integrate APIs**: Use the APIs for data retrieval, transaction submission, and more. 5. **Monitor and Optimize**: Use QuickNode’s monitoring and analytics tools to track performance. ## Documentation For more details, visit the [QuickNode Documentation](https://www.quicknode.com/docs). ## Use Cases - **Decentralized Applications (dApps)**: Build and scale dApps with APIs and tools for blockchain interactions. - **Blockchain Analytics**: Gain insights into blockchain data and application performance. - **Smart Contract Development**: Develop, test, and deploy smart contracts with reliable infrastructure. - **Transaction Management**: Manage and monitor blockchain transactions with low latency. # Rain (/integrations/rain) --- title: Rain category: Payments available: ["C-Chain"] description: Rain is a global card issuing platform enabling partners to launch fully integrated credit and debit card programs with direct wallet spending, supporting fintechs globally as a Visa principal member. 
logo: /images/rain.svg developer: Rain website: https://www.rain.xyz/ documentation: https://www.rain.xyz/resources --- ## Overview Rain is a global card issuing and payment infrastructure platform that lets businesses launch credit and debit card programs powered by digital assets and stablecoins. As a Visa principal member, Rain sponsors card programs end-to-end without relying on bank partners, giving fintech platforms, exchanges, wallets, and financial applications full control over their card programs. Rain's cards work with Google Pay and Apple Pay at over 150 million Visa-accepting merchants worldwide. They also support non-custodial spending, allowing users to spend directly from self-custodial wallets. Use cases include cross-border payments, remittances, cryptocurrency exchanges, neobanks, and Web3 applications. ## Features - **Visa Principal Membership**: Direct Visa network sponsorship enabling end-to-end card program ownership without bank dependencies. - **Global Card Issuance**: Issue virtual and physical debit and credit cards accepted at 150+ million merchants worldwide. - **Digital Wallet Integration**: Full support for Apple Pay and Google Pay for mobile-first payment experiences. - **Non-Custodial Spending**: Unique capability for users to spend directly from self-custodial wallets without custody intermediaries. - **Multi-Currency Support**: Support for stablecoins, cryptocurrencies, and traditional fiat currencies. - **Stablecoin-Native**: Purpose-built for stablecoin-powered transactions and digital dollar spending. - **Cross-Border Payments**: Enable instant, low-cost international payments and remittances. - **White-Label Solutions**: Fully customizable card programs branded for your business. - **Modular Platform**: Choose specific components (cards, accounts, money-in, money-out) based on needs. - **Real-Time Settlement**: Instant settlement of transactions on blockchain networks. 
- **Compliance Infrastructure**: Built-in KYC/AML and regulatory compliance tools. - **API-First Design**: APIs for integrating into any platform. - **Multi-Blockchain Support**: Compatible with multiple blockchain networks including Avalanche. - **Instant Card Issuance**: Generate virtual cards instantly for immediate use. ## Platform Capabilities ### Cards Issue globally-accepted payment cards linked to users' digital asset balances: - **Virtual Cards**: Instant issuance of virtual cards for online and mobile payments - **Physical Cards**: Branded physical cards mailed to users globally - **Credit Cards**: Full credit card programs with spending limits and billing cycles - **Debit Cards**: Direct debit from user balances with real-time settlement - **Prepaid Cards**: Preloaded cards for controlled spending - **Multi-Currency**: Cards supporting multiple currencies and automatic conversion ### Money-In (On-Ramp) Enable users to deposit funds and convert to digital assets: - **Fiat Deposits**: Bank transfers, card deposits, and local payment methods - **Crypto Purchases**: Buy cryptocurrencies and stablecoins with fiat - **Instant Conversion**: Real-time conversion from local currencies to stablecoins - **Global Payment Methods**: Support for payment methods in 150+ countries - **Low Fees**: Competitive pricing for on-ramp transactions ### Accounts (Embedded Wallets) Integrate secure digital asset accounts into applications: - **Embedded Wallets**: Secure wallets integrated directly into partner applications - **Custodial Options**: Managed custody for simplified user experience - **Self-Custodial Support**: Non-custodial wallets for users maintaining control - **Multi-Asset**: Support for multiple cryptocurrencies and stablecoins - **Account Management**: Tools for managing balances, transactions, and settings ### Money-Out (Off-Ramp) Facilitate withdrawals and cross-border transfers: - **Fiat Withdrawals**: Convert digital assets to local currency and withdraw to 
banks - **Cross-Border Payments**: Send payments internationally with low fees - **Stablecoin Transfers**: Send stablecoins globally instantly - **Local Currency Conversion**: Convert to any supported currency for withdrawal - **Multiple Destinations**: Support for bank transfers, mobile money, and other methods ## Getting Started To integrate Rain: 1. **Partnership Discussion**: Contact Rain's team to discuss your card program requirements and use case. 2. **Program Design**: Work with Rain to design your card program: - Define card types (credit, debit, virtual, physical) - Choose currencies and digital assets to support - Determine custodial vs non-custodial approach - Select which Rain modules to integrate (cards, accounts, money-in, money-out) - Design branded card appearance and user experience 3. **Compliance Setup**: Establish necessary compliance infrastructure: - Complete Rain's partner onboarding and due diligence - Set up KYC/AML processes for your users - Navigate regulatory requirements for your markets - Establish spending limits and controls 4. **Technical Integration**: Implement Rain's platform: - Integrate Rain's APIs for card issuance and management - Connect wallet infrastructure (custodial or non-custodial) - Implement user interfaces for card management - Set up webhooks for transaction notifications - Test in sandbox environment 5. **Card Program Launch**: Go live with your Rain-powered card program: - Issue cards to your users - Enable spending at merchants worldwide - Process transactions in real-time - Provide customer support with Rain's assistance ## Avalanche Support Rain's multi-blockchain infrastructure supports Avalanche C-Chain, enabling card programs powered by AVAX and Avalanche-based stablecoins. Notably, Rain partnered with Wyoming and Avalanche to launch the Frontier Stable Token (FRNT), Wyoming's state-issued, dollar-pegged stablecoin, enabling real-world spending through Rain-issued Visa cards. 
This demonstrates Rain's commitment to bringing Avalanche assets into everyday commerce at 150+ million merchants globally. ## Use Cases Rain supports diverse payment use cases: **Cryptocurrency Exchanges**: Enable exchange users to spend their crypto holdings at any merchant accepting Visa. **Neobanks**: Launch full-featured banking services with card programs without traditional banking infrastructure. **Web3 Wallets**: Add card spending directly from self-custodial wallets for seamless crypto-to-fiat usage. **Cross-Border Remittances**: Facilitate low-cost international money transfers using stablecoins and local currency conversion. **Fintech Applications**: Embed payment cards into any fintech app or platform. **DeFi Platforms**: Bridge DeFi yields and holdings to real-world spending. **Gaming Platforms**: Enable gamers to spend in-game earnings in the real world. **Payroll Solutions**: Issue payroll cards for instant wage access and spending. ## Non-Custodial Innovation Rain's non-custodial card capability: - **User Control**: Users maintain custody of assets in self-custodial wallets - **Direct Spending**: No need to deposit to custodial accounts before spending - **Real-Time Settlement**: Transactions settle directly from user wallets - **Security**: Users never surrender custody to third parties - **Web3 Native**: True Web3 payment experience maintaining decentralization principles This is particularly useful for Web3-native applications where user custody matters. 
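Rain's Getting Started steps above include setting up webhooks for transaction notifications. Rain's actual header names and signing scheme are not documented here, so everything in this sketch (the HMAC-SHA256 scheme, the `whsec_` secret format) is an assumed, illustrative pattern commonly used to authenticate webhook payloads; consult Rain's partner documentation for the real format.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Illustrative webhook verification: recompute an HMAC-SHA256 over the raw
// request body and compare it to the signature header in constant time.
// NOTE: the signing scheme and secret format are assumptions, not Rain's
// documented API.
function verifyWebhookSignature(rawBody: string, signatureHex: string, secret: string): boolean {
  const expected = createHmac("sha256", secret).update(rawBody).digest();
  const received = Buffer.from(signatureHex, "hex");
  // timingSafeEqual throws on length mismatch, so check lengths first
  return received.length === expected.length && timingSafeEqual(received, expected);
}

// Simulate a sender attaching a signature, then verify it
const secret = "whsec_example_secret"; // hypothetical shared secret
const body = JSON.stringify({ event: "card.transaction", id: "txn_123" });
const signature = createHmac("sha256", secret).update(body).digest("hex");

console.log(verifyWebhookSignature(body, signature, secret));       // true
console.log(verifyWebhookSignature(body + "x", signature, secret)); // false
```

Whatever the real scheme turns out to be, the two details that matter are verifying against the raw request body (not a re-serialized copy) and comparing signatures in constant time.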
## Visa Principal Membership Rain's Visa principal membership provides these advantages: - **Independent Operation**: No reliance on sponsoring banks or third-party BINs - **Full Control**: Complete ownership and control of card programs - **Faster Implementation**: Direct relationship with Visa accelerates deployment - **Flexibility**: Greater flexibility in program design and features - **Stability**: Not dependent on bank partner relationships or changes - **Global Reach**: Access to Visa's global acceptance network ## Compliance and Regulation Rain maintains compliance infrastructure including: - **Money Transmitter Licenses**: Licensed in multiple jurisdictions for money transmission - **Visa Compliance**: Full compliance with Visa network rules and regulations - **KYC/AML**: Integrated identity verification and anti-money laundering screening - **Transaction Monitoring**: Real-time monitoring for fraudulent or suspicious activity - **Data Security**: PCI DSS compliant infrastructure for secure payment data handling - **Regulatory Expertise**: Experienced team navigating global financial regulations ## Partner Benefits **Bank-Free Card Programs**: Launch card programs without needing bank sponsors. **Global Reach**: Immediate access to 150+ million merchants worldwide. **Fast Time-to-Market**: Modular platform enables quick deployment of card programs. **Brand Control**: Fully white-labeled solutions matching your brand experience. **Flexible Custody**: Support both custodial and non-custodial models. **Multi-Asset**: Accept payments in stablecoins, crypto, and fiat. **Developer APIs**: Developer-friendly integration with documentation. **Revenue Sharing**: Earn interchange revenue from card transactions. 
## Technology Infrastructure Rain's technology stack includes: - **Card Issuance APIs**: Instant virtual card generation and physical card ordering - **Transaction Processing**: Real-time authorization and settlement - **Wallet Integration**: Connect custodial and non-custodial wallets - **Multi-Chain Support**: Compatibility with multiple blockchain networks - **Webhooks**: Real-time notifications for all card events - **Admin Dashboard**: Portal for managing cards, users, and transactions - **Fraud Prevention**: Machine learning-based fraud detection - **Reporting**: Transaction reporting and analytics ## Pricing Rain offers partnership-based pricing: - **Setup Fees**: Initial integration and card program setup costs - **Card Issuance Fees**: Fees for virtual and physical card issuance - **Transaction Fees**: Percentage or fixed fees on card transactions - **Interchange Revenue**: Partners share in Visa interchange fees - **Monthly Fees**: Platform access and maintenance fees - **Custom Pricing**: Tailored pricing for high-volume partners Contact Rain for detailed pricing based on your specific program requirements. ## Global Coverage Rain supports card programs in 150+ countries with: - **Multi-Currency**: Support for major fiat currencies and stablecoins - **Local Payment Methods**: On-ramp via local payment methods globally - **Regional Compliance**: Compliance with regional regulations - **Global Support**: 24/7 customer support for international users - **Multi-Language**: Platform localization for key markets # Ramp Network (/integrations/ramp-network) --- title: Ramp Network category: Fiat On-Ramp available: ["C-Chain", "All Avalanche L1s"] description: Ramp Network is a global provider of fiat-to-crypto on-ramp and off-ramp infrastructure, enabling users to purchase and sell cryptocurrencies directly within applications. 
logo: /images/ramp-network.png developer: Ramp Network website: https://ramp.network/ documentation: https://docs.ramp.network/ --- ## Overview Ramp Network is a fiat-to-crypto infrastructure provider that lets users purchase and sell cryptocurrencies directly within blockchain applications. Ramp handles regulatory compliance and offers on-ramp and off-ramp widgets that integrate with minimal developer effort. It supports multiple payment methods across 150+ countries and provides a customizable widget that matches your application's design. ## Features - **Global Coverage**: Available in 150+ countries with localized payment methods and support for multiple currencies. - **Diverse Payment Options**: Bank transfers, credit/debit cards, Apple Pay, Google Pay, and region-specific payment methods. - **Full Off-Ramp Solution**: Complete selling experience allowing users to convert crypto back to fiat with direct bank deposits. - **Customizable Widget**: White-labeled solution that can be styled to match your application's branding. - **Low Integration Effort**: Simple SDK integration requiring just a few lines of code. - **Multi-Platform Support**: Web SDK, React SDK, and mobile SDKs for iOS and Android. - **Built-in Compliance**: KYC and AML processes that meet regulatory requirements across jurisdictions. - **Competitive Fees**: Transparent fee structure with some of the lowest rates in the industry. - **Developer Tools**: Webhooks, callbacks, and detailed documentation for integration. - **Advanced Purchase Flows**: Support for automatic purchases and recurring purchases. ## Getting Started To integrate Ramp: 1. **Create an Account**: Register on the [Ramp Developer Dashboard](https://app.ramp.com/sign-in) to get your API key. 2. **Install the SDK**: For web applications, add the Ramp SDK to your project: ```bash npm install @ramp-network/ramp-instant-sdk ``` 3. 
**Initialize the On-Ramp Widget**: Implement the widget in your application: ```javascript import { RampInstantSDK } from '@ramp-network/ramp-instant-sdk'; const ramp = new RampInstantSDK({ hostAppName: 'Your App Name', hostLogoUrl: 'https://yourdomain.com/logo.png', swapAsset: 'AVAX', swapAmount: '100000000000000000', // Optional, in wei userAddress: '0x...', // User's wallet address hostApiKey: 'YOUR_API_KEY', variant: 'auto', // or 'desktop', 'mobile', 'embedded' webhookStatusUrl: 'https://your-webhook-endpoint.com', // Optional defaultFlow: 'ONRAMP', // or 'OFFRAMP' }); ramp.show(); ``` 4. **Implement Off-Ramp** (if needed): ```javascript import { RampInstantSDK } from '@ramp-network/ramp-instant-sdk'; const ramp = new RampInstantSDK({ hostAppName: 'Your App Name', hostLogoUrl: 'https://yourdomain.com/logo.png', swapAsset: 'AVAX', userAddress: '0x...', // User's wallet address hostApiKey: 'YOUR_API_KEY', variant: 'auto', defaultFlow: 'OFFRAMP', }); ramp.show(); ``` 5. **Handle Events**: Listen for purchase events: ```javascript ramp.on('PURCHASE_CREATED', (event) => { console.log(`User started purchase: ${event.purchase.id}`); }); ramp.on('PURCHASE_SUCCESSFUL', (event) => { console.log(`Purchase successful: ${event.purchase.id}`); }); ramp.on('WIDGET_CLOSE', () => { console.log('Widget closed'); }); ``` 6. **Test in Sandbox**: Use the sandbox environment to test your integration before going live: ```javascript const ramp = new RampInstantSDK({ // ...your configuration url: 'https://ri-widget-staging.firebaseapp.com/', // Use staging environment }); ``` ## Documentation For more details, visit the [Ramp Documentation](https://docs.ramp.network/). ## Use Cases **Crypto Wallets**: Let users purchase crypto within your wallet application and cash out to bank accounts. **DeFi Platforms**: Provide entry and exit points for users to acquire and liquidate tokens for DeFi protocols. **NFT Marketplaces**: Allow direct purchase of NFTs with fiat. 
**Blockchain Games**: Let players purchase in-game tokens or assets without needing a crypto exchange. **Enterprise Solutions**: Compliant on/off-ramp solution for corporate blockchain applications. ## Pricing Ramp operates on a transaction fee model: - **Standard Fee Range**: 0.49% to 2.9% per transaction, depending on payment method and region - **Volume Discounts**: Available for applications with high transaction volumes - **Revenue Sharing**: Partnership opportunities for qualified integrators - **No Monthly Fees**: Pay only for processed transactions For specific pricing details and potential custom arrangements, contact Ramp's sales team. # RD Technology (/integrations/rdtechnology) --- title: RD Technology category: Payments available: ["C-Chain"] description: RD Technology provides blockchain payment solutions and infrastructure for businesses to integrate cryptocurrency and digital asset payment capabilities. logo: /images/rdtechnology.jpeg developer: RD Technology website: https://rd.group/ documentation: https://rd.group/products/wallet/enterprise-solution/ --- ## Overview RD Technology is a blockchain technology company specializing in payment solutions and infrastructure for businesses to integrate cryptocurrency and digital asset payments. It provides tools and APIs for accepting, processing, and managing digital currency payments while maintaining compliance and security standards. The platform serves businesses looking to add cryptocurrency and stablecoin payment options. ## Features - **Payment Infrastructure**: Tools for cryptocurrency payment processing. - **Multi-Currency Support**: Accept various cryptocurrencies and stablecoins. - **Blockchain Integration**: Connect with multiple blockchain networks including Avalanche. - **Payment Processing**: Handle payment authorization and settlement efficiently. - **API Platform**: Developer-friendly APIs for custom integrations. - **Security**: Enterprise-grade security for payment processing. 
- **Compliance Tools**: Built-in compliance and regulatory features. - **Transaction Management**: Tools for tracking and managing payments. - **Settlement Options**: Flexible settlement in crypto or fiat. - **Reporting**: Transaction reporting and analytics. ## Getting Started To integrate RD Technology, follow these steps: 1. **Contact RD Technology**: Reach out to discuss your payment infrastructure needs. 2. **Platform Assessment**: Determine integration requirements: - Payment volumes and currencies - Technical integration approach - Compliance requirements - Settlement preferences 3. **Implementation**: Integrate RD Technology's solutions: - Connect to payment APIs - Implement payment flows - Configure settlement options - Test in development environment 4. **Launch**: Go live with cryptocurrency payment capabilities. ## Avalanche Support RD Technology's infrastructure supports Avalanche C-Chain, enabling businesses to accept and process payments using AVAX and Avalanche-based assets with fast transactions and low fees. ## Use Cases **Business Payments**: Accept cryptocurrency payments for products and services. **Payment Processing**: Process digital currency payments at scale. **Cross-Border**: Facilitate international payments using blockchain technology. **E-Commerce**: Integrate crypto payments into online stores. **B2B Transactions**: Enable business-to-business cryptocurrency payments. **Financial Services**: Build fintech applications with crypto payment capabilities. # Re (/integrations/re) --- title: Re category: Assets available: ["C-Chain"] description: "Re provides tokenized insurance and risk products, bringing traditional insurance mechanisms to blockchain infrastructure." logo: /images/re.png developer: Re website: https://www.re.xyz/ documentation: https://docs.re.xyz/ --- ## Overview Re tokenizes insurance and risk products on blockchain. 
By bringing traditional insurance mechanisms on-chain, Re enables transparent and accessible risk transfer products, allowing users to participate in insurance markets through tokenized instruments. ## Features - **Tokenized Insurance**: Insurance products in tokenized form - **Risk Markets**: Access to diversified risk transfer markets - **Transparency**: On-chain transparency for insurance products - **Yield Generation**: Earn yield through insurance capacity provision - **Smart Contracts**: Automated claims and policy management - **Decentralized Risk**: Participate in decentralized risk pools ## Getting Started 1. **Visit Re**: Access the [Re platform](https://www.re.xyz/) 2. **Explore Products**: Learn about available risk products 3. **Connect Wallet**: Connect your Web3 wallet 4. **Participate**: Provide capacity or purchase protection 5. **Manage**: Manage positions through the platform ## Documentation For technical documentation, visit [Re Documentation](https://docs.re.xyz/). ## Use Cases - **Insurance Capacity**: Provide capacity to insurance pools - **Risk Protection**: Purchase on-chain risk protection - **Yield Earning**: Earn yield from insurance premiums - **Risk Diversification**: Diversify exposure across risk products # Reactive Network (/integrations/reactive-network) --- title: Reactive Network category: Crosschain Solutions available: ["C-Chain"] description: "Reactive Network is a fully on-chain, EVM-compatible, event-driven if-this-then-that network for decentralized automation of on-chain workflows by enabling reactivity between contracts deployed either on the same or different chains."
logo: /images/Symbol_ColorBlack_H32.png developer: PARSIQ website: https://reactive.network/ documentation: https://dev.reactive.network --- ## Overview Reactive Network is a fully on-chain, EVM-compatible, event-driven if-this-then-that network for decentralized automation of on-chain workflows by enabling reactivity between contracts deployed either on the same or different chains. The core purpose of Reactive is to enable more autonomous, self-sufficient, and user-friendly dApps. ## Features - **Decentralized automation**: Web3’s first IFTTT infrastructure. Automate multi-chain workflows with event-driven logic – execute actions without compromising decentralization. - **Cost-efficient Execution**: Offload heavy computations to Reactive’s parallelized EVM. Complex logic at minimal cost. - **Inversion of Control**: RSCs invert the traditional execution model by allowing the contract itself to decide when to execute based on predefined events, eliminating the need for external triggers like bots or users. - **DeFAI-Ready Infrastructure**: Create autonomous DeFi agents with on-chain data triggers and off-chain AI intelligence that trade, optimize, and adapt. ## Getting Started To get started: 1. **Review Documentation**: Study the [Reactive Network Documentation](https://dev.reactive.network). 2. **Check out Reactive's [origins and destinations](https://dev.reactive.network/origins-and-destinations)**, along with their Callback Proxy addresses. 3. **Connect to [Reactive Mainnet or Kopli Testnet](https://dev.reactive.network/reactive-mainnet)**. 4. **Explore what can be built**: - Hands-on [demonstrations](https://dev.reactive.network/demos) for the Reactive Network. - Clone the [GitHub](https://github.com/Reactive-Network/reactive-smart-contract-demos) project and start building. 5. **Learn more** about Reactive Smart Contracts in Reactive's [educational course](https://dev.reactive.network/education/introduction).
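The "Inversion of Control" feature above is the core idea: the contract subscribes to events and decides for itself when to act. Reactive Smart Contracts are written in Solidity against the interfaces in the Reactive docs; as a purely conceptual sketch (the types, truncated topic hash, and threshold below are invented for illustration), the if-this-then-that loop reduces to a predicate over observed log events that may emit a cross-chain callback:

```typescript
// Conceptual model of a reactive rule: "if this" (an event matching a
// predicate on an origin chain) "then that" (a callback on a destination
// chain). This is NOT Reactive's Solidity interface; it only illustrates
// the control flow.
type LogEvent = { chainId: number; emitter: string; topic0: string; value: bigint };
type Callback = { destinationChainId: number; payload: string };

const AVALANCHE_C_CHAIN = 43114;
const TRANSFER_TOPIC = "0xddf252ad"; // truncated Transfer topic, illustrative

function react(event: LogEvent): Callback | null {
  // "If this": a large transfer observed on the C-Chain...
  if (event.chainId === AVALANCHE_C_CHAIN && event.topic0 === TRANSFER_TOPIC && event.value > 1_000_000n) {
    // ..."then that": request a callback on another chain, no bot required
    return { destinationChainId: 1, payload: `rebalance(${event.value})` };
  }
  return null;
}

const hit = react({ chainId: 43114, emitter: "0xToken", topic0: TRANSFER_TOPIC, value: 2_000_000n });
const miss = react({ chainId: 43114, emitter: "0xToken", topic0: TRANSFER_TOPIC, value: 10n });
console.log(hit !== null, miss === null); // true true
```

The point of the inversion is visible in the shape of `react`: there is no external caller deciding when to run it; the network evaluates the predicate against every matching event.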
## Documentation For more details, visit the [Reactive Network Documentation](https://dev.reactive.network). ## Use Cases - **Decentralized Trading Automation**: Automate gasless cross-chain swaps, enabling a frictionless P2P crypto trading experience. - **Dynamic NFT Royalty System**: Enable real-time royalty adjustments, automate cross-chain transactions, and ensure transparent, fair revenue distribution for creators, buyers, and marketplaces. - **Flash Profit Extractor**: Automate token swaps, dynamic pricing, and real-time price tracking, making DeFi arbitrage accessible and efficient. - **Cross-Chain Lending**: Borrow assets across multiple blockchains. # Reap Protocol (/integrations/reap-protocol) --- title: Reap Protocol category: x402 available: ["C-Chain"] description: Reap Protocol gives AI Agents the power to search real products, verify inventory, and purchase autonomously. Reap bridges Web2 shops with Web3 settlement so agents can operate in the real economy — safely, verifiably, and on-chain. logo: /images/reap-protocol.png developer: April Labs website: https://protocol.reap.deals/ documentation: https://docs.reap.deals/ --- ## Overview Reap Protocol gives AI Agents the power to search real products, verify inventory, and purchase autonomously. Reap bridges Web2 shops with Web3 settlement so agents can operate in the real economy — safely, verifiably, and on-chain. Reap provides both Python and TypeScript SDKs. ## Features - **Product Search → On-Chain Inventory**: Agents can index any Web2 product with a single call. - **x402 Negotiation Engine**: Turns HTTP into a payment-capable negotiation loop. - **Agentic Cart**: Identity, stock, and settlement in a single atomic mission. - **Self-Custody by Design**: The agent controls its wallet; Reap handles the complexity. ## Getting Started ### Installation ```bash npm install @reap-protocol/sdk ethers axios ``` This example demonstrates the full Agentic Commerce Loop: Identity -> Discovery -> Settlement.
### 1. Setup If using TypeScript, ensure your tsconfig.json targets ES2020 or higher. ### 2. The Agent Code (agent.ts) ```typescript import { ReapClient } from "@reap-protocol/sdk"; // Load your private key securely const PRIVATE_KEY = process.env.MY_WALLET_KEY || ""; async function main() { // 1. Initialize the Agent // (Points to official middleware by default) const client = new ReapClient(PRIVATE_KEY); console.log("🤖 Agent Online"); try { // 2. Identity (One-time setup) // Registers your wallet as an authorized Agent on the Protocol console.log("🆔 Checking Identity..."); await client.registerIdentity(); // 3. JIT Stocking (Discovery) // Searches Web2 (Reap Deals), registers items on-chain, and returns inventory console.log("📦 Stocking Shelf with 'Gaming Laptop'..."); const result = await client.stockShelf("Gaming Laptop"); const inventory = result.items; console.log(` 🔍 Found ${inventory.length} items on-chain.`); if (inventory.length > 0) { // 4. Decision Logic // Example: Pick the first available item const target = inventory[0]; console.log(` 🎯 Selected: ${target.name} ($${target.price})`); console.log(` ID: ${target.id}`); // 5. Agentic Cart (Settlement) // Automatically approves USDC and executes the atomic purchase console.log("💸 Buying Item..."); const receipt = await client.buyProduct(target.id); if (receipt) { console.log(`🎉 SUCCESS! Transaction Hash: ${receipt.hash}`); } } else { console.log("❌ No items found."); } } catch (e: any) { console.error("❌ Error:", e.message); } } main(); ``` ### 3. Run It ```bash # Install execution tools if you haven't already npm install --save-dev ts-node typescript @types/node # Run npx ts-node agent.ts ``` ## Configuration You can override defaults for custom RPCs or self-hosted middleware. 
```typescript const client = new ReapClient( "YOUR_PRIVATE_KEY", "https://avax-fuji.g.alchemy.com/v2/YOUR_KEY", // Custom RPC "https://avax.api.reap.deals" // Middleware URL ); ``` ## Documentation For integration guides and API documentation, visit the [Reap Protocol Documentation](https://docs.reap.deals/). ## Use Cases - **Agentic Commerce**: Enable agents to conduct onchain commerce - **x402 payments**: Pay for x402 resources and goods - **On-chain shopping**: Create on-chain shopping flows # RedStone Oracles (/integrations/redstone-oracles) --- title: RedStone Oracles category: Data Feeds available: ["C-Chain"] description: RedStone is a modular, gas-optimized oracle providing diverse and reliable price feeds for dApps. logo: /images/RedStone.png developer: RedStone Oracles website: https://redstone.finance/ documentation: https://docs.redstone.finance/docs/introduction --- ## Overview RedStone Oracles is a modular oracle providing gas-optimized, reliable price feeds across over 50 blockchain networks and rollups. It specializes in yield-bearing collateral for lending markets, such as liquid staking tokens (LSTs) and liquid restaking tokens (LRTs). ## Features - **Modular Oracle System**: RedStone’s architecture integrates data feeds across L1s, L2s, app chains, and non-EVM chains. - **Gas-Optimized Data Feeds**: Minimizes gas costs for putting data on the blockchain. - **Diverse Data Sources**: Data aggregated from multiple sources, including directly from liquidity pools, for accurate pricing. ## Getting Started 1. **Visit the RedStone Website**: Explore the [RedStone Oracle website](https://redstone.finance/) to see available services and integrations. 2. **Access the Documentation**: Review the [RedStone Documentation](https://docs.redstone.finance/docs/introduction) for guidance on selecting the right oracle model. 3. **Integrate**: Implement RedStone’s data feeds in your smart contracts. Reach out to the RedStone team for integration support. 
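Once a feed is wired up (step 3 of Getting Started above), application code typically receives prices as fixed-point integers scaled by the feed's decimals; RedStone's push feeds follow this Chainlink-style convention, commonly with 8 decimals, though the exact decimals of any given feed should be confirmed in the RedStone docs. A minimal, illustrative scaling helper:

```typescript
// Oracle answers arrive as scaled integers, e.g. a raw AVAX/USD answer of
// 2512000000 with 8 decimals means $25.12. The 8-decimal figure is the
// common convention, not a guarantee; confirm it per feed in the RedStone
// documentation.
function scalePrice(rawAnswer: bigint, decimals: number): number {
  return Number(rawAnswer) / 10 ** decimals;
}

console.log(scalePrice(2_512_000_000n, 8)); // 25.12
```

Converting to `number` is fine for display purposes; keep the `bigint` value for any arithmetic that feeds back into an on-chain transaction, where floating-point rounding is unacceptable.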
## Documentation For in-depth integration instructions, oracle model selection, and data feed details, visit the [RedStone Documentation](https://docs.redstone.finance/docs/introduction). ## Use Cases RedStone Oracles is suitable for applications that need reliable and cost-efficient data feeds: - **Traditional DeFi Platforms**: Easily integrate RedStone's Push model into existing DeFi protocols. - **Liquid Staking and Restaking Platforms**: Manage LSTs and LRTs with precise and timely data not supported by other oracles. - **Bitcoin in DeFi**: Unlock Bitcoin’s potential in DeFi. # Republic (/integrations/republic) --- title: Republic category: Tokenization Platforms available: ["C-Chain"] description: Republic is a global investment and advisory platform that merges private markets with Web3 technologies, enabling tokenized investments in startups, real estate, and digital securities. logo: /images/republic.jpg developer: Republic website: https://republic.com/ documentation: https://republic.com/tokenization --- ## Overview Republic is an investment platform that opens access to private market investments through tokenization and blockchain technology. It enables individuals and institutions to invest in startups, tokenized assets, and digital securities while offering end-to-end services for businesses across fundraising, tokenization, and on-chain infrastructure. Republic's tokenization platform creates Mirror Tokens and other digital representations of real-world assets, providing compliant entry points to investment opportunities that were historically reserved for institutional investors and venture funds. ## Features - **Mirror Tokens**: Digital representations of debt securities tied to late-stage private companies, enabling fractional ownership of traditionally illiquid assets. - **Multi-Asset Tokenization**: Support for tokenizing startups, real estate, video games, crypto projects, and other alternative assets. 
- **End-to-End Platform**: Complete infrastructure for fundraising, token issuance, investor management, and secondary trading. - **Regulatory Compliance**: Fully compliant digital securities structured to meet SEC and international regulatory requirements. - **Global Investment Access**: Platform available to investors worldwide with localized regulatory compliance. - **Fractional Ownership**: Enable smaller investors to participate in high-value assets through tokenized shares. - **Secondary Markets**: Facilitate liquidity through regulated secondary trading of tokenized securities. - **Investor Dashboard**: Portfolio management tools for tracking tokenized investments. - **Smart Contract Infrastructure**: Blockchain-based token issuance and management with transparent ownership records. - **Institutional Services**: Advisory and infrastructure services for businesses looking to tokenize assets or raise capital. - **Multi-Chain Support**: Tokenization infrastructure compatible with multiple blockchain networks. - **KYC/AML Integration**: Built-in compliance processes for investor verification and regulatory adherence. ## Getting Started To work with Republic for tokenization: 1. **Explore the Platform**: Visit [republic.com](https://republic.com/) to understand available investment opportunities and tokenization services. 2. **For Investors**: - Create an account on the Republic platform - Complete KYC verification process - Browse available tokenized investment opportunities - Invest in startups, real estate, or other tokenized assets - Track your portfolio through the investor dashboard 3. **For Issuers/Businesses**: - Contact Republic's advisory team to discuss tokenization needs - Determine which Republic service fits your requirements (fundraising, tokenization, infrastructure) - Work with Republic to structure compliant digital securities - Launch your tokenized offering on the platform - Access Republic's global investor network 4. 
**Integration Opportunities**: Businesses can explore partnerships with Republic for: - Tokenizing existing assets or securities - Launching security token offerings (STOs) - Building on Republic's blockchain infrastructure - Accessing advisory services for Web3 transition ## Avalanche Support Republic's tokenization infrastructure supports multiple blockchain networks. While specific chain support may vary by offering, Republic's platform is designed to work with EVM-compatible networks including Avalanche C-Chain, enabling efficient and low-cost tokenization of assets on Avalanche's high-performance infrastructure. ## Use Cases Republic's tokenization platform covers several investment categories: **Startup Equity**: Invest in early-stage companies through tokenized equity offerings, enabling fractional ownership and potential liquidity. **Real Estate**: Access tokenized real estate investments with lower capital requirements and improved liquidity compared to traditional real estate. **Gaming Projects**: Invest in video game development through tokenized revenue shares or equity stakes. **Crypto Projects**: Participate in Web3 and crypto project funding through compliant token offerings. **Private Company Access**: Mirror Tokens provide exposure to late-stage private companies before they go public. **Alternative Assets**: Tokenize and invest in art, collectibles, intellectual property, and other non-traditional assets. ## Republic's Investment Products **Republic Capital**: Venture capital arm providing institutional-grade investment opportunities. **Republic Crypto**: Focused on Web3 investments and blockchain technology companies. **Republic Real Estate**: Platform for investing in tokenized real estate projects. **Republic Gaming**: Dedicated to gaming industry investments and tokenization. **Republic Note**: Convertible securities that provide optionality across Republic's diverse portfolio. 
## Tokenization Services Republic offers tokenization services for businesses: - **Asset Structuring**: Legal and financial structuring of tokenized securities - **Regulatory Compliance**: Navigation of SEC regulations and international securities laws - **Token Issuance**: Technical infrastructure for minting and distributing security tokens - **Investor Relations**: Platform for managing investor communications and distributions - **Secondary Trading**: Access to regulated secondary markets for liquidity - **Smart Contract Development**: Custom smart contract development for tokenized securities - **Advisory Services**: Strategic guidance on tokenization strategy and implementation ## Regulatory Framework Republic operates under strict regulatory compliance: - **SEC Registered**: Registered with the U.S. Securities and Exchange Commission - **FINRA Member**: Member of the Financial Industry Regulatory Authority - **Regulation CF**: Utilizes Regulation Crowdfunding for certain offerings - **Regulation D**: Offers securities under Regulation D exemptions - **Regulation A+**: Conducts mini-IPOs through Regulation A+ offerings - **International Compliance**: Adheres to securities regulations in multiple jurisdictions ## Track Record Republic has an established presence in the tokenization and private markets space: - Facilitated hundreds of millions in investments across thousands of deals - Built a community of over 2 million users globally - Partnered with leading startups and established companies for tokenization - Expanded services across multiple asset classes and investment categories - Developed infrastructure trusted by both issuers and investors ## Why Choose Republic **Broad Access**: Republic makes private investment opportunities accessible to retail investors through tokenization and fractional ownership. **End-to-End Platform**: Covers everything from token issuance to secondary trading.
**Regulatory Expertise**: Deep experience navigating complex securities regulations across multiple jurisdictions. **Global Reach**: Access to a worldwide network of investors and issuers. **Multi-Asset Focus**: Platform supports diverse asset classes from startups to real estate to gaming. **Web3 Integration**: Native understanding of blockchain technology and crypto markets. **Proven Track Record**: Years of experience facilitating billions in private market investments. # Request Network (/integrations/request) --- title: Request Network category: Payments available: ["C-Chain"] description: Request Network is the all-in-one finance platform for Web3 CFOs to manage stablecoin, crypto, and fiat operations covering accounts payable, receivable, payroll, and accounting in a compliant, audit-ready hub. logo: /images/request.png developer: Request Network website: https://www.request.finance/ documentation: https://docs.request.network/ --- ## Overview Request Network is a finance platform for Web3 CFOs and finance teams to manage accounts payable, receivable, payroll, and accounting in one compliant, audit-ready platform. It connects stablecoin, cryptocurrency, and fiat operations with tools like QuickBooks, Xero, and Gnosis Safe. With over $1 billion in payments processed, Request Network is used by Web3 companies, DAOs, and crypto-native organizations. It combines blockchain-based payments with the compliance and reporting requirements of traditional finance. ## Features - **All-in-One Finance Platform**: Complete financial operations in a single platform—AP, AR, payroll, accounting. - **Multi-Currency Support**: Handle stablecoins, cryptocurrencies, and fiat currencies seamlessly. - **Accounts Payable**: Manage vendor invoices, approvals, and batch payments efficiently. - **Accounts Receivable**: Create invoices, track payments, and manage collections. - **Payroll Management**: Process payroll in crypto or fiat with full compliance. 
- **Accounting Integration**: Sync with QuickBooks, Xero, and other accounting software automatically. - **Gnosis Safe Integration**: Direct integration with Gnosis Safe for secure multi-sig payments. - **$1B+ Processed**: Proven platform with over $1 billion in payment volume. - **Audit-Ready**: Comprehensive audit trails and reporting for compliance. - **Batch Payments**: Pay multiple vendors or employees in a single transaction. - **Invoice Creation**: Professional crypto-native invoices with multiple payment options. - **Payment Requests**: Send payment requests with automatic tracking and reminders. - **Tax Compliance**: Tools and reports for tax compliance across jurisdictions. - **Multi-Chain Support**: Support for Avalanche and other major blockchain networks. - **Real-Time Tracking**: Monitor all financial transactions in real-time dashboard. - **Approval Workflows**: Customizable approval processes for expenditures. - **Expense Management**: Track and categorize business expenses efficiently. ## Getting Started To implement Request Network: 1. **Create an Account**: Sign up on [request.finance](https://www.request.finance/) to access the platform. 2. **Connect Your Wallet**: Link your organization's wallet (Gnosis Safe, MetaMask, etc.) for crypto operations. 3. **Set Up Integrations**: Connect existing tools: - **Accounting Software**: QuickBooks, Xero, or other platforms - **Multi-Sig Wallets**: Gnosis Safe for secure payments - **Bank Accounts**: For fiat operations (if applicable) - **Payroll Systems**: For streamlined payroll processing 4. **Configure Your Organization**: - Add team members and set permissions - Define approval workflows - Set up expense categories and budgets - Configure payment methods (stablecoins, crypto, fiat) 5. 
**Start Managing Finance Operations**: - **Accounts Payable**: Import or create vendor invoices, route for approval, batch pay - **Accounts Receivable**: Create invoices for customers, track payments - **Payroll**: Set up employee payment schedules and process payroll - **Reporting**: Generate financial reports for stakeholders ## Accounts Payable Request streamlines the entire AP process: **Invoice Management**: Import invoices from vendors or create them internally, with automatic data extraction. **Approval Workflows**: Route invoices through customizable approval chains before payment. **Batch Payments**: Pay multiple vendors in a single blockchain transaction, saving gas fees. **Vendor Portal**: Vendors can submit invoices directly through Request. **Payment Scheduling**: Schedule payments for specific dates or conditions. **Reconciliation**: Automatic matching of invoices to payments for easy reconciliation. ## Accounts Receivable Manage customer payments efficiently: **Professional Invoices**: Create crypto-native invoices with your branding. **Multiple Payment Options**: Accept payments in various stablecoins, cryptocurrencies, or fiat. **Payment Tracking**: Real-time tracking of invoice status and payment confirmations. **Automated Reminders**: Send automatic payment reminders to customers. **Collections Management**: Tools for managing overdue invoices. **Customer Portal**: Customers can view and pay invoices through a dedicated portal. ## Payroll Management Handle payroll for crypto-native teams: **Global Payroll**: Pay employees and contractors globally in their preferred currency. **Compliance**: Maintain compliance with employment and tax regulations. **Multiple Currencies**: Pay in stablecoins, crypto, or fiat as needed. **Payment Scheduling**: Automate regular payroll cycles. **Tax Reporting**: Generate necessary tax documents for employees and compliance. **Contractor Management**: Separate workflows for employees vs. contractors. 
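The batch-payment step described above — collect invoices that have cleared the approval workflow, then settle each currency in one on-chain transaction — can be sketched as plain bookkeeping logic. This is an illustrative sketch only, not Request Network's API: the `Invoice` shape and the `buildBatches` helper are assumptions made for the example.

```typescript
// Illustrative sketch of batching approved invoices for settlement.
// The Invoice type and buildBatches helper are hypothetical, not Request's API.
interface Invoice {
  vendor: string;     // payee address or identifier
  amount: bigint;     // smallest token unit, e.g. 6-decimal USDC
  currency: string;   // e.g. "USDC"
  approved: boolean;  // has cleared the approval workflow
}

interface BatchPayment {
  currency: string;
  total: bigint;
  payouts: { vendor: string; amount: bigint }[];
}

// Group approved invoices by currency so each batch can settle in a
// single multi-transfer transaction per token, saving per-transfer gas.
function buildBatches(invoices: Invoice[]): BatchPayment[] {
  const byCurrency = new Map<string, BatchPayment>();
  for (const inv of invoices.filter((i) => i.approved)) {
    let batch = byCurrency.get(inv.currency);
    if (!batch) {
      batch = { currency: inv.currency, total: 0n, payouts: [] };
      byCurrency.set(inv.currency, batch);
    }
    batch.total += inv.amount;
    batch.payouts.push({ vendor: inv.vendor, amount: inv.amount });
  }
  return [...byCurrency.values()];
}
```

In the real platform, each batch would map onto one multi-transfer transaction (for example, from a Gnosis Safe) rather than one transfer per invoice, which is where the gas savings come from.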
## Accounting Integration Request connects with existing accounting tools: **QuickBooks Integration**: Two-way sync with QuickBooks for automatic reconciliation. **Xero Integration**: Full integration with Xero accounting platform. **Automatic Categorization**: Transactions automatically categorized for accounting. **Real-Time Sync**: Financial data syncs in real-time between Request and accounting software. **Journal Entries**: Automatic generation of journal entries for crypto transactions. **Tax Compliance**: Proper accounting treatment for crypto assets and transactions. ## Avalanche Support Request Network operates on multiple blockchain networks including Avalanche C-Chain, enabling: **Low-Cost Transactions**: Benefit from Avalanche's low fees for frequent payments. **Fast Finality**: Near-instant payment confirmation for better cash flow management. **Stablecoin Support**: Use USDC and other stablecoins on Avalanche for payments. **Scalability**: Handle high volumes of transactions without network congestion. **Enterprise-Ready**: Avalanche's institutional focus aligns with Request's finance use cases. ## Use Cases Request is used by diverse Web3 organizations: **Web3 Startups**: Manage all financial operations as the company scales. **DAOs**: Enable decentralized organizations to manage treasury and payments compliantly. **Crypto Companies**: Finance operations for exchanges, protocols, and infrastructure providers. **Remote Teams**: Pay global teams in crypto or fiat with full compliance. **Service Providers**: Invoice clients and manage receivables for crypto-native services. **Investment Funds**: Manage fund operations and investor distributions. **NFT Projects**: Handle creator payments and royalty distributions. **DeFi Protocols**: Manage protocol expenses and contributor compensation. ## Compliance and Audit Request provides these compliance features: **Complete Audit Trail**: Every transaction recorded with full details and blockchain proof. 
**Financial Reports**: Generate reports for auditors, investors, and regulators. **Tax Documentation**: Tax reports for various jurisdictions. **SOC 2 Compliance**: Platform maintains SOC 2 Type II certification. **Data Export**: Export financial data in standard formats for analysis. **Regulatory Reporting**: Tools for meeting reporting requirements. ## Platform Benefits **Time Savings**: Automate financial workflows that previously required hours of manual work. **Error Reduction**: Eliminate manual data entry errors through automation and integration. **Cost Efficiency**: Reduce accounting costs and payment processing fees. **Better Control**: Real-time visibility and control over all financial operations. **Audit-Ready**: Maintain compliance and audit-ready records automatically. **Scalability**: Platform grows with your organization from startup to enterprise. **Professional Image**: Present professional, compliant financial operations to stakeholders. ## Technology Infrastructure - **Smart Contract Layer**: Open-source smart contracts for payment requests and invoicing - **Multi-Chain Support**: Operate across Ethereum, Avalanche, Polygon, and other networks - **API Access**: APIs for custom integrations - **Wallet Integration**: Connect with Gnosis Safe, MetaMask, Ledger, and other wallets - **Real-Time Dashboard**: Monitor all financial activity in real-time - **Mobile Access**: Manage finances on the go with a mobile-friendly interface - **Security**: Enterprise-grade security with encrypted data and secure key management ## Gnosis Safe Integration Deep integration with Gnosis Safe multi-sig wallets: **Direct Payments**: Pay directly from Gnosis Safe within the Request platform. **Multi-Sig Approvals**: Leverage Gnosis Safe's approval process for payments. **Batch Transactions**: Combine multiple payments into a single Gnosis Safe transaction. **Treasury Management**: Manage organizational treasury through Gnosis + Request. 
**Security**: Maintain multi-sig security while streamlining finance operations. ## Pricing Request Network offers transparent pricing: - **Free Tier**: Basic features for small teams and startups - **Professional**: Enhanced features for growing companies - **Enterprise**: Custom solutions for large organizations - **Transaction Fees**: Small fees on processed payments - **Integration**: Accounting software integrations included Visit [request.finance/pricing](https://www.request.finance/pricing) for current pricing details. ## Track Record Request Network's track record: - **$1+ Billion**: Over $1 billion in payments processed - **Thousands of Organizations**: Used by companies, DAOs, and protocols globally - **Years of Operation**: Established platform with multi-year track record - **Open Source**: Core protocol is open-source and transparent - **Community Support**: Active community and ecosystem of integrations ## Ecosystem Request connects the broader Web3 finance ecosystem: **Accounting Software**: QuickBooks, Xero, and other platforms. **Wallets**: Gnosis Safe, MetaMask, Ledger, and hardware wallets. **Payment Rails**: Support for multiple blockchains and payment methods. **Service Providers**: Integration with tax advisors, accountants, and auditors. **DeFi Protocols**: Connections to lending, yield, and treasury management protocols. ## Customer Support Request provides support through: - **Documentation**: Guides and API documentation - **Customer Success**: Dedicated support for paid customers - **Community**: Active Discord and forums for peer support - **Training**: Onboarding and training for finance teams - **Professional Services**: Custom integrations and implementations available # Revolut (/integrations/revolut) --- title: Revolut category: Fiat On-Ramp available: ["C-Chain"] description: "Revolut is a global fintech platform enabling users to buy, hold, and manage cryptocurrencies alongside traditional banking services." 
logo: /images/revolut.png developer: Revolut website: https://www.revolut.com/ documentation: https://developer.revolut.com/ --- ## Overview Revolut is a global fintech platform that provides banking services alongside cryptocurrency trading and management. With millions of users worldwide, Revolut lets users buy cryptocurrencies directly from its app, offering a familiar gateway for traditional finance users entering the crypto ecosystem. ## Features - **Integrated Banking**: Crypto alongside traditional banking services - **Easy Purchase**: Buy crypto directly with bank accounts - **Multiple Cryptocurrencies**: Support for major cryptocurrencies - **Trusted Platform**: Millions of verified users worldwide - **Mobile-First**: Full-featured mobile app experience - **Instant Conversion**: Quick fiat-to-crypto conversions ## Getting Started For users: 1. **Download Revolut**: Get the Revolut app 2. **Create Account**: Complete verification process 3. **Add Funds**: Fund your Revolut account 4. **Buy Crypto**: Purchase cryptocurrencies including those on Avalanche 5. **Manage**: Track and manage your crypto holdings For developers, explore [Revolut Developer Portal](https://developer.revolut.com/). ## Documentation For API documentation, visit [Revolut Developer Docs](https://developer.revolut.com/). ## Use Cases - **Consumer On-Ramp**: Easy crypto access for mainstream users - **Portfolio Management**: Manage crypto alongside traditional assets - **Payments**: Use crypto for payments through Revolut - **Banking Integration**: Bridge traditional and crypto finance # Rise (/integrations/rise) --- title: Rise category: Payments available: ["C-Chain"] description: Rise is a global payroll and compliance platform enabling businesses to pay teams in 190+ countries with flexible options in local currencies, stablecoins, and cryptocurrencies with automated compliance. 
logo: /images/rise.webp developer: Rise website: https://riseworks.io/ documentation: https://docs.riseworks.io/ --- ## Overview Rise is a global payroll and compliance platform designed to help businesses of all sizes -- from startups to enterprises and Web3 organizations -- pay their distributed teams across 190+ countries. By offering flexible payment options including local currencies, stablecoins, and cryptocurrencies, Rise eliminates the complexity of international payroll while ensuring full compliance with local regulations through its Agent of Record (AOR) and Employer of Record (EOR) models. The platform automates critical payroll functions including onboarding, tax reporting, benefits administration, and compliance management, enabling companies to focus on building their business while Rise handles the intricacies of global employment. With secure, compliant, and audit-ready infrastructure, Rise has become the payroll solution of choice for companies navigating the complexities of a global, distributed workforce. ## Features - **Global Payroll Coverage**: Pay employees and contractors in 190+ countries worldwide. - **Multiple Payment Options**: Support for local currencies, stablecoins (USDC, USDT), and cryptocurrencies. - **Automated Onboarding**: Streamlined employee and contractor onboarding with digital workflows. - **Tax Compliance**: Automatic tax calculations, withholdings, and reporting for all jurisdictions. - **Agent of Record (AOR)**: Compliance services for contractor management globally. - **Employer of Record (EOR)**: Full employment services in countries without local entities. - **Benefits Administration**: Manage health insurance, retirement plans, and other benefits. - **Automated Compliance**: Stay compliant with local labor laws and regulations automatically. - **Audit-Ready Records**: Comprehensive documentation and reporting for audits. - **Multi-Currency Support**: Pay in local currencies or digital assets based on preference. 
- **Blockchain Payments**: Native support for stablecoin and crypto payments on networks like Avalanche. - **Self-Service Portal**: Employee portal for managing pay, benefits, and documents. - **API Integration**: Connect Rise with existing HR and accounting systems. - **Real-Time Exchange Rates**: Competitive FX rates for international payments. - **Payment Scheduling**: Automated recurring payroll with customizable schedules. - **Compliance Monitoring**: Continuous monitoring of changing regulations. ## Getting Started To implement Rise for your organization: 1. **Initial Consultation**: Contact Rise to discuss your global payroll needs: - Number of employees/contractors - Countries where team members are located - Current payroll challenges - Payment preferences (fiat, stablecoins, crypto) - Compliance requirements 2. **Platform Setup**: Configure your Rise account: - Set up company profile and entities - Define payroll schedules and policies - Configure payment methods and currencies - Set up benefits and compensation structures - Integrate with existing HR/accounting systems 3. **Employee Onboarding**: Add team members to the platform: - Digital onboarding workflows for employees and contractors - Collect necessary documentation and tax forms - Set up payment preferences (bank accounts, crypto wallets) - Enroll in benefits programs - Complete compliance requirements by jurisdiction 4. **Payroll Processing**: Run your first payroll: - Review employee hours/salaries - Approve payroll run - Automatic tax calculations and withholdings - Process payments in chosen currencies - Generate pay stubs and reports 5. 
**Ongoing Management**: Maintain compliant payroll operations: - Monitor compliance across jurisdictions - Handle tax filings automatically - Manage benefits and HR administration - Access real-time reporting and analytics - Scale to new countries as needed ## Agent of Record (AOR) Rise's AOR service simplifies contractor management: **Global Contractor Compliance**: Ensure proper classification and treatment of contractors worldwide. **Payment Processing**: Handle contractor payments in their preferred currency or digital asset. **Tax Documentation**: Manage tax forms and reporting requirements (1099s, etc.). **Contract Management**: Store and manage contractor agreements compliantly. **Invoice Processing**: Automate contractor invoice collection and payment. **Compliance Monitoring**: Stay updated on contractor regulations by country. Rise's AOR service eliminates the risk of misclassification while enabling fast, flexible contractor payments globally. ## Employer of Record (EOR) Rise's EOR service enables hiring without local entities: **Legal Employment**: Rise becomes the legal employer in countries where you don't have entities. **Full HR Services**: Complete HR administration including payroll, benefits, and compliance. **Local Expertise**: In-country experts ensure compliance with local labor laws. **Rapid Deployment**: Hire in new countries in days, not months. **Benefits Administration**: Provide competitive local benefits packages. **Offboarding Support**: Handle employee departures compliantly. This allows companies to hire the best talent globally without the cost and complexity of establishing local entities. ## Payment Flexibility Rise's multi-modal payment approach provides unprecedented flexibility: **Local Currency Payments**: Pay employees in their local currency via local banking rails. **Stablecoin Payments**: Offer payment in USDC, USDT, or other stablecoins for instant settlement. 
**Cryptocurrency Options**: Enable payment in AVAX, BTC, ETH, or other cryptocurrencies. **Mixed Payments**: Allow employees to split compensation between fiat and crypto. **Instant Settlement**: Blockchain payments settle instantly vs. days with traditional banking. **Lower Costs**: Reduce FX fees and international wire costs with crypto payments. This flexibility is particularly valuable for Web3 companies and teams preferring crypto compensation. ## Avalanche Integration Rise's blockchain payment infrastructure supports the Avalanche C-Chain, enabling: **AVAX Payroll**: Pay employees directly in AVAX with fast finality and low fees. **Stablecoin Payments**: Leverage USDC on Avalanche for dollar-denominated crypto payroll. **Instant Settlement**: Benefit from Avalanche's sub-second transaction finality. **Low Costs**: Avalanche's minimal fees make frequent payments economically viable. **Global Reach**: Send payments anywhere in the world over the Avalanche network. **Compliance**: Maintain necessary tax reporting even with crypto payments. Rise's Avalanche integration demonstrates its commitment to supporting Web3-native payroll solutions. ## Use Cases Rise serves diverse organizations with global teams: **Web3 Startups**: Pay distributed crypto-native teams in their preferred currencies and digital assets. **Global Enterprises**: Manage payroll for thousands of employees across dozens of countries. **Remote-First Companies**: Support a fully distributed workforce with compliant global payroll. **Rapid Expansion**: Quickly enter new markets and hire talent without local entity setup. **Contractor Management**: Compliantly manage large contractor workforces globally. **Hybrid Organizations**: Support a mix of full-time employees and contractors worldwide. **DAOs**: Enable decentralized organizations to pay contributors compliantly. **Crypto Companies**: Offer crypto-native payroll to teams building blockchain applications. 
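The "mixed payments" option described above — one gross salary split into a stablecoin leg and a fiat leg — reduces to a small rounding-safe calculation. This is an illustrative sketch under stated assumptions, not Rise's API: the `PayLeg` type and the `splitCompensation` helper are names invented for the example.

```typescript
// Illustrative sketch of splitting one salary between crypto and fiat rails.
// The PayLeg type and splitCompensation helper are hypothetical, not Rise's API.
interface PayLeg {
  rail: "fiat" | "stablecoin";
  currency: string;    // e.g. "USD" or "USDC"
  amountCents: number; // integer cents, to avoid floating-point drift
}

function splitCompensation(
  grossCents: number,
  cryptoPct: number,           // 0..100, employee-chosen crypto share
  stablecoin = "USDC",
  fiat = "USD",
): PayLeg[] {
  if (cryptoPct < 0 || cryptoPct > 100) throw new Error("cryptoPct out of range");
  // Round the crypto leg down; the fiat leg absorbs the remainder so
  // the two legs always sum exactly to gross pay.
  const crypto = Math.floor((grossCents * cryptoPct) / 100);
  return [
    { rail: "stablecoin", currency: stablecoin, amountCents: crypto },
    { rail: "fiat", currency: fiat, amountCents: grossCents - crypto },
  ];
}
```

Working in integer cents and letting one leg absorb the rounding remainder is the standard way to guarantee the legs reconcile exactly against gross pay in the payroll ledger.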
## Compliance and Security Rise maintains rigorous compliance standards: - **Multi-Jurisdictional Compliance**: Expertise in employment law across 190+ countries - **Automated Tax Filing**: Handle federal, state, and local tax filings automatically - **Data Security**: SOC 2 Type II certified infrastructure with bank-level security - **Audit Trails**: Complete records for audits and regulatory requirements - **Data Privacy**: GDPR, CCPA, and other privacy regulation compliance - **Regular Updates**: Continuous monitoring of regulatory changes - **Insurance**: Employment practices liability insurance coverage - **Legal Expertise**: In-house legal team ensuring ongoing compliance ## Platform Capabilities ### Payroll Management - **Automated Payroll**: Set-and-forget recurring payroll processing - **Multi-Country**: Single platform for all countries - **Tax Calculations**: Automatic withholdings for all jurisdictions - **Payment Methods**: Bank transfer, crypto, stablecoins, local rails - **Approval Workflows**: Customizable payroll approval processes - **Pay Stubs**: Automatic generation of compliant pay stubs ### HR Administration - **Employee Records**: Centralized employee data and documentation - **Time Tracking**: Integration with time tracking systems - **Leave Management**: PTO, sick leave, and holiday tracking - **Benefits Enrollment**: Digital benefits selection and administration - **Performance Management**: Tools for reviews and feedback - **Onboarding/Offboarding**: Complete hire-to-retire workflows ### Reporting and Analytics - **Real-Time Dashboard**: View payroll status and metrics instantly - **Cost Analytics**: Understand total employment costs by country/department - **Compliance Reports**: Generate audit and compliance documentation - **Tax Reports**: Comprehensive tax reporting for all jurisdictions - **Custom Reports**: Build reports tailored to your needs - **Data Export**: Export data to accounting and analytics tools ## Integrations Rise 
connects with essential business tools: **Accounting Software**: QuickBooks, Xero, NetSuite for financial sync. **HRIS Platforms**: BambooHR, Workday, Greenhouse for HR data. **Time Tracking**: Harvest, Toggl, Clockify for hours worked. **Expense Management**: Expensify, Ramp for expense reimbursement. **Communication**: Slack, Microsoft Teams for notifications. **Blockchain Wallets**: MetaMask, Gnosis Safe for crypto payments. **Banking**: Integration with business bank accounts. ## Pricing Rise offers transparent, scalable pricing: - **Per-Employee Pricing**: Predictable costs based on headcount - **Country-Specific Rates**: Pricing varies by employment complexity - **EOR Premium**: Additional fees for Employer of Record services - **AOR Pricing**: Competitive rates for contractor management - **No Setup Fees**: No upfront costs to get started - **Volume Discounts**: Reduced rates for larger teams - **Custom Enterprise**: Tailored pricing for large organizations Contact Rise for detailed pricing based on your team size and locations. ## Competitive Advantages **Global Scale**: Support for 190+ countries from a single platform. **Payment Flexibility**: Unique combination of fiat, stablecoin, and crypto options. **Web3 Native**: Purpose-built for crypto-native companies and teams. **Compliance First**: Automated compliance reduces legal risk. **Fast Deployment**: Hire globally in days, not months. **Cost Efficiency**: Reduce costs vs. traditional international payroll. **Blockchain Integration**: Native support for Avalanche and other networks. 
## Customer Support Rise provides comprehensive support: - **Dedicated Account Manager**: Personalized support for your organization - **24/7 Global Support**: Support teams across time zones - **Compliance Experts**: Access to employment law specialists - **Onboarding Assistance**: White-glove setup and migration support - **Training**: Platform training for HR and finance teams - **Knowledge Base**: Comprehensive documentation and guides - **Community**: Connect with other Rise customers ## Why Choose Rise **Simplicity**: Manage global payroll in one platform instead of dozens of providers. **Flexibility**: Pay teams however they prefer—fiat, stablecoins, or crypto. **Compliance**: Sleep easy knowing you're compliant in every jurisdiction. **Speed**: Hire and pay globally faster than traditional solutions. **Web3 Ready**: Built for the future of work with blockchain-native payments. **Scalability**: Start with one country, scale to hundreds. **Transparency**: Clear pricing with no hidden fees or surprises. # RockX (/integrations/rockx) --- title: RockX category: RPC Endpoints available: ["C-Chain"] description: "RockX provides blockchain infrastructure services including RPC endpoints and staking solutions for Web3 applications." logo: /images/rockX.jpg developer: RockX website: https://www.rockx.com/ documentation: https://www.rockx.com/ --- ## Overview RockX is a blockchain infrastructure provider offering RPC services, node infrastructure, and staking solutions for Web3 applications. It supports multiple blockchain networks including Avalanche, providing developers with reliable, high-performance access to blockchain data and networks. ## Features - **RPC Infrastructure**: High-performance RPC endpoints for blockchain access. - **Node Services**: Reliable node infrastructure for various blockchain networks. - **Staking Solutions**: Comprehensive staking services for proof-of-stake networks. 
- **Multi-Chain Support**: Support for multiple blockchain networks and protocols. - **High Availability**: Infrastructure with reliable uptime. - **Performance Optimized**: Optimized nodes for fast response times and low latency. - **Enterprise Solutions**: Scalable infrastructure for enterprise applications. - **Security Focus**: Institutional-grade security for node operations. ## Documentation For more information, visit the [RockX website](https://www.rockx.com/). ## Use Cases - **DApp Development**: Reliable RPC infrastructure for decentralized applications. - **Staking Services**: Infrastructure for staking and validator operations. - **Institutional Solutions**: Enterprise-grade infrastructure for institutional clients. - **Node Operations**: Managed node services for blockchain networks. # Routescan (/integrations/routescan) --- title: Routescan category: Explorers available: ["C-Chain", "All Avalanche L1s"] description: Routescan provides block explorer as a service for Avalanche L1s, offering real-time data on transactions, blocks, validators, and more. logo: /images/routescan.avif developer: Routescan website: https://routescan.io/explorer-as-a-service documentation: https://routescan.io/documentation --- ## Overview Routescan is a block explorer service for Avalanche L1 networks. It provides real-time data on transactions, blocks, validators, and other metrics. Routescan is scalable and customizable, suited for developers and enterprises integrating block explorer capabilities into Avalanche-based solutions. ## Features - **Real-Time Data**: Access up-to-date information on transactions, blocks, and validators across Avalanche L1s. - **Explorer as a Service**: Utilize Routescan’s block explorer as a service to integrate blockchain monitoring into your own platform or application. - **Customizable Solutions**: Tailor the explorer to fit specific needs, offering flexibility for different use cases. 
- **Detailed Validator Information**: Get insights into validator performance, staking details, and historical data. - **Scalable Infrastructure**: Built to handle large volumes of data and transactions. - **API Integration**: Use Routescan’s API to fetch and display blockchain data within your own applications. ## Getting Started 1. **Visit the Routescan Website**: Explore the [Routescan website](https://routescan.io/explorer-as-a-service) to learn about their explorer service. 2. **Access Documentation**: Check the [Routescan Documentation](https://routescan.io/documentation) for setup guides. 3. **Integrate the API**: Obtain an API key and start integrating Routescan’s data into your applications. 4. **Customize Your Explorer**: Tailor the block explorer for your Avalanche L1 project. 5. **Monitor Network Activity**: Use Routescan to monitor transactions, blocks, and validators in real-time. ## Documentation For more details, visit the [Routescan Documentation](https://routescan.io/documentation). ## Use Cases Routescan is ideal for: - **Blockchain Developers**: Integrate a scalable block explorer into your Avalanche L1 projects with real-time network data. - **Enterprises**: Use Routescan’s customizable explorer service to monitor and manage blockchain activity at scale. - **Validators**: Track validator performance and optimize operations using detailed real-time and historical data. - **Platform Providers**: Offer block explorer functionality as a value-added service on your own platforms using Routescan’s infrastructure. # RWA.xyz (/integrations/rwa-xyz) --- title: RWA.xyz category: Analytics & Data available: ["C-Chain"] description: "RWA.xyz provides analytics and data tracking for Real World Assets (RWA) tokenized on blockchain, including Avalanche." 
logo: /images/rwa.jpeg developer: RWA.xyz website: https://rwa.xyz/ documentation: https://rwa.xyz/ --- ## Overview RWA.xyz is an analytics platform focused on Real World Assets (RWA) tokenized on blockchain networks, including Avalanche. It provides data, metrics, and insights into the RWA sector, helping users track tokenized assets, market trends, and ecosystem developments. # Sardine (/integrations/sardine) --- title: Sardine category: Fiat On-Ramp available: ["C-Chain"] description: Sardine provides fraud prevention and risk management infrastructure for crypto on-ramp and off-ramp solutions with built-in compliance and KYC capabilities. logo: /images/sardine.png developer: Sardine website: https://www.sardine.ai/ documentation: https://docs.sardine.ai/ --- ## Overview Sardine is a fraud prevention and risk management platform for crypto and financial applications. It specializes in fraud detection and also offers integrated fiat-to-crypto on-ramp solutions, letting businesses offer secure crypto purchases with built-in compliance, KYC verification, and real-time fraud prevention. The platform uses machine learning and behavioral biometrics to detect fraud across the customer journey, from account creation to transaction processing. Sardine supports Avalanche and other major blockchains, helping applications reduce fraud while improving conversion rates. ## Features - **Fraud Prevention**: Real-time fraud detection using machine learning, behavioral biometrics, and device intelligence. - **Risk Management**: Risk scoring and monitoring across the customer lifecycle. - **KYC/AML Compliance**: Built-in identity verification and anti-money laundering screening that meets global regulatory requirements. - **Fiat On-Ramp Integration**: Payment rail integration for crypto purchases with fraud protection. - **Payment Method Support**: Accept credit cards, debit cards, ACH, and other payment methods with built-in fraud prevention. 
- **Behavioral Biometrics**: Analyze user behavior patterns to detect account takeovers and synthetic identity fraud. - **Device Intelligence**: Track and analyze device fingerprints to prevent fraud and account sharing. - **Transaction Monitoring**: Real-time monitoring of crypto transactions for suspicious activity. - **Sanctions Screening**: Automated screening against global sanctions lists and watchlists. - **Customizable Rules Engine**: Create custom risk rules tailored to your business needs. - **Webhooks and APIs**: REST APIs with real-time webhooks for fraud alerts and risk events. - **Dashboard and Analytics**: Access detailed analytics and insights on fraud patterns and user behavior. ## Getting Started To integrate Sardine into your application: 1. **Create a Sardine Account**: Sign up on the Sardine platform to access the dashboard and API credentials. 2. **Complete Onboarding**: Go through Sardine's onboarding process to configure your fraud prevention policies and risk thresholds. 3. **Obtain API Keys**: Generate your API keys from the Sardine dashboard for both sandbox and production environments. 4. **Choose Integration Approach**: Sardine offers multiple integration options: - **Full Platform Integration**: Integrate Sardine's complete fraud prevention suite across user onboarding, transactions, and account management - **On-Ramp Integration**: Add Sardine's fiat-to-crypto on-ramp with built-in fraud prevention - **API Integration**: Build custom solutions using Sardine's fraud detection and risk management APIs 5. **Configure Risk Rules**: Set up your fraud detection rules, risk scoring thresholds, and automated actions. 6. **Test in Sandbox**: Use Sardine's sandbox environment to test fraud scenarios and verify your integration. 7. **Set Up Webhooks**: Configure webhook endpoints to receive real-time notifications about fraud events and risk alerts. 8. 
**Go Live**: After testing, activate your production environment and start protecting your users from fraud. ## Avalanche Support Sardine supports fraud prevention and on-ramp services for Avalanche-based assets, including AVAX and USDC on the Avalanche C-Chain. ## Documentation For integration guides, API references, and fraud prevention best practices, visit: - [Sardine Documentation](https://docs.sardine.ai/) - [Integration Guides](https://docs.sardine.ai/guides/integration/integrationguides/overview) - [API Reference](https://docs.sardine.ai/guides/api-reference/overview) - [Knowledge Base](https://docs.sardine.ai/guides/knowledge-base/overview) ## Use Cases on Avalanche Some examples of how Sardine fits into Avalanche applications: **Cryptocurrency Exchanges**: Prevent fraud across user registration, deposits, withdrawals, and trades involving AVAX and Avalanche-based tokens. **DeFi Platforms**: Protect your platform from fraudulent activities while enabling secure fiat on-ramps for Avalanche DeFi protocols. **NFT Marketplaces**: Detect and prevent fraud in NFT purchases on Avalanche, protecting both buyers and sellers. **Wallet Applications**: Secure user onboarding and transactions with behavioral analysis and fraud detection for Avalanche wallets. **GameFi Applications**: Prevent account takeovers, payment fraud, and bot activity in Avalanche-based gaming platforms. **On-Ramp Providers**: Reduce chargebacks and fraud losses while maintaining high approval rates for legitimate users purchasing AVAX and Avalanche tokens. 
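The webhook step in the integration flow above (receiving real-time fraud alerts) typically requires verifying that each delivery really came from the provider. The sketch below shows the common HMAC-SHA256-over-raw-body pattern; the exact header name and signing scheme Sardine uses are assumptions here, so check the Sardine documentation for the real scheme before relying on this.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Illustrative webhook-verification sketch. HMAC-SHA256 over the raw
// request body is a common webhook pattern; Sardine's actual header
// name and signing scheme may differ -- treat this as an assumption.
function verifyWebhook(
  rawBody: string,      // raw request body, before any JSON parsing
  signatureHex: string, // value taken from the webhook signature header
  secret: string,       // shared secret configured in the dashboard
): boolean {
  const expected = createHmac("sha256", secret).update(rawBody).digest();
  const received = Buffer.from(signatureHex, "hex");
  // Length guard plus constant-time compare prevents timing attacks.
  return received.length === expected.length && timingSafeEqual(received, expected);
}
```

Verifying against the raw body (not a re-serialized JSON object) matters: re-serialization can reorder keys or change whitespace and make a valid signature fail.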
## Pricing Sardine's pricing is based on transaction volume and features used: - **Risk Assessment**: Priced per risk assessment or transaction screened - **KYC Verification**: Per verification with volume-based pricing tiers - **On-Ramp Services**: Transaction-based fees for fiat-to-crypto conversions - **Custom Enterprise Plans**: Tailored pricing for high-volume applications - **Volume Discounts**: Available for applications with significant transaction volumes For detailed pricing information and custom enterprise solutions, contact Sardine's sales team. ## Fraud Prevention Capabilities Sardine covers multiple fraud vectors: - **Account Opening Fraud**: Detect synthetic identities, stolen identities, and fake accounts during registration - **Payment Fraud**: Prevent credit card fraud, ACH fraud, and payment method abuse - **Account Takeover**: Identify unauthorized access through behavioral analysis and device intelligence - **Transaction Fraud**: Monitor crypto purchases and transfers for suspicious patterns - **Bot Detection**: Identify and block automated attacks and bot activity - **Money Laundering**: Detect suspicious transaction patterns indicative of money laundering - **Promo Abuse**: Prevent exploitation of promotional offers and referral programs ## Compliance and Security Sardine maintains the following compliance and security standards: - **SOC 2 Type II Certified**: Independently audited security controls and processes - **GDPR Compliant**: Full compliance with European data protection regulations - **PCI DSS Compliant**: Secure handling of payment card information - **Global Compliance**: Support for KYC/AML requirements across multiple jurisdictions - **Data Encryption**: End-to-end encryption for sensitive user data - **Regular Audits**: Ongoing security assessments by independent third parties ## Why Choose Sardine **Adaptive ML Models**: Machine learning models that continuously adapt to new fraud patterns. 
**Real-Time Decisions**: Instant accept/reject decisions without sacrificing security for speed. **Low False Positives**: Risk scoring that reduces false declines while catching actual fraud. **Full Journey Coverage**: Protection from onboarding through transactions. **Easy Integration**: Well-documented APIs and SDKs. **Proven Track Record**: Trusted by leading crypto and fintech companies. # Securitize (/integrations/securitize) --- title: Securitize category: Tokenization Platforms available: ["C-Chain"] description: Securitize is a leading regulated platform that enables compliant tokenization, issuance, trading, and management of real-world assets on blockchain, serving institutional and retail investors. logo: /images/securitize.png developer: Securitize website: https://securitize.io/ documentation: https://sec-connect-api-docs.securitize.io/ --- ## Overview Securitize is a digital securities platform for tokenizing, issuing, and managing real-world assets on blockchain networks. As a regulated transfer agent and SEC-registered broker-dealer, Securitize enables institutions and asset managers to bring traditional financial assets on-chain in a compliant manner. With over $4 billion in assets tokenized, Securitize works with leading financial institutions including BlackRock, KKR, and Hamilton Lane. The platform handles the full lifecycle of digital securities -- from token creation and investor onboarding to transfer agent services, fund administration, and secondary trading through its regulated Alternative Trading System (ATS). ## Features - **End-to-End Tokenization**: Complete platform for structuring, issuing, and managing tokenized securities. - **Regulatory Compliance**: SEC-registered transfer agent and broker-dealer with full regulatory infrastructure. - **Multi-Asset Support**: Tokenize private equity, credit funds, real estate, public stocks, treasuries, and more. 
- **Alternative Trading System (ATS)**: Regulated secondary marketplace for trading tokenized securities. - **Transfer Agent Services**: Compliant shareholder recordkeeping and corporate actions management. - **Fund Administration**: Fund operations including NAV calculations and investor reporting. - **Investor Onboarding**: Streamlined KYC/AML processes with accredited investor verification. - **Smart Contract Infrastructure**: Secure, audited smart contracts for token issuance and lifecycle management. - **Distribution Network**: Access to Securitize's network of qualified investors and institutional buyers. - **Compliance Automation**: Automated compliance checks for transfer restrictions and regulatory requirements. - **Multi-Chain Support**: Tokenization across multiple blockchain networks with interoperability. - **Institutional-Grade Security**: Enterprise security standards with SOC 2 Type II certification. - **Cap Table Management**: Digital cap table with real-time ownership tracking and reporting. - **Dividend Distribution**: Automated dividend and distribution payments to token holders. ## Getting Started To work with Securitize: 1. **For Asset Managers and Issuers**: - Contact Securitize's institutional team to discuss tokenization requirements - Define the asset or fund to be tokenized - Work with Securitize to structure the digital security - Complete legal documentation and regulatory filings - Launch the tokenized offering on Securitize's platform - Utilize transfer agent and fund administration services - Access secondary trading through Securitize Markets 2. **For Investors**: - Create an account on the Securitize platform - Complete KYC/AML verification and accreditation - Browse available tokenized investment opportunities - Invest in digital securities through the platform - Manage your portfolio and receive distributions - Trade on Securitize Markets (for eligible securities) 3. 
**For Technology Partners**: - Explore Securitize's APIs and integration capabilities - Partner with Securitize to embed tokenization services - Access Securitize's infrastructure for building complementary services ## Avalanche Support Securitize's platform supports multiple blockchain networks for token issuance. While Ethereum is commonly used, Securitize's infrastructure is designed to support EVM-compatible chains including Avalanche C-Chain, enabling efficient, low-cost tokenization of securities on Avalanche's high-throughput network. ## Major Partnerships and Assets Securitize has tokenized significant institutional assets: **BlackRock USD Institutional Digital Liquidity Fund (BUIDL)**: BlackRock's tokenized money market fund providing qualified investors with access to U.S. dollar yields on-chain. **KKR Funds**: Tokenization of KKR's Health Care Strategic Growth Fund II, expanding access to institutional private equity. **Hamilton Lane**: First tokenized Hamilton Lane funds in the U.S., democratizing access to private markets. **Franklin Templeton**: Distribution partnership for Franklin Templeton's tokenized funds. These partnerships reflect Securitize's focus on institutional-grade tokenization. ## Use Cases Securitize's platform enables tokenization across multiple asset classes: **Private Equity Funds**: Tokenize PE funds to increase accessibility, reduce minimums, and improve liquidity for LPs. **Real Estate**: Transform real estate holdings into digital securities with fractional ownership and enhanced liquidity. **Credit Funds**: Tokenize debt funds and fixed-income products for broader distribution. **Public Equities**: Create tokenized representations of public stocks with programmable features. **Treasury Products**: Offer tokenized exposure to U.S. Treasuries and money market instruments. **Revenue-Sharing Agreements**: Structure and tokenize revenue participation rights for businesses.
**Art and Collectibles**: Fractionally own high-value art and collectibles through compliant tokenization. ## Securitize Markets Securitize operates a regulated Alternative Trading System (ATS) for secondary trading: - **Regulated Marketplace**: SEC-registered ATS providing compliant secondary liquidity - **Qualified Investors**: Marketplace accessible to accredited and institutional investors - **Price Discovery**: Transparent order book for discovering fair market value - **Settlement Infrastructure**: Efficient on-chain settlement of trades - **Market Making**: Potential liquidity provision through market makers - **Trading Hours**: Extended trading windows beyond traditional market hours ## Technology Infrastructure Securitize's technology stack includes: - **Blockchain Agnostic**: Support for multiple blockchain networks - **Smart Contract Security**: Audited contracts with formal verification - **API Integration**: RESTful APIs for integration - **Compliance Layer**: On-chain compliance rules enforced through smart contracts - **Identity Management**: Secure identity verification and management - **Data Room**: Secure document management for offering materials - **Reporting Tools**: Reporting for investors and regulators - **Custody Integration**: Compatible with institutional-grade custody providers ## Regulatory Licensing Securitize holds the following regulatory licenses: - **SEC-Registered Transfer Agent**: Authorized to maintain shareholder records - **SEC-Registered Broker-Dealer**: Member of FINRA for securities distribution - **Alternative Trading System (ATS)**: Registered ATS for secondary trading - **Multi-Jurisdictional**: Compliance frameworks for international operations - **SOC 2 Type II Certified**: Independently audited security and compliance controls ## Benefits of Securitize **Institutional Trust**: Chosen by BlackRock, KKR, and Hamilton Lane for multi-billion dollar tokenizations. 
**Full Stack**: Only platform offering issuance, transfer agent, fund admin, and secondary trading. **Regulatory Certainty**: Full regulatory licensing eliminates compliance risks. **Proven at Scale**: Over $4 billion in tokenized assets demonstrates capability and reliability. **Investor Network**: Access to network of qualified investors seeking tokenized securities. **Technology**: Blockchain infrastructure with security and compliance built-in. **Secondary Liquidity**: Regulated ATS provides true liquidity for tokenized securities. ## Pricing Securitize offers customized pricing for institutional clients: - **Initial Setup**: One-time fees for structuring and launching tokenized securities - **Transfer Agent Fees**: Ongoing fees for shareholder recordkeeping and corporate actions - **Fund Administration**: Monthly fees for NAV calculation and investor reporting - **Trading Fees**: Transaction fees for secondary market trading - **Platform Fees**: Annual platform access and technology fees - **Custom Solutions**: Tailored pricing for complex or large-scale tokenizations Contact Securitize's institutional team for detailed pricing. # Sensei Node (/integrations/sensei-node) --- title: Sensei Node category: Validator Infrastructure available: ["C-Chain", "All Avalanche L1s"] description: "Sensei Node provides institutional-grade validator infrastructure and staking services for proof-of-stake networks." logo: /images/sensei-node.png developer: Sensei Node website: https://www.senseinode.com/ documentation: https://docs.senseinode.com/ --- ## Overview Sensei Node is a professional blockchain infrastructure provider offering institutional-grade validator services and staking solutions. With a focus on security, reliability, and performance, Sensei Node enables institutions and individuals to participate in Avalanche network validation with confidence. 
## Features - **Professional Validation**: Institutional-grade validator operations - **High Availability**: Redundant infrastructure for maximum uptime - **Multi-Chain Support**: Validation services across leading PoS networks - **Staking Services**: Comprehensive staking solutions for clients - **Security Focus**: Enterprise security practices for validator operations - **Performance Monitoring**: Real-time validator performance tracking ## Getting Started 1. **Contact Sensei Node**: Reach out through [Sensei Node](https://www.senseinode.com/) 2. **Discuss Requirements**: Share your validation or staking needs 3. **Onboarding**: Complete the setup process 4. **Operations**: Begin validation or staking operations 5. **Monitor**: Track performance through provided dashboards ## Documentation For more information, visit [Sensei Node Documentation](https://docs.senseinode.com/). ## Use Cases - **Validator Operations**: Run professional Avalanche validators - **Staking Delegation**: Delegate to Sensei Node validators - **Institutional Services**: Staking solutions for institutions - **L1 Infrastructure**: Validation services for Avalanche L1s # Sherry (/integrations/sherry-protocol) --- title: Sherry category: Mini-Apps available: ["C-Chain"] description: "Sherry is the publishing layer that turns any interaction — from a tweet to a chatbot message — into a verifiable, tappable onchain action." logo: /images/sherry.png developer: Sherry Labs website: https://sherry.social documentation: https://docs.sherry.social/docs/intro --- **Note:** The sherry.social website and documentation site appear to be experiencing downtime as of March 2026. The project's source code remains available on [GitHub](https://github.com/SherryLabs). ## Overview Sherry Protocol enables decentralised, verifiable distribution and execution of onchain actions. 
As the publishing layer for Web3, it allows anyone to create, embed, and amplify interactive Web3 actions anywhere on the internet and execute complex multi-step flows directly from their context. The Sherry SDK empowers developers to create rich, composable mini-apps called Triggers that seamlessly integrate into social networks, making blockchain interactions as simple as sharing a link. Sherry is building the foundation for a universal communication standard between AI Agents and Web3 applications. ## Features - **Interactive Triggers**: Transform static posts into dynamic Web3 experiences with built-in validation and cross-chain support. - **Multi-Chain Support**: Native integration across Avalanche, Ethereum, and other blockchain networks. - **Multiple Action Types**: - **Blockchain Actions**: Smart contract interactions with rich parameter configuration - **Transfer Actions**: Native token and ERC20 transfers with customizable UIs - **Dynamic Actions**: User-defined actions with flexible parameters for custom logic - **HTTP Actions**: API calls and form submissions - **Nested Action Flows**: Complex multi-step processes with conditional logic - **Frictionless UX**: Users can interact with Web3 applications without leaving their social feed. - **Developer-Friendly SDK**: Full TypeScript support with comprehensive type definitions and built-in validation. - **AI Agent Ready**: Structured metadata designed to become a common language for AI agents to discover and execute blockchain capabilities. ## Getting Started To begin building with Sherry: 1. **Install the SDK**:

```bash
npm install @sherrylinks/sdk
# or
yarn add @sherrylinks/sdk
```

2.
**Create Your First Trigger**:

```typescript
import { createMetadata, Metadata } from '@sherrylinks/sdk';

const metadata: Metadata = {
  url: 'https://myapp.example',
  icon: 'https://example.com/icon.png',
  title: 'Send AVAX',
  description: 'Quick AVAX transfer',
  actions: [
    {
      label: 'Send 0.1 AVAX',
      description: 'Transfer 0.1 AVAX to recipient',
      to: '0x1234567890123456789012345678901234567890',
      amount: 0.1,
      chains: { source: 'avalanche' },
    },
  ],
};

const validatedMetadata = createMetadata(metadata);
```

3. **Deploy and Share**: Embed your trigger into social media posts and let users interact directly. ## Documentation For guides and technical documentation, visit [Sherry Documentation](https://docs.sherry.social/docs/intro). Key resources include: - **[SDK Introduction](https://docs.sherry.social/docs/intro)**: Getting started with the Sherry SDK - **[Action Types](https://docs.sherry.social/docs/api/action-types)**: Learn about different interaction types - **[Parameters Guide](https://docs.sherry.social/docs/api-reference/parameters/)**: Configure user inputs and validation - **[Chain Support](https://docs.sherry.social/docs/core-concepts/chains)**: Multi-chain deployment guide - **[Guides](https://docs.sherry.social/docs/getting-started/examples)**: Guides and code samples for common use cases ## Use Cases Sherry serves diverse Web3 integration needs: - **Social DeFi**: Enable token swaps, staking, and yield farming directly within social posts. - **DAO Governance**: Create voting interfaces and proposal submission forms embedded in community posts. - **NFT Experiences**: Allow minting, trading, and showcasing NFTs without leaving social platforms. - **Fundraising Campaigns**: Build donation and crowdfunding triggers with real-time progress tracking. - **Cross-Chain Operations**: Execute asset transfers between different blockchain networks. - **AI Agent Integration**: Provide structured interfaces for AI agents to interact with Web3 protocols.
- **Community Engagement**: Create interactive experiences that boost social media engagement while driving Web3 adoption. ## Supported Chains Sherry currently supports the following blockchain networks: - **Avalanche C-Chain** (`avalanche`) - **Avalanche Fuji Testnet** (`fuji`) Additional L1 chains are being added over time. ## Recent Milestones **Avalanche Codebase**: Part of Avalanche Codebase S25. ## Vision & Roadmap Sherry aims to be the performance layer of the permissionless internet — where every link, post, or message can carry embedded economic intent. We're building the foundation for a universal standard of communication between AI Agents and blockchain applications. Our structured metadata is designed to become a common language that allows AI Agents to: - **Discover Blockchain Capabilities**: Enable AI agents to understand available actions in each Web3 application - **Execute Complex Actions**: Make it easier for agents to compose and execute sequences of blockchain actions - **Universal Interoperability**: Establish bridges between different AI and blockchain ecosystems - **Complexity Abstraction**: Hide technical details so agents can focus on meeting user needs # Silo (/integrations/silo) --- title: Silo category: DeFi available: ["C-Chain"] description: "Silo is a decentralized lending protocol with isolated money markets, providing secure liquidity solutions on Avalanche." logo: /images/silo.png developer: Silo Finance website: https://silo.finance documentation: https://docs.silo.finance --- ## Overview Silo is a decentralized lending protocol on Avalanche's C-Chain with isolated money markets that minimize risk by segregating assets. This approach lets Silo support many tokens while maintaining strong security guarantees and efficient liquidity management. ## Features - **Isolated Markets**: Each token has its own isolated lending market, reducing systemic risk. - **Risk Containment**: Protocol-wide risk is minimized through market isolation.
- **Permissionless Listings**: Support for any token without requiring governance approval. - **Bridge Assets**: Connect isolated markets through bridge assets like ETH and stablecoins. - **Flexible Collateral**: Use a variety of assets as collateral for borrowing. - **Security Focused**: Enhanced security through isolated market architecture. ## Getting Started To use Silo: 1. **Access Platform**: Visit [Silo](https://silo.finance) and connect to Avalanche. 2. **Connect Wallet**: Link your Web3 wallet with AVAX for transaction fees. 3. **Supply Assets**: - Deposit assets to isolated markets to earn interest - Each market operates independently - Monitor your positions across different silos 4. **Borrow Safely**: Take out loans against your collateral with contained risk exposure. ## Documentation For detailed protocol information and guides, visit the [Silo Documentation](https://docs.silo.finance). ## Use Cases - **Safe Lending**: Earn interest with reduced protocol-wide risk through isolated markets. - **Diverse Assets**: Access lending and borrowing for many different tokens. - **Risk Management**: Isolated market architecture contains potential exploits. - **Yield Generation**: Earn yields by supplying assets to various silos. - **Flexible Borrowing**: Access liquidity against diverse collateral types. # Skybridge (/integrations/skybridge) --- title: Skybridge category: Assets available: ["C-Chain"] description: "Skybridge Capital offers tokenized fund products providing institutional investors access to digital asset strategies." logo: /images/skybridge.png developer: Skybridge Capital website: https://www.skybridge.com/ --- ## Overview Skybridge Capital is a global alternative investment firm offering tokenized fund products and digital asset strategies. Through blockchain technology, Skybridge gives institutional and qualified investors access to alternative investment opportunities including digital assets and tokenized fund structures. 
## Features - **Tokenized Funds**: Fund products available in tokenized form - **Digital Asset Strategies**: Professional digital asset investment strategies - **Alternative Investments**: Broad alternative investment expertise - **Institutional Standards**: Investment processes meeting institutional requirements - **Transparency**: Enhanced transparency through tokenization - **Professional Management**: Experienced investment management ## Getting Started 1. **Visit Skybridge**: Explore [Skybridge Capital](https://www.skybridge.com/) 2. **Review Products**: Learn about available fund products 3. **Qualification**: Complete investor qualification process 4. **Investment**: Access Skybridge investment products 5. **Monitoring**: Track investment performance ## Use Cases - **Fund Access**: Access alternative investment funds - **Digital Asset Exposure**: Gain exposure to digital asset strategies - **Tokenized Ownership**: Own tokenized fund shares - **Institutional Portfolio**: Add digital assets to institutional portfolios # Snowpeer (/integrations/snowpeer) --- title: Snowpeer category: Analytics & Data available: ["C-Chain", "All Avalanche L1s"] description: "Snowpeer provides Avalanche network analytics, offering real-time monitoring, validator insights, and network performance metrics." logo: /images/snowpeer.png developer: Snowpeer website: https://snowpeer.io/ documentation: https://snowpeer.io/ --- ## Overview Snowpeer is an analytics platform built specifically for the Avalanche network, providing real-time data and insights into network performance, validator operations, and blockchain metrics. It offers monitoring tools for developers, validators, and network participants to track and analyze Avalanche ecosystem activity. 
## Features - **Network Analytics**: - Real-time network metrics - Validator performance tracking - Node monitoring - Network health indicators - **Data Visualization**: - Interactive dashboards - Historical data charts - Performance graphs - Network topology views - **Validator Insights**: - Uptime tracking - Delegation statistics - Reward analytics - Performance comparisons - **Monitoring Tools**: - Alert notifications - Custom metrics tracking - API access - Data exports ## Getting Started To use Snowpeer: 1. **Access Platform**: - Visit [Snowpeer](https://snowpeer.io) - Explore available metrics and dashboards - Set up monitoring preferences 2. **Monitor Network**: - Track validator performance - View network statistics - Analyze historical trends 3. **Integration**: - Access API endpoints - Configure alerts - Export data for analysis ## Documentation For more information and detailed guides, visit the [Snowpeer website](https://snowpeer.io/). ## Use Cases Snowpeer enables: - **Validator Operations**: Monitor and optimize validator performance - **Network Analysis**: Track Avalanche network health and metrics - **Research**: Access historical data for blockchain research - **Delegation Decisions**: Evaluate validators for informed delegation - **Portfolio Tracking**: Monitor staking rewards and performance # SnowScan (/integrations/snowscan) --- title: SnowScan category: Explorers available: ["C-Chain", "All Avalanche L1s"] description: SnowScan is a block explorer for the Avalanche network, providing real-time data on transactions, blocks, validators, and more. logo: /images/snowscan.png developer: Etherscan website: https://snowscan.xyz/ documentation: https://docs.snowscan.xyz/ --- ## Overview SnowScan is a block explorer for the Avalanche network, developed by Etherscan. It provides real-time data on transactions, blocks, validators, and other key metrics within the Avalanche ecosystem. If you've used Etherscan before, the interface will feel familiar. 
## Features - **Real-Time Data**: Access up-to-the-minute information on transactions, block confirmations, and validator activities on the Avalanche network. - **Detailed Analytics**: Get in-depth analytics on blocks, transactions, and addresses to better understand blockchain activity. - **Validator Monitoring**: Track validator performance and staking information with detailed insights and historical data. - **User-Friendly Interface**: Navigate the Avalanche blockchain with an intuitive, easy-to-use interface similar to Etherscan. - **API Integration**: Utilize the SnowScan API to integrate blockchain data into your applications and services. - **Comprehensive Search Tools**: Easily find specific transactions, addresses, or blocks with advanced search functionalities. ## Getting Started To start using SnowScan: 1. **Visit the SnowScan Website**: Explore the [SnowScan website](https://snowscan.xyz/) to begin navigating the Avalanche network. 2. **Search and Analyze**: Use the search functionality to find transactions, addresses, and block information, or dive into detailed analytics. 3. **Monitor Validators**: Access performance data for validators, including staking details and historical metrics. 4. **Use the API**: For developers, refer to the [SnowScan Documentation](https://docs.snowscan.xyz/) to learn how to integrate SnowScan’s API into your projects. 5. **Explore the Avalanche Network**: Use SnowScan’s tools to monitor and analyze network activity. ## Documentation For more detailed information on using SnowScan and integrating its features, visit the [SnowScan Documentation](https://docs.snowscan.xyz/). ## Use Cases SnowScan is ideal for: - **Blockchain Developers**: Monitor and analyze transactions, blocks, and validators to support development and deployment on the Avalanche network. - **Validators**: Optimize performance and track staking activities with real-time data and insights. 
- **Researchers and Analysts**: Conduct detailed analyses of the Avalanche blockchain using comprehensive data provided by SnowScan. - **General Users**: Explore and verify transactions, blocks, and network activities with a user-friendly block explorer. # SnowTrace (/integrations/snowtrace) --- title: SnowTrace category: Explorers available: ["C-Chain"] description: "SnowTrace is a block explorer for the Avalanche network, now operated by Routescan, providing real-time data on transactions, blocks, and more." logo: /images/snowtrace.jpg developer: Routescan website: https://snowtrace.io/ documentation: https://snowtrace.io/documentation --- > **Note:** The original Etherscan-powered Snowtrace was discontinued on November 30, 2023. Snowtrace.io is now operated by Routescan and remains active as an Avalanche C-Chain block explorer. ## Overview SnowTrace is a block explorer for the Avalanche network, developed by Routescan. It provides real-time data on transactions, blocks, validators, and other blockchain metrics. SnowTrace offers features for developers, validators, and anyone tracking Avalanche network activities. ## Features - **Real-Time Blockchain Data**: Access up-to-date information on transactions, block confirmations, and validator activities. - **Comprehensive Search Tools**: Easily search for specific transactions, addresses, blocks, and more using advanced search functionality. - **Validator Insights**: Get detailed information on validator performance, including staking details and historical data. - **Network Overview**: View a holistic overview of the Avalanche network, including key metrics and network status. - **User-Friendly Interface**: Navigate through blockchain data with a clean, intuitive interface suitable for both beginners and advanced users. - **API Access**: Integrate SnowTrace data into applications via its API, providing developers with reliable access to blockchain data. ## Getting Started To start using SnowTrace: 1. 
**Visit SnowTrace**: Navigate to the [SnowTrace website](https://snowtrace.io/) to begin exploring the Avalanche network. 2. **Search and Explore**: Use the search functionality to find specific transactions, addresses, or blocks, and explore detailed blockchain data. 3. **Validator Data**: Dive into validator statistics to analyze performance and staking information. 4. **Check Network Status**: Monitor the overall health and status of the Avalanche network with real-time metrics. 5. **API Documentation**: Refer to the [SnowTrace Documentation](https://snowtrace.io/documentation) for information on using the API, including endpoints and usage examples. ## Documentation For more information on using SnowTrace and integrating its features into your projects, visit the [SnowTrace Documentation](https://snowtrace.io/documentation). ## Use Cases SnowTrace is ideal for: - **Blockchain Developers**: Analyze transactions and blocks for development purposes, debugging, and smart contract interaction. - **Validators**: Monitor validator operations and optimize performance based on real-time and historical data. - **Blockchain Enthusiasts**: Explore the Avalanche network and track transactions, blocks, and network activity. - **Analysts and Researchers**: Access data for in-depth blockchain analysis and research. # SocialScan (/integrations/socialscan) --- title: SocialScan category: Explorers available: ["C-Chain", "All Avalanche L1s"] description: "SocialScan is a social-focused blockchain explorer providing address labeling, identity verification, and community-driven data." logo: /images/socialscan.png developer: SocialScan website: https://socialscan.io/ documentation: https://docs.thehemera.com/ --- ## Overview SocialScan is a blockchain explorer with a focus on social identity and community-driven data. 
Beyond traditional block explorer features, SocialScan provides address labeling, identity verification, and social context for blockchain addresses, helping users understand who they're interacting with on-chain. ## Features - **Social Labels**: Community-contributed address identification - **Identity Verification**: Verified labels for known entities - **Transaction Explorer**: Standard block explorer functionality - **Address Analytics**: Activity analysis for blockchain addresses - **Scam Detection**: Community reporting of suspicious addresses - **Multi-Chain Support**: Coverage across multiple blockchain networks ## Getting Started 1. **Visit SocialScan**: Access [SocialScan](https://socialscan.io/) 2. **Search Addresses**: Look up addresses with social context 3. **Contribute Labels**: Add labels for addresses you can verify 4. **Explore Transactions**: Use standard explorer features 5. **Report Issues**: Flag suspicious addresses for the community ## Documentation For more information, visit [SocialScan Documentation](https://docs.thehemera.com/). ## Use Cases - **Due Diligence**: Verify addresses before transacting - **Scam Prevention**: Check addresses against community reports - **Research**: Analyze on-chain activity with social context - **Community Building**: Contribute to address labeling # Space and Time (/integrations/spaceandtime) --- title: Space and Time category: Analytics & Data available: ["C-Chain", "Subnet"] description: "Space and Time is a decentralized data warehouse for indexing blockchain data and supporting onchain/offchain analytics." logo: /images/spacentime.jpg developer: Space and Time website: https://www.spaceandtime.io documentation: https://docs.spaceandtime.io --- ## Overview Space and Time is a decentralized data warehouse that provides blockchain indexing services for major blockchains, including Bitcoin, Ethereum, and Avalanche, enabling developers to access real-time data and build data-driven apps.
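The workflow this implies, index chain data into SQL tables and then query those tables from your app, can be sketched as below. The `AVALANCHE.BLOCKS` table name, the `block_number` column, and the request shape are illustrative assumptions; the Space and Time documentation has the real API contract.

```typescript
// Hypothetical sketch of preparing a SQL query for Space and Time's REST
// API. The schema.table name and request shape are assumptions for
// illustration; see the Space and Time docs for the actual contract.
interface SqlQueryRequest {
  sqlText: string;
}

// Ask for the most recent rows from an indexed Avalanche dataset.
function buildRecentBlocksQuery(table: string, limit: number): SqlQueryRequest {
  return { sqlText: `SELECT * FROM ${table} ORDER BY block_number DESC LIMIT ${limit}` };
}

const req = buildRecentBlocksQuery("AVALANCHE.BLOCKS", 10);
// A backend would POST this JSON body to the SQL endpoint with an API key
// (or a session token derived from it) in an Authorization header, then
// read the result rows back from the JSON response.
```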
## Key Features - **Integration with Major Chains**: Space and Time integrates with major blockchains like Ethereum, Polygon, and Avalanche, connecting data to smart contracts. - **Zero-Knowledge Proofs**: Uses zero-knowledge proofs for data privacy and integrity, allowing secure and verifiable computations. - **Proof of SQL**: This protocol ensures that off-chain compute results are cryptographically verifiable, enhancing data integrity and trust. - **Smart Contract Indexing**: Allows developers to generate custom tables in Space and Time from their own smart contract events. ## Getting Started To explore Avalanche data from Space and Time: 1. **Create UserID**: Create a new user with a username and password by registering via [Dreamspace](https://app.spaceandtime.ai). 2. **Create API key**: To authenticate your app queries, create an API key. - Go to "My Account", then click "Account settings" - Add a label (e.g., "AvalancheApp") and click Add. - Copy the API key shown; it will not be displayed again. 3. **Explore Indexed Avalanche Data Sets**: Go to the "Datasets" tab where you'll see all tables. 4. **Run Your First Avalanche Query**: Use the Query Editor to run queries directly on Avalanche chain data. 5. **Use Avalanche Data in your App**: Once your query works, you can fetch the results from your app via Space and Time's REST API. For a walkthrough, see the detailed [quick-start guide](https://docs.makeinfinite.com/docs/gettingstarted). ## Documentation For more details on querying data and building with Space and Time, visit the [Space and Time Documentation](https://docs.makeinfinite.com). ## Use Cases 1. **Sub-second ZK Coprocessor**: Space and Time pioneered the first sub-second ZK coprocessor so that your smart contract can ask data-driven questions about indexed data from every major chain and offchain data from any source in a real-time, ZK-proven way. 2. **Onchain app development**: Build data-driven apps with pre-built APIs for SQL operations.
Run SQL against your own offchain data tables and tables with realtime blockchain data that we've indexed from major chains. 3. **DeFi/lending**: With Space and Time, you can combine real-world credit scores with onchain transactions to create new Web3 credit scores for decentralized lending platforms. 4. **Gaming/NFTs**: Power your game with extremely low-latency transactions and run scale-out analytics against terabytes of data. 5. **Tamperproof SQL ledger**: Space and Time lets you prove compliance, track supply chain history, maintain an auditable record of financial transactions, and more by ensuring that your data is accurate, verifiable, and traceable at all times. 6. **Verifiable LLMs**: Build transparent, tamperproof, and provably neutral LLMs. # Spearbit (/integrations/spearbit) --- title: Spearbit category: Audit Firms available: ["C-Chain"] description: Spearbit is a network of top security researchers providing smart contract audits through a decentralized model that matches projects with the right auditor expertise. logo: /images/spearbit.jpg developer: Spearbit website: https://spearbit.com/ documentation: https://cantina.xyz/portfolio?section=spearbit-guild --- ## Overview Spearbit takes a different approach to smart contract security through a decentralized auditing model that connects projects with an elite network of top independent security researchers. Rather than employing auditors full-time, Spearbit curates a network of world-class security experts who have proven their capabilities through rigorous vetting. This approach ensures projects get matched with auditors who have the perfect expertise for their specific protocol type and technology stack. Spearbit's network includes security researchers who have found critical vulnerabilities in major protocols, contributed to Ethereum security, and built security tools used across the industry. 
By leveraging this distributed network of specialists, Spearbit provides institutional-grade security audits that rival or exceed traditional audit firms while maintaining flexibility and deep domain expertise. ## Services - **Smart Contract Audits**: Elite-tier security audits by top independent researchers. - **Protocol Security Reviews**: Comprehensive architecture and design assessment. - **Audit Matching**: Perfect matching between project needs and auditor expertise. - **Continuous Security**: Ongoing security monitoring and support. - **Security Reviews**: Multiple independent reviewers for maximum coverage. - **Cantina**: Competitive audit platform for additional security assurance. - **Incident Response**: Emergency support from experienced security researchers. - **Security Consulting**: Advisory services from leading experts. - **Custom Engagements**: Tailored security services for unique requirements. ## Decentralized Auditing Model Spearbit's innovative approach offers unique advantages: **Expert Matching**: Projects are matched with security researchers who specialize in their specific protocol type, technology, or domain. **Depth of Expertise**: Access to specialists in DeFi mechanisms, cryptography, governance, NFTs, or any other specific area. **Flexible Resourcing**: Scale audit teams up or down based on project complexity. **Independent Reviewers**: Multiple independent auditors provide diverse perspectives. **Quality Consistency**: All auditors vetted through rigorous security researcher screening. **Global Talent**: Access to the best security talent globally, not limited by geography. ## Security Researcher Network Spearbit's network includes auditors who have: - Found critical vulnerabilities in major protocols - Contributed to Ethereum core security - Built widely-used security tools - Published security research - Won major audit competitions - Worked at leading security firms This ensures every audit is conducted by proven experts. 
## Cantina Platform Spearbit operates Cantina, a competitive audit platform: **Competitive Audits**: Multiple security researchers compete to find vulnerabilities. **Higher Coverage**: More eyes on code means better vulnerability detection. **Time-Bounded**: Fixed-timeline competitive reviews. **Prize Pools**: Incentivizes thorough security research. **Complementary**: Can be used alongside traditional audits for extra assurance. ## Audit Methodology Spearbit employs a comprehensive approach: 1. **Requirements Analysis**: Understand protocol design, business logic, and security priorities 2. **Auditor Matching**: Select perfect security researchers for the engagement 3. **Automated Analysis**: Run comprehensive security tooling 4. **Manual Review**: Deep expert review by matched specialists 5. **Economic Analysis**: Review tokenomics and incentive mechanisms 6. **Integration Testing**: Test interactions with external protocols 7. **Attack Simulation**: Adversarial testing by security experts 8. **Findings Documentation**: Detailed report with severity classifications 9. **Review Session**: In-depth discussion with development team 10. **Remediation Support**: Ongoing consultation during fixes 11. 
**Verification Audit**: Thorough re-audit of all changes ## Avalanche Expertise Spearbit's network includes security researchers with deep Avalanche experience: - Avalanche C-Chain smart contract security - Subnet architecture and security - Cross-chain bridge audits - High-throughput protocol optimization - Avalanche-specific attack vectors - EVM compatibility considerations ## Access Through Areta Marketplace Avalanche projects can engage Spearbit through the [Areta Audit Marketplace](https://areta.market/avalanche): - **Elite Access**: Connect with Spearbit's network of top security researchers - **Competitive Process**: Receive proposals from Spearbit and other leading firms - **Transparent Pricing**: Clear costs and scope - **Subsidy Eligibility**: Qualify for up to $10k in audit cashback - **Fast Matching**: Get connected within 48 hours - **Ecosystem Support**: Marketplace built for Avalanche projects ## Audit Focus Areas **Complex DeFi**: Advanced DeFi mechanisms, derivatives, and innovative protocols. **Infrastructure**: Layer 1/2 protocols, consensus, and core infrastructure. **Bridges**: Cross-chain bridges and interoperability protocols. **Novel Mechanisms**: New protocols requiring specialized expertise. **High-Value Protocols**: Systems securing significant assets. **Cryptography**: Novel cryptographic implementations. ## Notable Engagements Spearbit's network has audited major protocols including: - Leading DeFi protocols with billions in TVL - Cross-chain bridge infrastructure - Layer 2 scaling solutions - Novel cryptographic systems - Major NFT platforms ## Why Choose Spearbit **Elite Talent**: Access to the industry's top independent security researchers. **Perfect Matching**: Get auditors with exact expertise your protocol needs. **Deep Expertise**: Specialists in specific protocol types and technologies. **Flexible Model**: Scale engagement based on complexity and budget. 
**Proven Results**: Network has found critical vulnerabilities in major protocols. **Multiple Reviewers**: Leverage multiple independent perspectives. **Cantina Option**: Additional competitive audit platform for extra assurance. **Institutional Quality**: Matches or exceeds traditional top-tier firms. ## Pricing Spearbit offers: - Pricing based on codebase complexity and required expertise - Flexible engagement models - Competitive rates for elite-tier security - Premium pricing reflecting top-tier talent Contact via Areta marketplace or directly for proposals. ## Getting Started To engage Spearbit: 1. **Via Areta Marketplace** (Recommended for Avalanche): - Visit [areta.market/avalanche](https://areta.market/avalanche) - Submit audit request with protocol details - Receive proposal from Spearbit - Access subsidies and streamlined process 2. **Direct Contact**: - Visit [spearbit.com](https://spearbit.com/) - Submit audit inquiry - Discuss requirements and matching - Receive detailed proposal ## Deliverables Spearbit provides: - **Comprehensive Audit Report**: Detailed findings from elite security researchers - **Executive Summary**: High-level overview for stakeholders - **Technical Analysis**: Deep technical assessment and recommendations - **Remediation Guidance**: Expert guidance for addressing issues - **Re-Audit Report**: Verification by same expert auditors - **Ongoing Access**: Continued access to auditors for questions # Stork (/integrations/stork-oracle-integration) --- title: Stork category: ["Oracles"] available: ["C-Chain"] description: Stork is a low-latency oracle delivering verifiable price feeds for DeFi protocols on Avalanche logo: /images/stork.png developer: Stork Labs website: https://stork.network documentation: https://docs.stork.network --- # Stork ## Overview Stork delivers price data to smart contracts on Avalanche with ultra-low latency, making it ideal for perpetuals markets, lending protocols, and other applications where timing 
matters. Stork supports new assets from day one and maintains consistent uptime during volatile market conditions. ## Features * **Low-Latency Data**: Stork delivers price updates faster than traditional oracles, keeping onchain markets functional during high volatility. * **Day-One Asset Support**: New digital assets are available immediately at launch, when users need them most. * **Proven Reliability**: Powers more than half the volume in decentralized perpetuals markets with best-in-class uptime. * **Accurate Price Feeds**: Transparent data provenance with verifiable accuracy for critical DeFi operations. ## Getting Started [Visit docs.stork.network](https://docs.stork.network) for integration guides, contract addresses, API documentation and available price feeds. # StraitsX (/integrations/straitsx) --- title: StraitsX category: Assets available: ["C-Chain"] description: "StraitsX provides regulated stablecoins including XSGD (Singapore Dollar stablecoin), offering tokenized fiat currencies for Southeast Asian markets." logo: /images/straitsx.png developer: StraitsX website: https://www.straitsx.com/ documentation: https://www.straitsx.com/ --- ## Overview StraitsX is a regulated stablecoin issuer providing XSGD (Singapore Dollar stablecoin) and other tokenized fiat currencies for Southeast Asian markets. Compliant under Singapore's Payment Services Act with transparent reserve management, StraitsX lets users and businesses access stable digital representations of regional currencies on blockchain for cross-border payments, remittances, and financial services. # Stripe (/integrations/stripe) --- title: Stripe category: Fiat On-Ramp available: ["C-Chain"] description: Stripe's fiat-to-crypto onramp enables users to securely purchase cryptocurrencies including AVAX and USDC directly from your platform or decentralized application. 
logo: /images/stripe.png developer: Stripe website: https://stripe.com/ documentation: https://docs.stripe.com/crypto/onramp --- ## Overview Stripe is a global payment infrastructure provider powering payments for millions of businesses. Its fiat-to-crypto onramp extends the payment platform to enable crypto purchases directly within applications and decentralized platforms. Stripe supports AVAX and USDC on Avalanche C-Chain, handling all regulatory requirements, KYC verification, fraud prevention, and dispute management. Stripe acts as the merchant of record for onramp transactions, assuming full liability for fraud and disputes. Returning users can complete purchases faster with saved payment methods and KYC data via Stripe Link. ## Features - **Merchant of Record**: Stripe acts as the legal entity responsible for facilitating crypto sales, handling all liability. - **Avalanche Support**: Native support for AVAX and USDC on Avalanche C-Chain for fast, low-cost transactions. - **Multiple Integration Options**: Choose from Stripe-hosted, embedded web, or native mobile SDK integrations. - **Zero Platform Fraud Liability**: Stripe handles all fraud prevention, disputes, and chargebacks. - **Comprehensive Compliance**: Built-in KYC/AML verification, sanctions screening, and regulatory compliance. - **Stripe Link Integration**: Returning users can check out faster with saved payment and KYC data through Stripe Link. - **Multiple Payment Methods**: Support for credit cards, debit cards, Apple Pay, and ACH (US only). - **Instant Crypto Delivery**: Eligible payment methods receive instant cryptocurrency delivery after KYC completion. - **Real-Time Quotes**: Automated pricing and exchange rate quotes for transparent transactions. - **Webhook Notifications**: Every session status change generates a webhook for real-time transaction tracking. - **Brand Customization**: Customize the onramp widget to match your application's branding. 
- **Pre-Population**: Pre-fill transaction parameters including wallets, amounts, currencies, and networks. - **Proven Infrastructure**: Built on Stripe's payment infrastructure used by millions of businesses. ## Integration Options Stripe offers three ways to integrate the crypto onramp: **Stripe-Hosted Onramp**: Redirect customers to a standalone Stripe-hosted page at crypto.link.com. This option requires minimal code and provides quick implementation with some customization options for amounts, currencies, and networks. **Embedded Onramp**: Embed the onramp directly into your website or mobile webview using the Onramp API. This option provides full brand customization and parameter control while keeping users within your application experience. **Embedded Components Onramp**: Native Android and iOS mobile integration with the Onramp SDK, offering the most UI customization and a fully native app experience. This option is currently in private preview. ## Getting Started To integrate Stripe's crypto onramp: 1. **Create a Stripe Account**: Sign up at [stripe.com](https://stripe.com/) or sign in to your existing Stripe account. 2. **Activate Your Account**: Complete Stripe's account activation process if you haven't already. 3. **Submit Onramp Application**: Apply for access to the crypto onramp feature through the Stripe Dashboard. Most applications are reviewed within 48 hours. 4. **Wait for Approval**: Stripe will notify you when your application is approved or if additional information is needed. You can check your application status anytime in the Dashboard. 5. **Choose Integration Method**: After approval, select between Stripe-hosted, embedded web, or mobile SDK integration based on your needs. 6. **Start Development**: Use Stripe's sandbox environment to develop and test your integration without processing real transactions. 7. **Configure Parameters**: Set up your preferred defaults for destination currencies, networks, amounts, and wallet addresses. 8. 
**Set Up Webhooks**: Configure webhook endpoints to receive real-time notifications about transaction status changes. 9. **Test Thoroughly**: Complete end-to-end testing in the sandbox environment before going live. 10. **Go Live**: Switch to production mode and start offering crypto purchases to your users. ## Avalanche Support Stripe's crypto onramp supports both AVAX and USDC on the Avalanche C-Chain, enabling users to purchase Avalanche-based assets with fiat currencies. **Note**: AVAX and USDC (Avalanche) are available in the United States (excluding New York) but are not currently supported in EU countries. ## Documentation For integration guides, API references, and implementation details, visit: - [Stripe Crypto Onramp Documentation](https://docs.stripe.com/crypto/onramp) - [Embedded Onramp Guide](https://docs.stripe.com/crypto/onramp/embedded-quickstart) - [Stripe-Hosted Onramp Guide](https://docs.stripe.com/crypto/onramp/stripe-hosted) - [Onramp API Reference](https://docs.stripe.com/api/crypto/onramp-sessions) - [Stripe Dashboard](https://dashboard.stripe.com/) ## Use Cases on Avalanche Examples of how Stripe's crypto onramp fits into Avalanche applications: **Decentralized Applications (Dapps)**: Enable users to purchase AVAX or USDC at checkout without leaving your Dapp, reducing friction in the user journey. **Cryptocurrency Wallets**: Integrate a trusted, compliant on-ramp directly within wallet applications to help users acquire Avalanche assets. **DeFi Platforms**: Provide a seamless entry point for users to purchase USDC on Avalanche for use in lending, borrowing, or trading protocols. **NFT Marketplaces**: Allow users to purchase AVAX or USDC on Avalanche at checkout to buy NFTs, eliminating the need for users to acquire crypto elsewhere. **GameFi Applications**: Enable players to purchase in-game tokens or assets on Avalanche using familiar payment methods like credit cards and Apple Pay. 
**Enterprise Applications**: Corporate use cases requiring Avalanche-based transactions, with Stripe handling compliance. **Payment Platforms**: Build crypto-enabled payment solutions that allow merchants to accept fiat while settling in AVAX or USDC on Avalanche. ## Geographic Availability Stripe's crypto onramp is currently available in: - **United States**: Full support for all payment methods and cryptocurrencies (note: AVAX and USDC on Avalanche not available in New York) - **European Union**: Available for supported cryptocurrencies (note: AVAX and USDC on Avalanche not currently supported in EU countries) Availability is subject to regulatory requirements and may expand to additional regions over time. ## Payment Methods Stripe supports multiple payment methods for crypto purchases: - **Credit Cards**: Major credit cards for instant crypto delivery after KYC - **Debit Cards**: Debit card payments with instant crypto delivery after KYC - **Apple Pay**: Streamlined mobile checkout experience with instant delivery - **ACH Bank Transfers**: US only, bank account-based payments for larger purchases All payment methods are eligible for instant crypto delivery after successful KYC completion. ## Pricing Stripe's pricing for crypto onramp transactions includes: - **Transaction Fees**: Stripe charges a fee per crypto purchase transaction - **Payment Processing**: Standard Stripe payment processing fees apply based on payment method - **No Hidden Fees**: Transparent pricing with no setup fees or monthly minimums - **Volume Pricing**: Contact Stripe for custom pricing for high-volume applications For detailed pricing information and custom enterprise arrangements, contact Stripe's sales team through the Dashboard. 
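To make the pre-population options described above concrete, here is a minimal sketch of building the form-encoded request body for an onramp session targeting USDC on Avalanche. The parameter names are assumptions based on a reading of the Onramp API reference; verify them against the linked documentation before relying on this.

```typescript
// Hedged sketch: form-encoded body for creating an onramp session
// (POST /v1/crypto/onramp_sessions). Parameter names are assumptions --
// confirm against docs.stripe.com/api/crypto/onramp-sessions.
function buildOnrampSessionBody(walletAddress: string): string {
  const params = new URLSearchParams();
  // Pre-populate the destination so the user lands on Avalanche/USDC.
  params.set("wallet_addresses[avalanche]", walletAddress);
  params.set("destination_network", "avalanche");
  params.set("destination_currency", "usdc");
  params.set("destination_amount", "100");
  return params.toString();
}
```

The body is sent from your backend with your Stripe secret key in the `Authorization` header; the response contains the session your frontend then uses to mount the hosted or embedded onramp.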
## Compliance and Security Stripe's compliance and security posture includes: - **Regulatory Compliance**: Fully licensed and compliant with US and EU regulations for crypto transactions - **KYC/AML Verification**: Automated know-your-customer and anti-money laundering screening for all users - **Sanctions Screening**: Built-in screening against global sanctions lists - **Fraud Prevention**: Stripe's advanced machine learning-based fraud detection protects all transactions - **PCI Compliance**: PCI DSS Level 1 certified, the highest level of payment security - **Data Security**: Security infrastructure with regular third-party audits - **Dispute Management**: Stripe handles all disputes and chargebacks, removing liability from platforms - **Privacy Protection**: Compliant with GDPR, CCPA, and other global privacy regulations ## Why Choose Stripe for Avalanche **Trusted Brand**: Stripe processes hundreds of billions of dollars annually for millions of businesses. **Proven Infrastructure**: The same payment infrastructure powering startups through Fortune 500 companies. **Simplified Compliance**: Stripe handles all regulatory requirements, reducing your compliance burden. **Faster Checkout**: Returning users benefit from Stripe Link for faster repeat purchases. **Developer-Friendly**: Well-documented APIs and responsive support. **Zero Fraud Liability**: Stripe assumes all fraud liability for your platform. # SubQuery (/integrations/subquery) --- title: SubQuery category: Indexers available: ["C-Chain"] description: SubQuery's decentralized infrastructure makes your dApp lightning quick, infinitely scalable, and absolutely unstoppable. logo: /images/subquery.webp developer: SubQuery website: https://subquery.network/indexer documentation: https://academy.subquery.network/ --- ## Overview SubQuery is a decentralized data indexing and querying solution for decentralized applications (dApps). 
It supports EVM-compatible chains like Avalanche's C-Chain, letting developers build high-performance dApps with fast access to blockchain data. ## Features - **Decentralized Infrastructure**: Runs on a decentralized network for high availability and resilience. - **Fast Queries**: Optimized for quick access to blockchain data. - **Scalable**: Handles increasing loads and data volumes as your application grows. - **Custom Indexing**: Define indexing logic specific to your application's needs. ## Getting Started To start using SubQuery: 1. **Visit the SubQuery Website**: Explore the features and offerings on the [SubQuery website](https://subquery.network/indexer). 2. **Access the Documentation**: Follow the [SubQuery Documentation](https://academy.subquery.network/) for detailed setup guides and tutorials. 3. **Set Up Indexing**: Implement custom indexing solutions using SubQuery’s tools and infrastructure. 4. **Deploy on C-Chain**: Use SubQuery to index and query data on the Avalanche C-Chain, ensuring high performance for your dApp. 5. **Monitor and Scale**: Utilize SubQuery’s scalable infrastructure to keep your dApp running smoothly as it grows. ## Documentation For guides, tutorials, and support, visit the [SubQuery Academy](https://academy.subquery.network/). ## Use Cases SubQuery is suitable for: - **Avalanche Developers**: Build efficient and scalable dApps on the Avalanche C-Chain using SubQuery's indexing solutions. - **High-Performance dApps**: Ensure your decentralized application can quickly and efficiently query blockchain data. - **Custom Data Indexing**: Create and deploy custom indexing solutions tailored to your project’s needs. # Subsquid (/integrations/subsquid) --- title: Subsquid category: Indexers available: ["C-Chain", "All Avalanche L1s"] description: "Subsquid is a decentralized data indexing network providing fast, customizable blockchain data access for Web3 applications." 
logo: /images/subsquid.png developer: Subsquid website: https://www.subsquid.io/ documentation: https://docs.subsquid.io/ --- ## Overview Subsquid is a decentralized data indexing network that provides fast, customizable access to blockchain data. Built for performance and flexibility, Subsquid enables developers to index and query blockchain data efficiently, powering data-intensive Web3 applications with sub-second query response times. ## Features - **Decentralized Network**: Distributed network of indexers - **High Performance**: Sub-second query response times - **Customizable**: Build custom data schemas and transformations - **Multi-Chain**: Support for Avalanche and many other chains - **GraphQL API**: Query data through GraphQL endpoints - **Real-Time**: Support for real-time data subscriptions ## Getting Started 1. **Install CLI**: Install the Subsquid CLI tools 2. **Create Squid**: Initialize a new squid project 3. **Define Schema**: Create your data schema and processors 4. **Deploy**: Deploy to the Subsquid network 5. **Query**: Access your indexed data via GraphQL ## Documentation For more details, visit [Subsquid Documentation](https://docs.subsquid.io/). ## Use Cases - **DApp Backends**: Power dApp backends with indexed data - **Analytics**: Build blockchain analytics platforms - **NFT Indexing**: Index NFT metadata and ownership - **DeFi Data**: Track DeFi protocol activity # Sumsub (/integrations/sumsub) --- title: Sumsub category: KYC / Identity Verification available: ["C-Chain"] description: Sumsub is an identity verification platform that offers KYC/AML solutions for compliant user onboarding, well suited for txAllowlist implementations. logo: /images/sumsub.png developer: Sumsub website: https://sumsub.com/ documentation: https://docs.sumsub.com/ --- ## Overview Sumsub is an identity verification platform providing KYC (Know Your Customer) and AML (Anti-Money Laundering) solutions for blockchain projects. 
It offers a streamlined verification process to help businesses ensure regulatory compliance. Sumsub is particularly useful for projects implementing txAllowlist precompiles that need to verify user identities before granting transaction permissions. ## Features - **Automated Verification Checks**: Sumsub provides a suite of automated verification tools including ID verification, proof of address checks, and watchlist screening. - **Multi-Channel Verification**: Offers various verification methods including document verification, biometric verification, and liveness detection. - **AML Screening**: Screens users against global sanctions lists, PEP (Politically Exposed Persons) databases, and adverse media listings. - **Customizable Workflows**: Build custom verification flows tailored to your specific risk profile and regional requirements. - **No-Code Integration**: Integrates easily with your platform through WebSDK, MobileSDK, or API connections. - **Global Compliance**: Ensures compliance with regulations across different jurisdictions worldwide. - **Fraud Prevention**: Advanced fraud detection technologies to prevent identity theft and account takeovers. ## Getting Started To integrate Sumsub into your Avalanche-based application, follow these steps: 1. **Sign Up**: Create an account on the [Sumsub website](https://sumsub.com/). 2. **Configure Verification Flow**: Set up your verification requirements and customize the user flow. 3. **Integrate**: Choose your preferred integration method (WebSDK, MobileSDK, or API) and follow the implementation guides in the [documentation](https://docs.sumsub.com/). 4. **Test**: Use Sumsub's testing environment to ensure the integration works as expected. 5. **Go Live**: Once testing is complete, move to the production environment and begin verifying users. ## Integration with txAllowlist Sumsub is an excellent choice for projects implementing txAllowlist precompiles on Avalanche. 
The txAllowlist precompile restricts who can issue transactions to the chain, making it ideal for permissioned networks where user verification is required. By integrating Sumsub: 1. **KYC/AML Verification**: Users complete verification through Sumsub's interface. 2. **Allowlist Approval**: Upon successful verification, your backend can automatically add the user's address to the txAllowlist. 3. **Transaction Permission**: Verified users can then submit transactions to the chain, while unverified users are blocked. This creates a compliant, permissioned environment where only KYC-verified users can interact with your blockchain. ## Documentation For detailed integration guides, API references, and customization options, visit the [Sumsub Documentation](https://docs.sumsub.com/). ## Use Cases Sumsub is ideal for a variety of Avalanche-based applications requiring identity verification: - **Financial Services**: DeFi protocols requiring regulatory compliance. - **Enterprise Blockchains**: Private or consortium networks that need to verify participant identities. - **Regulated Markets**: Applications operating in jurisdictions with strict KYC requirements. - **High-Value Transactions**: Platforms handling significant asset values that need enhanced security. # Supra dVRF (/integrations/supra-vrf) --- title: Supra dVRF category: VRF available: ["C-Chain", "EVM L1s"] description: Supra dVRF works behind the scenes to make dApps more engaging, fair, and secure. logo: /images/supra.png developer: Supra Labs website: https://supra.com/ documentation: https://docs.supra.com/dvrf --- ## Overview For Web3 dApps, you need randomness that's decentralized and verifiably recorded on-chain. That’s exactly what Supra dVRF solves. With a novel on-chain randomness generation mechanism, Supra’s dVRF is designed to power dApps with effectively random outcomes that are responsive, scalable, and easily verifiable.
- **Low-Latency Response**: The novel architecture of Supra dVRFs can compute and ship random outcomes in just moments, not minutes. - **Designed to Scale**: Our network leverages batching to achieve greater network efficiency as well as lower costs. - **Truly Decentralized**: Supra dVRFs use a secret sharing algorithm to distribute power across multiple nodes, so no one node has the power to compromise your apps or users. - **Natively Cross-chain**: Supra dVRF solves on-chain randomness for Web3 at large, not just a few networks. We’re already on 27 networks like Aptos, Arbitrum, Avalanche, Ethereum, Optimism, and Polygon. ## Documentation 1. **Supra dVRF**: Integrate [Supra dVRF](https://docs.supra.com/dvrf/build-with-supra-dvrf/getting-started) for tamper-proof, unbiased, and cryptographically verifiable random numbers for smart contracts. Also see the [Supra dVRF dashboard](https://supra.com/data/dvrf). 2. **Explore Supra's Layer 1**: Check [Docs](https://docs.supra.com/) to start building with the Full Supra Stack. ## Get Started Supra’s dVRF can provide the exact properties required for a random number generator (RNG) to be fair with tamper-proof, unbiased, and cryptographically verifiable random numbers to be employed by smart contracts. - **Unbiased and Unpredictable** - The threshold signature of the nonce, client-provided input, and blockhash of the transaction that requests the randomness (which is unknown at the time of request) is used as the seed for the RNG function. - **Tamper Proof and Verifiable** - Cryptographic proof will be provided to verify that random numbers were generated and communicated with the highest fidelity. ## How to subscribe to consume random numbers: #### Subscription Model Think of it like a prepaid phone plan, but for random numbers. You deposit funds upfront, and Supra uses them to pay gas fees for your VRF callbacks. 
- **Predictable costs:** Set gas limits upfront, no surprises - **Simplified contracts:** Your VRF consumer contracts don't need to handle payments - **Bulk management:** One subscription can serve multiple contracts - **Reliability:** Reserved minimum balance ensures your requests don't fail due to insufficient funds #### Why use subscriptions? - You create a subscription with your wallet address as the manager. - Deposit funds into your subscription account. - Register (whitelist) your smart contracts under this subscription. - When your contracts request random numbers, Supra automatically pays the callback gas fees from your subscription balance. - No need to handle gas payments in your contract code - it's all automated! **To start using dVRF, you need to create a subscription that will manage your random number requests and handle gas payments for callbacks. You can create a subscription in two ways: using the new web interface or through on-chain functions.** 1. **Via the web interface**: The easiest way to create your dVRF subscription is through the subscription manager UI at [supra.com/data/dvrf](https://supra.com/data/dvrf). 2. **Via on-chain functions**: For developers who prefer programmatic subscription creation, you can interact directly with the smart contracts. - EVM Chains

```solidity
// Interface for dVRF 3.0 Deposit Contract
interface IDeposit {
    function addClientToWhitelist(
        uint128 maxGasPrice,
        uint128 maxGasLimit
    ) external payable;
}
```

- Supra L1

```move
// Create subscription on Supra L1
public entry fun create_subscription(
    sender: &signer,
    max_gas_fee: u64,
    initial_deposit: u64
) {
    // Call the deposit module to create subscription
    deposit::add_client_to_whitelist(sender, max_gas_fee);
    deposit::deposit_fund(sender, initial_deposit);
}
```

- **Verification**: After creating your subscription, verify it's working correctly using the web interface or the `getSubscriptionByClient()` function.
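For sizing the initial deposit, a back-of-the-envelope calculation can help. This is an illustrative sketch only, not Supra's documented formula; the minimum-balance figure is a hypothetical stand-in for the reserved minimum mentioned above.

```typescript
// Illustrative sketch: worst-case deposit needed to cover N VRF callbacks.
// maxGasPriceWei/maxGasLimit mirror the addClientToWhitelist parameters;
// minBalanceWei is a hypothetical reserved minimum, not a documented value.
function estimateDepositWei(
  maxGasPriceWei: bigint,
  maxGasLimit: bigint,
  expectedRequests: bigint,
  minBalanceWei: bigint
): bigint {
  // Assume every callback burns the full gas limit at the max gas price.
  const worstCasePerCallback = maxGasPriceWei * maxGasLimit;
  return worstCasePerCallback * expectedRequests + minBalanceWei;
}

// e.g. 25 gwei max price, 300k gas per callback, 100 requests, 0.1 native-token reserve
const deposit = estimateDepositWei(
  25_000_000_000n,
  300_000n,
  100n,
  100_000_000_000_000_000n
);
```

Budgeting against the same `maxGasPrice`/`maxGasLimit` values you whitelist keeps the estimate consistent with what Supra can actually charge per callback.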
You can learn more about integration in our broader [documentation](https://docs.supra.com/dvrf/build-with-supra-dvrf/create-your-subscription).

# Supra (/integrations/supra)

---
title: Supra
category: Oracles
available: ["C-Chain"]
description: "Supra is the first MultiVM Layer 1 with full vertical integration of Native Oracles, dVRF, bridging, automation — creating a unified platform for building Super dApps."
logo: /images/supra.png
developer: Supra Labs
website: https://supra.com/
documentation: https://docs.supra.com/
---

## Overview

Supra is a MultiVM Layer 1 that recorded 500k TPS on 300 nodes with sub-second consensus latency. It is the first chain with full vertical integration of native oracles, dVRF, bridging, and automation, creating a unified platform for building Super dApps.

## Features

- **Fast Finality:** Near-instant refresh rates with full on-chain finality in 600-900ms.
- **Decentralized:** Decentralized at every level, from multi-source data collection to a globally distributed node network.
- **Security:** Randomized node network with built-in fail-safes for security guarantees.
- **Natively Interoperable:** Blockchain agnostic and compatible with over 58 networks including Aptos, Arbitrum, Avalanche, Ethereum, Optimism, and Polygon.
- **Scalable:** Novel consensus algorithm that processes hundreds of thousands of transactions per second.

## Getting Started

1. **Start Building using Supra Oracle**: Refer to the [docs](https://docs.supra.com/oracles) to gain a deeper understanding and detailed guides on integration.
2. **Explore Supra's Layer 1**: Check [Docs](https://docs.supra.com/) to start building with the Full Supra Stack.
3. **Supra dVRF**: Integrate [Supra dVRF](https://docs.supra.com/dvrf/build-with-supra-dvrf/getting-started) for tamper-proof, unbiased, and cryptographically verifiable random numbers for smart contracts.
4. **Supra Automation**: Check Supra's block-level [Automation Docs](https://docs.supra.com/automation) to create financial systems that are responsive, intelligent, and user-friendly without sacrificing decentralization.
5. **SupraNova Bridge**: SupraNova is a cross-chain communication framework developed by Supra. Check the [docs here](https://docs.supra.com/supranova).

## Documentation

Explore our detailed Oracle Docs for more info on:

- [Data Feeds](https://docs.supra.com/oracles/data-feeds)
- [APIs for Real-Time and Historical Data](https://docs.supra.com/oracles/apis-real-time-and-historical-data)
- [Indices](https://docs.supra.com/oracles/indices)

## Use Cases

1. **For DeFi**
   - Instant price feeds for decentralized exchanges
   - Monitoring stablecoin collateral values in real-time
   - Automatic portfolio rebalancing with accuracy
   - Instant TradFi prices for synthetics trading
2. **Gaming**
   - Real-time real-world data for prediction markets
   - Dynamic and evolving in-game assets
   - Monitoring floor prices and marketplaces
3. **Supply Chain**
   - Tracking product origin and provenance
   - Managing inventory levels and reordering triggers
   - Accurate and secure inventory management
4. **Web3 Identity**
   - Decentralized identity verification
   - Credit scoring and lending risk assessment
   - Reputation systems for community platforms

[Explore how oracles can help your Web3 project](https://supra.com/oracles-product/)

# SushiSwap (/integrations/sushiswap)

---
title: SushiSwap
category: DeFi
available: ["C-Chain"]
description: "SushiSwap is a multichain automated market maker (AMM) that enables users to trade, earn, and build on Avalanche's C-Chain."
logo: /images/sushiswap.jpeg
developer: Sushi
website: https://www.sushi.com/
documentation: https://docs.sushi.com/
---

## Overview

SushiSwap is a decentralized exchange (DEX) operating across multiple chains, including Avalanche's C-Chain.
It supports token swapping, yield farming, and liquidity provision. Built on the AMM model, SushiSwap maintains deep liquidity across various trading pairs. ## Features - **Multi-Chain Support**: Seamlessly trade assets across different blockchain networks. - **Trident AMM**: Advanced AMM framework offering multiple pool types for optimal trading. - **BentoBox**: Innovative vault system for capital efficient lending and borrowing. - **Yield Strategies**: Various farming opportunities for liquidity providers. - **Route Processor**: Optimized trading routes for better rates and reduced slippage. - **Concentrated Liquidity**: Enhanced capital efficiency through targeted liquidity provision. ## Getting Started To begin using SushiSwap on Avalanche: 1. **Access Platform**: Visit [Sushi](https://www.sushi.com/) and select Avalanche network. 2. **Connect Wallet**: Link your Web3 wallet and ensure you have AVAX for fees. 3. **Start Trading**: - Choose tokens to swap - Review rates and slippage settings - Confirm transaction 4. **Earn Yields**: Explore liquidity provision and farming opportunities. ## Documentation For guides and technical documentation, visit the [Sushi Documentation](https://docs.sushi.com/). ## Use Cases SushiSwap accommodates various DeFi activities: - **Token Trading**: Efficient swaps with competitive rates and minimal slippage. - **Liquidity Provision**: Earn fees by providing liquidity to trading pairs. - **Yield Farming**: Access additional rewards through SUSHI token emissions. - **Cross-Chain Operations**: Trade and manage assets across multiple networks. - **DeFi Integration**: Build on top of Sushi's infrastructure using their SDK. # Suzaku (/integrations/suzaku) --- title: Suzaku category: Validator Marketplace available: ["All Avalanche L1s"] description: Suzaku is the (re)staking protocol for sovereign networks. 
logo: /images/suzaku.png developer: Suzaku website: https://suzaku.network documentation: https://docs.suzaku.network --- ## Overview Suzaku is the (re)staking protocol for sovereign networks. With the Suzaku Framework, you can bootstrap and increase the cryptoeconomic security of your Avalanche L1, as well as scale and decentralize its validator set. ## Features - **Cryptoeconomic security to bootstrap**: Suzaku allows users to secure Avalanche L1s through (re)staking, enabling liquid staking of L1s' native tokens and dual staking security models with blue-chip tokens like AVAX, BTC, and ETH as collateral. - **Operators to scale and decentralize**: High-tier infrastructure providers and validators offer their services to Avalanche L1s through Suzaku. - **An ecosystem of decentralized services**: Some networks, like the [Suzaku Relayer Network](https://docs.suzaku.network/suzaku-rn), are purpose-built to provide critical services (i.e. censorship-resistant interoperability) to other Avalanche L1s. ## Getting Started The best way to get started with Suzaku is to go through the [Restaking Guide](https://docs.suzaku.network/suzaku-restaking/for-stakers/restaking-guide). ## Documentation The Suzaku documentation is available at [https://docs.suzaku.network](https://docs.suzaku.network). ## Use Cases - **Validator set management**: Open-source security modules to manage your L1 validator set using PoA, PoS, dual-staking, etc. - **L1 liquid staking**: LST-as-a-service for your Avalanche L1 and multi-LST liquidity pools. - **Scaling and decentralization**: Scale and decentralize your L1 with high-tier infrastructure providers and validators. - **Suzaku Relayer Network**: Adds censorship-resistance and enhanced security to Avalanche Warp Messaging for L1 bridges. 
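The dual-staking idea above — securing an L1 with both its native token and blue-chip collateral such as AVAX, BTC, or ETH — can be illustrated with a toy weight calculation. This is purely an illustration of why dual collateral raises the attack cost; the min-based formula and field names below are assumptions, not Suzaku's actual security-module rules.

```typescript
// Toy dual-staking weight: a validator's effective weight is capped by the
// lesser of its two collateral legs (valued in a common unit), so gaining
// weight requires acquiring BOTH the native token and the blue-chip asset.
// Purely illustrative; not Suzaku's actual rule.
interface ValidatorStake {
  nativeStakeUsd: number;   // L1 native token stake, USD-valued
  blueChipStakeUsd: number; // restaked AVAX/BTC/ETH collateral, USD-valued
}

function effectiveWeight(v: ValidatorStake): number {
  return Math.min(v.nativeStakeUsd, v.blueChipStakeUsd);
}

function totalSecurity(set: ValidatorStake[]): number {
  return set.reduce((acc, v) => acc + effectiveWeight(v), 0);
}
```

Under this toy rule, piling up only the (possibly cheap) native token does not increase a validator's weight, which is the intuition behind dual-staking security models.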
# Synapse Protocol (/integrations/synapse) --- title: Synapse Protocol category: Crosschain Solutions available: ["C-Chain"] description: Synapse is a cross-chain liquidity network enabling asset bridging and swaps across 20+ blockchain networks with optimized routing and unified liquidity pools. logo: /images/synapse.jpg developer: Synapse Protocol website: https://synapseprotocol.com/ documentation: https://docs.synapseprotocol.com/ --- ## Overview Synapse Protocol is a cross-chain liquidity network for transferring assets and executing swaps across 20+ blockchain ecosystems. It combines bridge infrastructure with cross-chain AMM functionality, providing optimized routing and unified liquidity pools to reduce slippage and improve capital efficiency. The protocol has facilitated billions in cross-chain volume and is used by hundreds of thousands of users. ## Features - **20+ Supported Chains**: Bridge assets across Ethereum, Avalanche, BNB Chain, Polygon, Arbitrum, Optimism, and more. - **Cross-Chain Swaps**: Swap assets across chains in a single transaction. - **Unified Liquidity Pools**: Shared liquidity across all chains improves efficiency. - **Optimized Routing**: Automatic routing finds the best path for transactions. - **Stablecoin Bridging**: Specialized in efficient stablecoin cross-chain transfers. - **Native Asset Support**: Bridge native assets like ETH, AVAX, and more. - **Low Slippage**: Unified pools provide better pricing than fragmented liquidity. - **Fast Transactions**: Optimized for quick cross-chain transfers. - **User-Friendly Interface**: Simple, intuitive bridge interface. - **Liquidity Provision**: Earn fees by providing liquidity to cross-chain pools. - **SYN Token**: Native token for governance and liquidity incentives. - **Developer SDK**: Tools for integrating Synapse into applications. 
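A cross-chain swap of the kind listed above can be quoted back-of-the-envelope: the amount crosses the bridge minus a fee, then swaps through a pool on the destination chain. The sketch below is an illustrative model, not Synapse's actual pricing; the fee values are assumptions and a generic constant-product pool stands in for Synapse's stableswap-style pools.

```typescript
// Illustrative quote for a cross-chain "swap and bridge" leg.
// Not Synapse's actual pricing; fees and pool model are assumptions.
interface Pool {
  reserveIn: number;
  reserveOut: number;
  fee: number; // proportional swap fee, e.g. 0.0004
}

// Constant-product swap: out = Rout * dx' / (Rin + dx'), with dx' = dx * (1 - fee).
function swapOut(pool: Pool, amountIn: number): number {
  const dx = amountIn * (1 - pool.fee);
  return (pool.reserveOut * dx) / (pool.reserveIn + dx);
}

// The bridge leg deducts a proportional fee, then the destination pool swap runs.
function quoteBridgeAndSwap(amountIn: number, bridgeFee: number, destPool: Pool): number {
  const bridged = amountIn * (1 - bridgeFee);
  return swapOut(destPool, bridged);
}
```

The model makes the unified-liquidity claim concrete: for the same trade, a deeper pool returns an output closer to the ideal 1:1 price, i.e. less slippage.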
## Core Functionality ### Cross-Chain Bridge Synapse's bridge enables asset transfers between chains: **Asset Bridging**: Transfer tokens from any supported chain to another. **Native Assets**: Bridge native assets without wrapping when possible. **Stablecoin Focus**: Optimized for USDC, USDT, DAI, and other stablecoins. **Gas Optimization**: Minimized gas costs for bridging operations. **Transaction Tracking**: Real-time status updates for all bridges. ### Cross-Chain AMM Synapse's automated market maker functionality: **Cross-Chain Swaps**: Swap token A on Chain X for token B on Chain Y. **Unified Pools**: Liquidity shared across all chains for better pricing. **Low Slippage**: Deeper liquidity provides better execution. **Optimal Routing**: Automatically finds best path through pools. **Swap and Bridge**: Combined swapping and bridging in one transaction. ## Avalanche Integration Synapse's Avalanche support includes: **AVAX Bridging**: Bridge AVAX to and from 20+ other chains. **Stablecoin Support**: Bridge USDC, USDT, and other stablecoins to Avalanche. **DeFi Connectivity**: Connect Avalanche DeFi to liquidity on other chains. **Fast Finality**: Leverage Avalanche's speed for quick bridging. **Low Fees**: Benefit from Avalanche's low transaction costs. **Growing Adoption**: Increasing usage within Avalanche ecosystem. ## Use Cases **Cross-Chain DeFi**: Access DeFi opportunities across multiple chains. **Liquidity Migration**: Move assets between chains as opportunities emerge. **Multichain Treasury**: Manage treasury assets across multiple blockchains. **Yield Optimization**: Chase yields across different blockchain ecosystems. **Bridge Aggregation**: Integrate Synapse into applications for cross-chain functionality. **Stablecoin Transfers**: Efficiently move stablecoins between ecosystems. ## Liquidity Provision Earn fees by providing liquidity: **LP Rewards**: Earn trading fees from bridge and swap users. **SYN Incentives**: Additional rewards in SYN tokens. 
**Multiple Pools**: Provide liquidity to various asset pools. **Cross-Chain Liquidity**: Single deposit earns fees across all chains. **Impermanent Loss**: Mitigated through stablecoin-focused pools. ## Getting Started To use Synapse: 1. **Visit Synapse Bridge**: Go to [synapseprotocol.com](https://synapseprotocol.com/) 2. **Connect Wallet**: Connect your wallet (MetaMask, etc.) 3. **Select Chains**: Choose source and destination chains 4. **Select Assets**: Choose asset to bridge or swap 5. **Review Transaction**: Check routing, fees, and estimated time 6. **Execute**: Complete the cross-chain transaction For developers, visit [docs.synapseprotocol.com](https://docs.synapseprotocol.com/) for integration guides. ## Security Synapse maintains strong security: - Multiple security audits by leading firms - Bug bounty program - Regular security assessments - Monitored by security teams - Transparent incident response # Tatum (/integrations/tatum) --- title: Tatum category: Developer Tooling available: ["C-Chain", "All Avalanche L1s"] description: "Tatum is a blockchain development platform providing unified APIs and SDKs for building Web3 applications across multiple networks." logo: /images/tatum.png developer: Tatum website: https://tatum.io/ documentation: https://docs.tatum.io/ --- ## Overview Tatum is a blockchain development platform that simplifies Web3 application development with unified APIs and SDKs. It supports 40+ blockchain networks including Avalanche, letting developers build, test, and deploy blockchain applications without managing infrastructure or learning multiple chain-specific APIs. 
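Under the unified API, EVM chains such as Avalanche C-Chain ultimately speak standard Ethereum JSON-RPC. The sketch below builds a standard `eth_blockNumber` request and decodes its hex result; the gateway URL is a placeholder, not a documented Tatum endpoint — consult Tatum's docs for the real one.

```typescript
// Build a standard EVM JSON-RPC request (eth_blockNumber) and decode the result.
// The URL below is a placeholder; check Tatum's docs for real gateway endpoints.
const RPC_URL = "https://example-gateway.invalid/avalanche-c/mainnet";

function buildRpcRequest(method: string, params: unknown[] = []): string {
  return JSON.stringify({ jsonrpc: "2.0", id: 1, method, params });
}

// JSON-RPC returns quantities as 0x-prefixed hex strings.
function decodeHexQuantity(hex: string): number {
  return Number.parseInt(hex, 16);
}

async function latestBlockNumber(): Promise<number> {
  const res = await fetch(RPC_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: buildRpcRequest("eth_blockNumber"),
  });
  const { result } = await res.json();
  return decodeHexQuantity(result);
}
```

Because every EVM chain accepts this same request shape, a provider only has to vary the endpoint per chain — which is what makes a single unified API across 40+ networks feasible.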
## Features - **Unified API**: Single API for 40+ blockchains - **Pre-Built Functions**: Ready-to-use blockchain operations - **Multi-Language SDKs**: SDKs for JavaScript, Java, PHP, and more - **Wallet Management**: Create and manage wallets programmatically - **NFT APIs**: Complete NFT minting and management - **Virtual Accounts**: Off-chain accounting for scalability ## Getting Started To use Tatum: 1. **Sign Up**: Create account at [Tatum](https://tatum.io/) 2. **Get API Key**: Access your credentials from dashboard 3. **Choose SDK**: Select SDK for your programming language 4. **Start Building**: Use Tatum APIs to build your application 5. **Deploy**: Launch your Web3 application ## Documentation For guides and API references, visit [Tatum Documentation](https://docs.tatum.io/). ## Use Cases - **Wallet Applications**: Build multi-chain wallet apps - **NFT Platforms**: Create NFT minting and marketplace features - **Token Operations**: Token creation, transfers, and management - **Exchange Development**: Build trading and exchange features # Taurus (/integrations/taurus) --- title: Taurus category: Custody available: ["C-Chain", "All Avalanche L1s"] description: "Taurus provides institutional-grade digital asset infrastructure including custody, trading, and tokenization solutions for financial institutions." logo: /images/taurushq.jpg developer: Taurus website: https://www.taurusgroup.ch/ documentation: https://www.taurusgroup.ch/ --- ## Overview Taurus is a Swiss-based digital asset infrastructure provider for banks, asset managers, and financial institutions. The platform covers custody, trading, and tokenization of digital assets with regulatory compliance built in. ## Features - **Regulated Custody**: Swiss-regulated custody solutions meeting institutional standards. - **Digital Asset Infrastructure**: End-to-end infrastructure for custody, trading, and settlement. 
- **Tokenization Platform**: Tools for creating and managing tokenized securities and assets. - **Multi-Asset Support**: Support for cryptocurrencies, stablecoins, and tokenized securities. - **Bank-Grade Security**: Security infrastructure designed for financial institutions. - **Compliance Framework**: Built-in regulatory compliance for institutional requirements. - **API Integration**: APIs for system integration. - **White-Label Solutions**: Customizable infrastructure for institutional branding. ## Documentation For more information, visit the [Taurus website](https://www.taurusgroup.ch/). ## Use Cases - **Banking Services**: Enable banks to offer digital asset services to clients. - **Asset Management**: Infrastructure for institutional asset managers and fund operators. - **Tokenization**: Issue and manage tokenized securities and real-world assets. - **Custody Services**: Regulated custody for institutional portfolios. - **Trading Infrastructure**: Institutional-grade trading and execution. # Tenderly (/integrations/tenderly) --- title: Tenderly category: Development Infrastructure available: ["C-Chain", "All Avalanche L1s"] description: Tenderly is a full-stack Web3 infrastructure platform that combines high-performance node services, mainnet-like development environments, and developer tools for building, testing, debugging, and scaling decentralized applications with real-time monitoring, transaction simulation, and automated security responses. logo: /images/tenderly.png developer: Tenderly website: https://tenderly.co/ documentation: https://docs.tenderly.co/ --- ## Overview Tenderly is a Web3 infrastructure platform for developing, testing, and scaling dApps. It provides high-performance node RPC services, development environments that mirror mainnet conditions, and developer tools for monitoring and debugging smart contracts. 
## Features - **[Virtual TestNets](https://docs.tenderly.co/virtual-testnets?mtm_campaign=ext-docs&mtm_kwd=avalanche)**: Provides mainnet-like development environments for testing and staging smart contracts with unlimited faucet access. - **[Simulator UI & Debugger](https://docs.tenderly.co/debugger?mtm_campaign=ext-docs&mtm_kwd=avalanche)**: Offers visual transaction debugging tools that reduce debugging time from hours to minutes. - **[Simulation RPC](https://docs.tenderly.co/simulations/single-simulations#simulate-via-rpc?mtm_campaign=ext-docs&mtm_kwd=avalanche)**: Enables accurate prediction of transaction outcomes and gas costs before on-chain execution. - **[Alerts & Web3 Actions](https://docs.tenderly.co/alerts/intro-to-alerts?mtm_campaign=ext-docs&mtm_kwd=avalanche)**: Deliver real-time monitoring and automated responses to on-chain events. - **[Node RPC](https://docs.tenderly.co/node/rpc-reference?mtm_campaign=ext-docs&mtm_kwd=avalanche)**: High-performance, low-latency access across multiple regions. ## Getting Started 1. **Create an Account**: Sign up at [Tenderly Dashboard](https://dashboard.tenderly.co/register/?mtm_campaign=ext-docs&mtm_kwd=avalanche) 2. **Set Up a TestNet**: Create a virtual TestNet for your development environment 3. **Access Faucet**: Utilize the unlimited faucet for testing purposes 4. **Explore Documentation**: Review the [official documentation](https://docs.tenderly.co/?mtm_campaign=ext-docs&mtm_kwd=avalanche) 5. **Integrate Tools**: Implement Tenderly's tools into your development workflow ## Documentation For more details, visit [Tenderly Documentation](https://docs.tenderly.co/?mtm_campaign=ext-docs&mtm_kwd=avalanche). 
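As a concrete example of the Simulation RPC feature above, a simulation request is an ordinary JSON-RPC call carrying a transaction object and a block tag. The payload below targets Tenderly's `tenderly_simulateTransaction` method; treat the exact parameter shape as an assumption and verify it against the current Node RPC reference before relying on it.

```typescript
// Sketch of a Tenderly simulation request payload (JSON-RPC).
// Method name per Tenderly's Node RPC docs; verify the exact parameter
// shape against the current reference before use.
interface TxCall {
  from: string;
  to: string;
  value?: string; // hex-encoded wei
  data?: string;  // hex-encoded calldata
}

function buildSimulationPayload(tx: TxCall, blockTag: string = "latest"): string {
  return JSON.stringify({
    jsonrpc: "2.0",
    id: 1,
    method: "tenderly_simulateTransaction",
    params: [tx, blockTag],
  });
}

const payload = buildSimulationPayload({
  from: "0x0000000000000000000000000000000000000001",
  to: "0x0000000000000000000000000000000000000002",
  value: "0xde0b6b3a7640000", // 1 AVAX in wei
});
```

Because the simulation runs against current chain state without broadcasting anything, the response can report expected gas usage and revert reasons before the transaction is ever sent on-chain.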
## Use Cases - **Smart Contract Development**: Test and debug contracts in isolated environments - **Protocol Updates**: Validate protocol upgrades and DAO proposals safely - **dApp Development and Staging**: Develop, stage, and test smart contracts and entire dApps - **Security Monitoring**: Implement automated security responses and real-time alerts - **Transaction Optimization**: Predict and optimize transaction outcomes and gas costs # Tesseract (/integrations/tesseract) --- title: Tesseract category: Crosschain Solutions available: ["C-Chain", "All Avalanche L1s"] description: "Avalanche's liquidity marketplace enabling fast asset movement and swaps across Avalanche and its ecosystem of L1s using ICM and ICTT technology." logo: /images/tesseract.jpg developer: Yield Yak Contributors website: https://www.tesseract.finance/swap documentation: https://docs.tesseract.finance/ --- ## Overview Tesseract is Avalanche's liquidity marketplace for moving and swapping assets across C-Chain and Avalanche L1s. Built on Avalanche Interchain Messaging (ICM) and Interchain Token Transfer (ICTT), it is a trustless, non-custodial, on-chain trading platform that connects liquidity between C-Chain and Avalanche L1 blockchains. By aggregating the Avalanche liquidity ecosystem, users on C-Chain and L1s get better prices on trades. L1 operators do not need to deploy their own DEX or manage liquidity pools -- a simple integration gives them access to Avalanche's deep liquidity. Tesseract is built by the contributors behind Yield Yak and has been audited by OpenZeppelin. 
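At its simplest, liquidity aggregation of the kind described above reduces to collecting quotes from multiple sources and executing against the best one. The sketch below is illustrative only; Tesseract's actual ICM/ICTT-based routing is far more involved, and the quote shape and source labels are assumptions.

```typescript
// Illustrative best-quote selection across liquidity sources.
// Tesseract's real routing (ICM/ICTT, multi-hop paths) is more involved.
interface Quote {
  source: string;     // a DEX or L1 pool (hypothetical labels)
  amountOut: bigint;  // output quoted for a fixed input amount
}

function bestQuote(quotes: Quote[]): Quote {
  if (quotes.length === 0) throw new Error("no liquidity sources available");
  return quotes.reduce((best, q) => (q.amountOut > best.amountOut ? q : best));
}
```

This is why aggregation helps both sides: traders always execute against the deepest available price, and an L1 plugged into the marketplace contributes and consumes liquidity without running its own DEX.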
## Features - **Fast Cross-Chain Swaps**: - Move assets between C-Chain and L1s quickly - Uses Avalanche's fast finality for near-instant settlements - Minimal wait times - **Unified Liquidity Marketplace**: - Access liquidity across the entire Avalanche ecosystem - Best price execution by aggregating liquidity sources - No fragmentation between C-Chain and L1 liquidity - **Trustless & Non-Custodial**: - Built on Avalanche ICM for trustless interchain communication - No intermediaries or custodians required - Users maintain full control of their assets - **ICM & ICTT Technology**: - Powered by Avalanche Interchain Messaging (ICM) - Utilizes Interchain Token Transfer (ICTT) for secure asset movement - Native integration with Avalanche's interchain infrastructure - **Simple L1 Integration**: - L1 operators can integrate without deploying a DEX - No need to bootstrap or manage liquidity pools - Instant access to C-Chain's deep liquidity - **Broad Asset Support**: - Swap any assets across supported chains - Support for native tokens and bridged assets - Expanding asset coverage as ecosystem grows ## Core Technology ### Avalanche ICM Integration Tesseract uses Avalanche's native Interchain Messaging (ICM) protocol: **Trustless Communication**: ICM provides secure, trust-minimized messaging between Avalanche chains. **Native Protocol**: Built on Avalanche's native interchain infrastructure, not external bridges. **Fast Finality**: Benefits from Avalanche's sub-second finality for rapid cross-chain operations. **Reliable Delivery**: Guaranteed message delivery between connected chains. ### ICTT for Token Transfers Interchain Token Transfer (ICTT) powers Tesseract's asset movements: **Secure Transfers**: Cryptographically secure token transfers between chains. **Native Assets**: Support for native asset transfers without wrapping. **Efficient Routing**: Optimized routing for minimal gas costs. 
**Standardized Protocol**: Uses Avalanche's standardized token transfer protocol. ### Liquidity Aggregation Tesseract's liquidity aggregation: **Unified Pools**: Access liquidity from multiple sources simultaneously. **Best Price Discovery**: Automatically finds the best available prices. **Capital Efficiency**: Maximizes liquidity utilization across the ecosystem. **Smart Routing**: Intelligent routing algorithms for optimal execution. ## Avalanche L1 Benefits Advantages for Avalanche L1 operators: **Instant Liquidity**: Access C-Chain liquidity without bootstrapping. **Zero Infrastructure**: No need to deploy or maintain a DEX. **Lower Costs**: Avoid expenses of liquidity incentives and pool management. **Better UX**: Users get access to deep liquidity immediately. **Focus on Core**: L1 teams can focus on their unique value proposition. **Easy Integration**: Simple technical integration process. ## Use Cases **Cross-Chain Trading**: Swap tokens between C-Chain and any connected L1. **L1 Ecosystem Access**: Move assets into new L1 ecosystems. **Arbitrage Opportunities**: Take advantage of price differences across chains. **Portfolio Management**: Rebalance portfolios across multiple chains efficiently. **Liquidity Provisioning**: Provide liquidity across the entire Avalanche ecosystem. **DeFi Strategies**: Execute complex DeFi strategies spanning multiple L1s. **Asset Bridging**: Bridge assets to L1s for specific applications or use cases. ## Security Security measures: **OpenZeppelin Audit**: Audited by industry-leading security firm OpenZeppelin. **Battle-Tested Team**: Built by contributors behind Yield Yak's proven track record. **Native Protocol**: Uses Avalanche's native ICM instead of external bridges. **Non-Custodial**: No custody of user funds at any point. **On-Chain Execution**: All operations executed on-chain with full transparency. **Continuous Monitoring**: Active monitoring of protocol operations. ## Getting Started 1. 
**Access Platform**: - Visit [Tesseract Finance](https://www.tesseract.finance/swap) - Connect your wallet (MetaMask, Core, or other compatible wallets) 2. **Select Networks**: - Choose source chain (C-Chain or any L1) - Select destination chain - Tesseract will show available liquidity 3. **Execute Swap**: - Enter the amount you want to swap - Review the route and estimated output - Confirm the transaction in your wallet 4. **Track Transfer**: - Monitor your cross-chain transfer in real-time - Receive assets on destination chain automatically ## For L1 Operators L1 teams looking to integrate: 1. **Review Documentation**: Visit [Tesseract Docs](https://docs.tesseract.finance/) 2. **Technical Integration**: Follow integration guides for L1 connectivity 3. **Testing**: Test integration on testnet environment 4. **Launch**: Go live and provide instant liquidity access to your users 5. **Community**: Join Tesseract's community channels for support ## Supported Chains Tesseract connects: **C-Chain**: Avalanche's Contract Chain with deep liquidity. **Avalanche L1s**: Growing ecosystem of Avalanche Layer 1 blockchains. **Expanding Network**: Continuous addition of new L1 integrations. Check the [platform](https://www.tesseract.finance/swap) for the current list of supported chains. ## Supported DEXes Tesseract aggregates liquidity from multiple DEXes: - Access to major Avalanche DEXes - Intelligent routing across multiple sources - Best price execution guaranteed - Expanding DEX integrations Visit the [documentation](https://docs.tesseract.finance/) for the complete list of supported DEXes and L1s. ## Advantages **Avalanche-Native**: Built specifically for Avalanche's architecture and ICM. **Proven Team**: Developed by contributors behind successful Yield Yak platform. **Best Prices**: Aggregates liquidity for optimal price execution. **Lightning Speed**: Uses Avalanche's fast finality for quick operations. 
**No Wrapping**: Direct asset transfers without wrapped intermediaries. **Audited Security**: OpenZeppelin audit provides security assurance. **Simple Integration**: Easy onboarding for L1 operators. **Growing Ecosystem**: Expanding coverage of L1s and DEXes. ## Community and Support - **Discord**: Active community for discussions and support - **Twitter**: Updates and ecosystem announcements - **Documentation**: Guides and technical documentation - **Developer Channel**: Telegram channel for developers - **Dune Dashboard**: Analytics and metrics tracking Visit the [documentation](https://docs.tesseract.finance/) for links to all community channels. ## Developer Resources For developers building with Tesseract: **Technical Documentation**: Complete integration guides at [docs.tesseract.finance](https://docs.tesseract.finance/) **API Access**: APIs for integrating Tesseract into applications **Smart Contracts**: Open-source smart contracts for review **Support Channels**: Developer-focused Telegram channel **Analytics**: Dune Dashboard for protocol metrics # The Graph (/integrations/the-graph) --- title: The Graph category: Indexers available: ["C-Chain"] description: The Graph is a decentralized indexing protocol that enables easy querying of blockchain data using GraphQL. logo: /images/thegraph.png developer: The Graph website: https://thegraph.com/ documentation: https://thegraph.com/docs/ --- ## Overview Getting historical data on a smart contract can be frustrating when building a dapp. [The Graph](https://thegraph.com/) provides an easy way to query smart contract data through APIs known as **subgraphs**. The Graph’s infrastructure relies on a decentralized network of indexers, enabling your dapp to become truly decentralized. 
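Querying a subgraph from a dapp is a plain GraphQL request over HTTP POST. The sketch below shows the shape of such a query; the endpoint URL and the `tokens` entity are hypothetical — the entities you can query depend entirely on the schema your subgraph defines.

```typescript
// Querying a subgraph is a plain GraphQL POST. The endpoint and the
// `tokens` entity here are hypothetical; they depend on your subgraph's schema.
const SUBGRAPH_URL = "https://example.invalid/subgraphs/name/your-subgraph";

function buildQuery(first: number): string {
  // `first` pages the result set, a standard Graph query parameter.
  return `{ tokens(first: ${first}, orderBy: id) { id symbol } }`;
}

async function querySubgraph(first: number): Promise<unknown> {
  const res = await fetch(SUBGRAPH_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query: buildQuery(first) }),
  });
  const { data } = await res.json();
  return data;
}
```

Because the request is ordinary HTTP + GraphQL, the same query works against a local Graph Node, a hosted gateway, or the decentralized network of indexers.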
## Features

- **Decentralized Indexing**: Enables indexing blockchain data through multiple indexers, eliminating any single point of failure.
- **GraphQL Queries**: Provides a powerful GraphQL interface for querying indexed data, making data retrieval straightforward.
- **Customizable & Reusable**: Define your own logic for transforming & storing blockchain data. Reuse subgraphs published by other developers.
- **Data Aggregation**: Aggregates data from multiple blockchain sources, offering a unified view for easier access and analysis.
- **Scalability**: Designed to handle large volumes of data and scale with the needs of growing dApps and data requirements.

## Getting Started

Building a subgraph only takes a few minutes. It primarily consists of the following steps:

1. Initialize your subgraph project
2. Deploy & Publish
3. Query from your dapp

Here's a detailed [quick-start guide](https://thegraph.com/docs/en/quick-start/).

## Documentation

For detailed instructions on creating subgraphs, querying data, and integrating with The Graph, visit [The Graph Documentation](https://thegraph.com/docs/).

## Use Cases

- **Decentralized Applications (dApps)**: Index and query data for dApps.
- **Data Aggregation**: Aggregate data from multiple blockchain sources for analytics.
- **Data Retrieval**: Fast and reliable data retrieval for applications that need blockchain data.
- **Analytics and Reporting**: Collect and analyze blockchain data for reporting.

# The TIE (/integrations/thetie)

---
title: The TIE
category: Analytics & Data
available: ["C-Chain"]
description: "The TIE provides real-time and historical data APIs for Avalanche, offering sentiment analysis, trading signals, and on-chain metrics through their SigDev Terminal."
logo: /images/thetie.jpg developer: The TIE website: https://thetie.io/ documentation: https://docs.thetie.io/ --- ## Overview The TIE's SigDev platform provides crypto data APIs including Avalanche-specific metrics, sentiment analysis, and on-chain data. Their SDK enables integration of both real-time and historical data for applications built on Avalanche. ## Features - **Data APIs**: - Real-time market data - Social sentiment analysis - On-chain metrics - Trading signals - **Integration Options**: - REST API - WebSocket feeds - Python SDK - R SDK - **Data Types**: - Price data - Volume analytics - Social metrics - News sentiment - On-chain activity - **Developer Tools**: - API authentication - Rate limiting options - Data filtering - Custom endpoints ## Getting Started 1. **Access Setup**: - Register for API access - Get API credentials - Choose data endpoints 2. **Implementation**: ```python from thetie import TieClient client = TieClient(api_key='your-api-key') # Fetch Avalanche metrics avalanche_data = client.get_metrics( asset='AVAX', metrics=['price', 'volume', 'sentiment'], interval='1h' ) ``` ## Documentation For more details, visit [The TIE Documentation](https://docs.thetie.io/). ## Use Cases - **Market Analysis**: Access market data - **Sentiment Tracking**: Monitor social sentiment metrics - **Trading Applications**: Integrate trading signals and metrics - **Research Tools**: Build research and analysis platforms - **Portfolio Analytics**: Track and analyze asset performance # Thirdweb x402 (/integrations/thirdweb-x402) --- title: Thirdweb x402 category: x402 available: ["C-Chain"] description: Thirdweb provides an x402 facilitator service that handles verifying and submitting x402 payments on Avalanche. 
logo: /images/thirdweb.png developer: Thirdweb website: https://thirdweb.com/ documentation: https://portal.thirdweb.com/payments/x402/facilitator --- ## Overview Thirdweb's x402 facilitator is a service that handles verifying and submitting x402 payments on Avalanche's C-Chain. It uses your own server wallet and EIP-7702 to submit transactions gaslessly, making it easy to integrate payment-gated APIs and AI services. The thirdweb facilitator is compatible with any x402 backend and middleware libraries like `x402-hono`, `x402-next`, `x402-express`, and more. ## How It Works - **Verification**: Validates payment signatures and requirements - **Settlement**: Submits the payment transaction on-chain - **Gasless**: Uses EIP-7702 for gasless transactions - **Your Wallet**: Uses your own server wallet for receiving payments You can view all transactions processed by your facilitator in your thirdweb project dashboard. ## Chain and Token Support The thirdweb facilitator supports payments on **any EVM chain**, including Avalanche C-Chain, as long as the payment token supports either: - **ERC-2612 permit** (most ERC20 tokens) - **ERC-3009 sign with authorization** (USDC on all chains) ## Key Features - **Multi-Chain Support**: Works on Avalanche C-Chain and all EVM-compatible chains - **Gasless Transactions**: Uses EIP-7702 for user-friendly payment experience - **Your Own Wallet**: Use your own server wallet to receive payments directly - **Dashboard Monitoring**: Track all facilitator transactions in your thirdweb dashboard - **Compatible Middleware**: Works with all x402 middleware libraries ## Getting Started ### Creating a Facilitator Create a facilitator instance to use with x402 payments: ```typescript import { facilitator } from "thirdweb/x402"; import { createThirdwebClient } from "thirdweb"; const client = createThirdwebClient({ secretKey: "your-secret-key", }); const thirdwebFacilitator = facilitator({ client: client, serverWalletAddress: 
"0x1234567890123456789012345678901234567890", }); ``` ### Configuration Options ```typescript const thirdwebFacilitator = facilitator({ // Required: Your thirdweb client with secret key client: client, // Required: Your server wallet address that will execute transactions // get it from your project dashboard serverWalletAddress: "0x1234567890123456789012345678901234567890", // Optional: Wait behavior for settlements // - "simulated": Only simulate the transaction (fastest) // - "submitted": Wait until transaction is submitted // - "confirmed": Wait for full on-chain confirmation (slowest, default) waitUntil: "confirmed", }); ``` ## Integration Examples ### Usage with settlePayment() Use the facilitator with the `settlePayment()` function on Avalanche: ```typescript import { settlePayment, facilitator } from "thirdweb/x402"; import { createThirdwebClient } from "thirdweb"; import { avalanche } from "thirdweb/chains"; const client = createThirdwebClient({ secretKey: process.env.THIRDWEB_SECRET_KEY, }); const thirdwebFacilitator = facilitator({ client, serverWalletAddress: "0x1234567890123456789012345678901234567890", }); export async function GET(request: Request) { const paymentData = request.headers.get("x-payment"); const result = await settlePayment({ resourceUrl: "https://api.example.com/premium-content", method: "GET", paymentData, payTo: "0x1234567890123456789012345678901234567890", network: avalanche, // Use Avalanche C-Chain price: "$0.10", facilitator: thirdwebFacilitator, // Pass the facilitator here }); if (result.status === 200) { return Response.json({ data: "premium content" }); } else { return Response.json(result.responseBody, { status: result.status, headers: result.responseHeaders, }); } } ``` ### Usage with x402-hono Middleware Use the facilitator with Hono middleware on Avalanche: ```typescript import { Hono } from "hono"; import { paymentMiddleware } from "x402-hono"; import { facilitator } from "thirdweb/x402"; import { createThirdwebClient } 
from "thirdweb"; const client = createThirdwebClient({ secretKey: "your-secret-key", }); const thirdwebFacilitator = facilitator({ client: client, serverWalletAddress: "0x1234567890123456789012345678901234567890", }); const app = new Hono(); // Add the facilitator to the x402 middleware app.use( paymentMiddleware( "0xYourWalletAddress", { "/api/paywall": { price: "$0.01", network: "avalanche", // Use Avalanche mainnet config: { description: "Access to paid content", }, }, }, thirdwebFacilitator, // Pass the facilitator to the middleware ), ); app.get("/api/paywall", (c) => { return c.json({ message: "This is premium content!" }); }); export default app; ``` ### Usage with x402-express Middleware Use the facilitator with Express middleware on Avalanche: ```typescript import express from "express"; import { paymentMiddleware } from "x402-express"; import { facilitator } from "thirdweb/x402"; import { createThirdwebClient } from "thirdweb"; const client = createThirdwebClient({ secretKey: "your-secret-key", }); const thirdwebFacilitator = facilitator({ client: client, serverWalletAddress: "0x1234567890123456789012345678901234567890", }); const app = express(); app.use( paymentMiddleware( "0xYourWalletAddress", { "GET /api/premium": { price: "$0.05", network: "avalanche", // Use Avalanche mainnet }, }, thirdwebFacilitator, ), ); app.get("/api/premium", (req, res) => { res.json({ content: "Premium AI content" }); }); app.listen(3000); ``` ## Getting Supported Payment Methods Query which payment methods are supported by the facilitator on Avalanche: ```typescript // Get all supported payment methods const allSupported = await thirdwebFacilitator.supported(); // Filter by Avalanche C-Chain const avalancheSupported = await thirdwebFacilitator.supported({ chainId: 43114, // Avalanche C-Chain }); // Filter by chain and token (e.g., USDC on Avalanche) const usdcOnAvalanche = await thirdwebFacilitator.supported({ chainId: 43114, tokenAddress: 
"0xB97EF9Ef8734C71904D8002F8b6Bc66Dd9c48a6E", // USDC on Avalanche }); ``` ## Avalanche C-Chain Integration When using thirdweb's x402 facilitator on Avalanche's C-Chain: - **Chain ID**: 43114 for mainnet, 43113 for Fuji testnet - **Network String**: Use `"avalanche"` for mainnet or `"avalanche-fuji"` for testnet - **Fast Settlement**: Benefit from Avalanche's sub-second transaction finality - **Low Costs**: Enable micropayments with minimal gas fees - **Dashboard Tracking**: Monitor all Avalanche transactions in your thirdweb dashboard ## Use Cases ### AI Agent Monetization Use thirdweb's facilitator to enable AI agents to charge for services on a pay-per-use basis. ### Payment-Gated APIs Protect your API endpoints with automatic payment verification and settlement. ### Micropayment Services Enable micropayments for content, data, or compute resources with minimal overhead. ### Multi-Chain AI Services Build AI services that accept payments across Avalanche and other EVM chains. ## Documentation For detailed implementation guides and API references: - [Thirdweb x402 Facilitator Documentation](https://portal.thirdweb.com/payments/x402/facilitator) - [Thirdweb Portal](https://portal.thirdweb.com/) # ThirdWeb (/integrations/thirdweb) --- title: ThirdWeb category: Wallets and Account Abstraction available: ["C-Chain", "All Avalanche L1s"] description: ThirdWeb provides a low-latency API for generating keys and signing transactions within secure hardware. logo: /images/thirdweb.avif developer: ThirdWeb website: https://thirdweb.com/ documentation: https://portal.thirdweb.com/ --- ## Overview ThirdWeb offers a low-latency API for generating keys and signing transactions within secure hardware. The SDK lets developers integrate wallet functionality into their applications with hardware-backed security for key management and transaction signing. ## Features - **Low-Latency API**: Provides fast and responsive API access for generating keys and signing transactions. 
- **Secure Hardware Integration**: Utilizes secure hardware to protect sensitive information and enhance security. - **User-Friendly SDK**: Easy-to-integrate SDK with documentation and support. - **Cross-Platform Support**: Compatible with various platforms and blockchain networks. - **Enhanced Security**: Hardware-backed key management and transaction signing. ## Getting Started 1. **Visit the ThirdWeb Website**: Explore the [ThirdWeb website](https://thirdweb.com/) for SDK details. 2. **Access the Documentation**: Refer to the [ThirdWeb Documentation](https://portal.thirdweb.com/) for integration guides and API references. 3. **Install the SDK**: Follow the documentation to install and configure the ThirdWeb SDK. 4. **Implement Key Generation and Signing**: Use the API for key generation and transaction signing. 5. **Test and Deploy**: Test the integration before deploying. ## Documentation For more details, visit the [ThirdWeb Documentation](https://portal.thirdweb.com/). ## Use Cases - **Wallet Applications**: Integrate key management and signing into digital wallet applications. - **DeFi Platforms**: Secure transaction signing for DeFi applications. - **Blockchain Games**: Transaction signing for in-game assets and interactions. - **Financial Services**: Secure transaction management for blockchain-based financial services. # Token Relations (/integrations/token-relations) --- title: Token Relations category: Analytics & Data available: ["C-Chain"] description: "Token Relations provides advanced token analytics, relationship mapping, and on-chain data insights for the Avalanche ecosystem." logo: /images/token-relations.png developer: Token Relations website: https://www.token-relations.com/ documentation: https://www.token-relations.com/ --- ## Overview Token Relations is an analytics platform that provides token data, relationship mapping, and on-chain insights for the Avalanche ecosystem. 
It helps developers, analysts, and investors understand token dynamics, holder relationships, and market behavior. # Tokeny (/integrations/tokeny) --- title: Tokeny category: Tokenization Platforms available: ["C-Chain"] description: Tokeny (now part of Apex Group) is a leading onchain finance platform enabling financial institutions to securely and compliantly issue, manage, and distribute tokenized securities using the ERC-3643 standard. logo: /images/tokeny.png developer: Tokeny / Apex Group website: https://tokeny.com/ documentation: https://docs.tokeny.com/ --- ## Overview Tokeny is an onchain finance platform, now part of Apex Group (one of the world's largest fund administration firms). It enables financial institutions to issue, manage, and distribute tokenized securities using enterprise-grade infrastructure. Tokeny developed and promotes the ERC-3643 standard (formerly T-REX), the most widely adopted protocol for compliant security tokens. Financial institutions can tokenize funds, bonds, equities, and structured products while maintaining regulatory compliance through built-in identity management, transfer restrictions, and automated compliance rules. Backed by Apex Group's $2 trillion+ in assets under administration, Tokeny combines blockchain technology with institutional trust. ## Features - **ERC-3643 Standard**: Built on the standard for compliant tokenized securities with built-in identity and compliance layers. - **Enterprise-Grade Platform**: Institutional infrastructure designed for banks, asset managers, and financial institutions. - **Multi-Asset Tokenization**: Support for tokenizing funds, bonds, equities, real estate, structured products, and alternative assets. - **Onchain Identity Management**: Decentralized identity system (ONCHAINID) for compliant investor verification and management. - **Automated Compliance**: Smart contract-based transfer restrictions ensuring regulatory compliance at the protocol level. 
- **Regulatory Flexibility**: Adaptable to different jurisdictions including EU MiFID II, US regulations, and other international frameworks. - **Investor Portal**: White-label investor portal for KYC, subscription, and portfolio management. - **Transfer Agent Services**: Digital transfer agent capabilities for shareholder recordkeeping and corporate actions. - **Distribution Network**: Connect to a network of regulated distributors and platforms. - **Multi-Chain Support**: Deploy tokenized securities across multiple EVM-compatible blockchain networks. - **Lifecycle Management**: Complete token lifecycle from issuance through corporate actions to redemption. - **Interoperability**: Open standard enabling integration with multiple platforms and service providers. - **API and SDK**: Developer tools for custom integrations. - **Security Audits**: Thoroughly audited smart contracts with formal verification. ## Getting Started 1. **For Financial Institutions and Issuers**: - Contact Tokeny (now Apex Group Digital Assets) to discuss tokenization requirements - Define the assets or securities to be tokenized - Structure the digital securities with Tokeny's legal and technical team - Implement the ERC-3643 token standard with compliance rules - Set up investor identity and verification workflows - Launch the tokenized offering on selected blockchain network(s) - Utilize Tokeny's lifecycle management tools for ongoing operations 2. **For Investors**: - Access tokenized securities through Tokeny-powered platforms - Complete KYC through the ONCHAINID identity system - Verify accreditation status and jurisdiction eligibility - Invest in available tokenized securities - Manage holdings through issuer's investor portal 3. 
**For Developers and Integrators**: - Explore ERC-3643 documentation and reference implementations - Use Tokeny's SDKs and APIs to build on the standard - Deploy compliant tokenized securities using the protocol - Integrate with existing systems and platforms ## ERC-3643 Standard The ERC-3643 standard (formerly T-REX) is the most widely adopted protocol for compliant security tokens. Key components: **Identity Layer**: Onchain identity management with verified credentials and claims. **Compliance Layer**: Smart contract-based transfer restrictions and compliance rules. **Modular Architecture**: Flexible design allowing customization for different regulatory requirements. **Permissioned Transfers**: Transfers automatically check compliance before execution. **Claim System**: Decentralized verification of investor attributes (accreditation, jurisdiction, etc.). **Standard Adoption**: Industry standard supported by multiple platforms and service providers. ERC-3643 keeps tokenized securities compliant across their lifecycle while enabling interoperability between platforms. ## Avalanche Support Tokeny's platform supports deployment on multiple EVM-compatible blockchain networks, including Avalanche C-Chain. The ERC-3643 standard is chain-agnostic, allowing issuers to leverage Avalanche's high performance, low fees, and robust infrastructure for deploying compliant tokenized securities. ## Use Cases Tokenization across diverse asset classes: **Investment Funds**: Tokenize mutual funds, hedge funds, and private equity funds to improve accessibility and reduce operational costs. **Corporate Bonds**: Issue digital bonds with automated interest payments and enhanced liquidity. **Equity Securities**: Tokenize private company shares with embedded compliance and shareholder management. **Real Estate**: Create fractionalized ownership of real estate assets with regulatory compliance. **Structured Products**: Issue complex financial instruments as programmable tokens. 
**Sustainable Finance**: Tokenize green bonds and ESG-focused investment products. **Private Debt**: Digitize loan participations and private credit instruments. ## Apex Group Integration As part of Apex Group, Tokeny benefits from: **Global Scale**: Apex Group manages over $2 trillion in assets with 14,000+ employees across 75 offices. **Fund Administration**: Integration with Apex's comprehensive fund administration services. **Regulatory Expertise**: Deep experience across multiple jurisdictions and asset classes. **Institutional Trust**: Relationship with major global financial institutions. **Comprehensive Services**: Combined offering of tokenization, fund admin, custody, and distribution. **Market Access**: Leverage Apex's global distribution network. This combination positions Tokeny as the institutional choice for tokenization infrastructure. ## Technology Stack Tokeny's technology stack: - **Smart Contract Suite**: Audited ERC-3643 contracts with modular compliance rules - **ONCHAINID**: Decentralized identity protocol for investor verification - **Token Factory**: Self-service tools for deploying tokenized securities - **Compliance Engine**: Configurable rules engine for regulatory requirements - **Agent Dashboard**: Interface for managing tokens, investors, and corporate actions - **Investor Portal**: White-label portal for subscriptions and portfolio management - **API Platform**: RESTful APIs for system integration - **Blockchain Nodes**: Infrastructure for reliable blockchain connectivity - **Data Analytics**: Reporting and analytics tools for issuers and regulators ## Regulatory Compliance The platform addresses regulatory requirements across jurisdictions: - **Multi-Jurisdictional**: Adaptable to EU, US, Asia-Pacific, and other regulatory frameworks - **MiFID II Compliant**: European securities regulation compliance - **SEC Compliant**: Support for US Regulation D, Regulation S, and Regulation A+ - **FINMA Compatible**: Swiss financial market 
regulations - **GDPR Compliant**: European data protection standards - **Flexible Rules**: Customizable compliance rules for specific regulatory requirements - **Audit Trail**: Complete on-chain and off-chain audit trails for regulators ## Competitive Advantages **Industry Standard**: ERC-3643 is the most widely adopted protocol for security tokens. **Institutional Backing**: Part of Apex Group with $2T+ AUA and global operations. **Open Architecture**: Interoperable standard avoiding vendor lock-in. **Proven Technology**: Years of production use with multiple successful tokenizations. **Regulatory First**: Compliance built into the protocol, not bolted on afterwards. **Developer Friendly**: Open-source standard with extensive documentation. **Multi-Chain**: Deploy on any EVM-compatible blockchain including Avalanche. ## Tokenization Services Tokeny (through Apex Group) provides end-to-end services: - **Structuring Advisory**: Legal and financial structuring of tokenized securities - **Technical Implementation**: Smart contract deployment and configuration - **Compliance Setup**: Configuration of regulatory rules and restrictions - **Investor Onboarding**: KYC/AML processes and identity verification - **Distribution**: Access to regulated distribution channels - **Lifecycle Management**: Ongoing token management and corporate actions - **Reporting**: Regulatory reporting and investor communications - **Secondary Markets**: Integration with trading venues and marketplaces ## Pricing Tokeny offers institutional pricing: - **Setup Fees**: Initial tokenization and platform setup costs - **Annual Platform Fees**: Ongoing access to technology and infrastructure - **Transaction Fees**: Fees for certain operations and transfers - **Service Fees**: Additional services like fund administration through Apex Group - **Custom Solutions**: Tailored pricing for large institutions and complex requirements Contact Tokeny/Apex Group Digital Assets for detailed pricing. 
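The permissioned-transfer model at the heart of ERC-3643 can be sketched in a few lines. This is a simplified, illustrative Python model only: the class and method names (`IdentityRegistry`, `PermissionedToken`, `_check_compliance`) are hypothetical, and the real standard is implemented as audited Solidity contracts with a far richer claim and module system.

```python
# Simplified, illustrative model of an ERC-3643-style permissioned transfer.
# All names here are hypothetical; the actual standard lives in audited
# Solidity contracts (token, identity registry, compliance modules).

class IdentityRegistry:
    """Tracks verified investors and their claims (jurisdiction, accreditation)."""
    def __init__(self):
        self._claims = {}

    def register(self, address: str, jurisdiction: str, accredited: bool):
        self._claims[address] = {"jurisdiction": jurisdiction, "accredited": accredited}

    def is_verified(self, address: str) -> bool:
        return address in self._claims

    def claims(self, address: str) -> dict:
        return self._claims.get(address, {})


class PermissionedToken:
    """Token whose transfers are gated by identity and compliance rules."""
    def __init__(self, registry: IdentityRegistry, allowed_jurisdictions: set):
        self.registry = registry
        self.allowed_jurisdictions = allowed_jurisdictions
        self.balances = {}

    def _check_compliance(self, sender: str, receiver: str) -> bool:
        # Both parties must hold a verified on-chain identity, and the
        # receiver's jurisdiction must be permitted by the compliance rules.
        if not (self.registry.is_verified(sender) and self.registry.is_verified(receiver)):
            return False
        return self.registry.claims(receiver)["jurisdiction"] in self.allowed_jurisdictions

    def transfer(self, sender: str, receiver: str, amount: int) -> bool:
        # Compliance is checked before execution, as in ERC-3643.
        if not self._check_compliance(sender, receiver):
            return False
        if self.balances.get(sender, 0) < amount:
            return False
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount
        return True


registry = IdentityRegistry()
registry.register("0xIssuer", "LU", accredited=True)
registry.register("0xAlice", "DE", accredited=True)

token = PermissionedToken(registry, allowed_jurisdictions={"LU", "DE"})
token.balances["0xIssuer"] = 1_000

print(token.transfer("0xIssuer", "0xAlice", 100))   # True: both verified, DE allowed
print(token.transfer("0xIssuer", "0xMallory", 100)) # False: receiver has no verified identity
```

The key design point carried over from the standard: the transfer itself enforces eligibility, so compliance cannot be bypassed by moving tokens directly between wallets.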
# TomNext (/integrations/tomnext) --- title: TomNext category: Tokenization Platforms available: ["C-Chain"] description: TomNext is a discovery and management platform for wealth managers, advisors, and qualified investors to evaluate and access tokenized alternative assets across multiple issuers and blockchain networks. logo: /images/tomnext.jpg developer: TomNext website: https://www.tomnext.co/ documentation: https://www.tomnext.co/company --- ## Overview TomNext is building the essential distribution infrastructure for the tokenized alternatives market, providing a comprehensive platform where wealth managers, registered investment advisors (RIAs), and qualified investors can discover, evaluate, and manage tokenized alternative assets. While many platforms focus on issuing tokenized securities, TomNext addresses the critical missing piece in the tokenization landscape: distribution and investor access. The TomNext platform serves as a centralized hub where investors can access a growing universe of tokenized investment products from multiple issuers across various product categories and blockchain networks. With a streamlined user interface enhanced by AI-powered analysis tools, TomNext aims to be for tokenized alternatives what Interactive Brokers is for equities or what Coinbase is for cryptocurrency—the go-to platform for discovering and accessing this emerging asset class. ## Features - **Multi-Issuer Platform**: Access tokenized products from numerous issuers in one unified interface. - **Broad Asset Coverage**: Discover tokenized alternatives across real estate, private equity, credit, funds, and other categories. - **Cross-Chain Compatibility**: View and access tokens deployed across multiple blockchain networks. - **AI-Powered Analysis**: Advanced analytical tools using artificial intelligence to evaluate tokenized assets. - **Due Diligence Tools**: Comprehensive tools for assessing tokenized investment opportunities. 
- **Portfolio Management**: Unified dashboard for tracking tokenized alternative holdings. - **Streamlined UI**: User-friendly interface designed for traditional wealth managers and investors. - **Comparison Tools**: Side-by-side comparison of different tokenized products and issuers. - **Research and Insights**: Market intelligence and research on tokenization trends and opportunities. - **RIA Platform Integration**: Compatibility with traditional RIA and wealth management platforms. - **Qualified Investor Focus**: Platform designed for accredited and institutional investors. - **Discovery Engine**: Search and filter tools to find relevant tokenized investment opportunities. - **Transparency**: Clear information on fees, terms, underlying assets, and issuer details. - **Educational Resources**: Learning materials about tokenization and alternative assets. ## Getting Started ### For Wealth Managers and RIAs: 1. **Platform Access**: Request access to the TomNext platform by contacting their team. 2. **Account Setup**: - Register your advisory firm on the platform - Complete necessary verification and compliance processes - Set up user accounts for advisors 3. **Platform Exploration**: - Discover available tokenized alternative assets - Use AI tools to analyze investment opportunities - Compare products across different issuers - Conduct due diligence using platform tools 4. **Client Integration**: - Onboard qualified clients to the platform - Help clients complete accreditation verification - Build tokenized alternative allocations for client portfolios - Monitor and manage client holdings 5. **Portfolio Management**: - Track performance of tokenized investments - Rebalance portfolios as needed - Access reporting for client communications ### For Qualified Investors: 1. **Platform Registration**: Sign up on the TomNext platform directly or through your wealth manager. 2. **Accreditation Verification**: Complete accredited or qualified investor verification process. 
3. **Asset Discovery**: Browse the growing universe of tokenized alternative investments available on the platform. 4. **Due Diligence**: Use TomNext's AI-powered tools to analyze and evaluate investment opportunities. 5. **Investment**: Allocate capital to selected tokenized products through the platform. 6. **Portfolio Tracking**: Monitor your tokenized alternative holdings and performance in real-time. ## Avalanche Support TomNext's cross-chain platform architecture supports tokenized assets deployed across multiple blockchain networks, including Avalanche C-Chain. As the platform aggregates tokenized products regardless of which blockchain they're issued on, investors and advisors can access Avalanche-based tokenized securities alongside those on other chains through TomNext's unified interface. ## Tokenized Asset Categories TomNext provides access to various tokenized alternative asset types: **Tokenized Real Estate**: Fractional ownership in commercial properties, residential properties, REITs, and real estate funds. **Private Equity**: Tokenized shares in private equity funds and direct investments in private companies. **Private Credit**: Tokenized debt instruments, credit funds, and lending products. **Hedge Funds**: Digital shares in alternative investment funds pursuing various strategies. **Infrastructure**: Tokenized infrastructure projects and assets. **Art and Collectibles**: Fractional ownership in high-value art, wine, and other collectibles. **Commodities**: Tokenized exposure to physical commodities and commodity-backed products. **Structured Products**: Tokenized structured notes and complex financial instruments. ## Use Cases TomNext serves multiple stakeholders in the wealth management ecosystem: **Registered Investment Advisors (RIAs)**: Access tokenized alternatives to diversify client portfolios beyond traditional stocks and bonds. 
**Wealth Management Platforms**: Integrate tokenized alternatives into existing wealth management technology stacks. **Family Offices**: Discover and manage tokenized alternative investments for ultra-high-net-worth families. **Qualified Individual Investors**: Self-directed accredited investors seeking direct access to tokenized alternatives. **Institutional Investors**: Foundations, endowments, and institutions exploring tokenization. **Multi-Family Offices**: Platforms serving multiple wealthy families needing tokenized asset access. ## Platform Capabilities ### Discovery and Search - **Advanced Filters**: Filter by asset class, blockchain, minimum investment, expected returns, duration, and more - **Issuer Information**: Comprehensive profiles of tokenization platforms and issuers - **Product Details**: Detailed offering documents, terms, and underlying asset information - **Market Trends**: Insights into tokenization market trends and popular products ### AI-Powered Analysis - **Risk Assessment**: AI-driven analysis of investment risks across tokenized products - **Performance Projections**: Data-driven expectations for returns and distributions - **Comparative Analysis**: AI-powered comparison across similar products - **Portfolio Optimization**: Suggestions for optimal tokenized alternative allocations - **Document Analysis**: AI extraction of key information from lengthy offering documents ### Portfolio Management - **Unified Dashboard**: See all tokenized holdings across multiple issuers and blockchains - **Performance Tracking**: Real-time monitoring of tokenized investment performance - **Distribution Tracking**: Automatic tracking of dividends, interest, and other distributions - **Reporting**: Generate reports for clients, compliance, and tax purposes - **Alerts**: Notifications about important events affecting holdings ### Integration Capabilities - **RIA Platform Connectivity**: Integration with traditional wealth management platforms - **Custody 
Integration**: Connect with digital asset custodians - **Accounting Systems**: Export data to accounting and portfolio management software - **CRM Integration**: Connect to advisor CRM systems - **API Access**: APIs for building custom integrations and workflows ## Market Need TomNext addresses several critical gaps in the tokenization ecosystem: **Fragmented Market**: Tokenized products are scattered across many issuers and platforms—TomNext aggregates them. **Discovery Challenge**: Investors struggle to find relevant tokenized opportunities—TomNext provides comprehensive search. **Analysis Complexity**: Evaluating tokenized securities is difficult—TomNext offers AI-powered tools. **Multi-Chain Confusion**: Assets span multiple blockchains—TomNext provides a unified view. **Traditional Advisor Barriers**: Wealth managers lack tools for tokenized assets—TomNext builds for RIAs. ## Competitive Position **Distribution Focus**: While others build issuance platforms, TomNext focuses on distribution and investor access. **Multi-Issuer**: Aggregates products from many issuers rather than competing with them. **AI-Enhanced**: Leverages artificial intelligence to simplify complex analysis. **Traditional Finance Bridge**: Specifically designed for traditional wealth managers and RIAs, not just crypto natives. **Comprehensive Solution**: End-to-end platform from discovery through portfolio management. **Cross-Chain**: Blockchain-agnostic approach serves the entire tokenization market. 
## Regulatory Considerations TomNext operates with attention to regulatory requirements: - **Broker-Dealer Relationships**: Partnerships or registrations to facilitate transactions - **Accreditation Verification**: Processes to confirm qualified investor status - **Securities Compliance**: Platform design accommodates securities regulations - **Privacy and Data Protection**: Secure handling of investor information - **Cross-Border**: Consideration of international regulations as market expands ## Vision and Mission **Mission**: Democratize access to tokenized alternative investments by building the distribution infrastructure that connects investors with opportunities. **Vision**: Become the standard platform for discovering, analyzing, and managing tokenized alternatives—the Bloomberg Terminal for tokenized assets. **Philosophy**: The tokenization revolution will only reach its potential when distribution matches issuance capabilities. ## Benefits for the Tokenization Ecosystem **For Issuers**: Access to qualified investor base seeking tokenized products. **For Investors**: Centralized access to fragmented market of tokenized alternatives. **For Advisors**: Professional tools to serve clients interested in tokenization. **For the Industry**: Necessary distribution infrastructure to scale tokenization market. ## Pricing TomNext pricing structure (contact for details): - **Platform Fees**: Subscription or transaction-based fees for platform access - **Management Tools**: Fees for portfolio management and reporting capabilities - **AI Analysis**: Potential fees for advanced AI-powered analytical tools - **Integration Fees**: Costs for connecting to RIA platforms and systems - **Transaction Fees**: Possible fees on investments made through the platform Contact TomNext for specific pricing based on firm size and needs. 
# Traceye (/integrations/traceye) --- title: Traceye category: Indexers available: ["C-Chain"] description: Traceye is a full-stack blockchain indexing platform supporting multiple indexing protocols like 'The Graph' and 'Subquery Network'. logo: /images/traceye.png developer: Traceye website: https://traceye.io/ documentation: https://doc.traceye.io --- ## Overview Traceye is an indexing platform built for Avalanche L1s. It uses popular indexing protocols like The Graph and Subquery Network on shared and dedicated infrastructure, providing both cost-effective and enterprise-grade indexing solutions. Traceye also offers developer add-ons built on top of indexers to aid app development. ## Features - **Zero-Cost Indexer Development**: Traceye offers free consulting and development of data indexers tailored exclusively for Avalanche Layer 1 chains. Get started without upfront costs or vendor lock-in. - **High Availability**: Enjoy ultra-fast indexing, minimal data lag, and 99.99% uptime — all without the hassle of infrastructure maintenance. - **Configurable Webhooks**: Set up real-time webhooks on any indexed entity. - **BI Reporting & Charts Engine**: Built-in business intelligence engine for creating visual reports, dashboards, and charts directly from indexed data. - **One-Click Web3 API Launch**: Instantly generate and deploy Web3 Data APIs with a single click — optimized for L1 chains, no backend coding required. - **Bring Your Own RPC**: Connect your preferred Avalanche RPC endpoints for enhanced flexibility and control over indexing behavior. - **Custom Business Logic & Direct DB Access**: Define custom entities, apply business logic, and access your data directly from the underlying database. - **Query & Database Optimizer**: Smart query optimizer and schema tuning tools for low-latency responses and efficient data retrieval. - **Observability & Tracking**: Built-in notifications, versioning, logs, and entity tracking. 
- **Infrastructure Monitoring & Alerts**: Real-time infrastructure health checks, performance metrics, and alerting system to keep your deployment healthy and responsive. ## Getting Started 1. Visit the [Traceye website](https://app.traceye.io/): Sign up or log in to access the platform. 2. The left-side menu provides options for shared and dedicated indexing along with 1-click Web3 data APIs for L1 chains. 3. Scroll to the appropriate option based on your requirements and access the dashboard. Use the 'Buy Subscription' option to purchase a subscription or select the free plan. 4. Once the subscription is active, launch the indexer or Web3 data APIs. 5. Contact Traceye for free consulting and indexer development on L1 chains. ## Documentation For detailed guides and API references, visit the [Traceye documentation](https://doc.traceye.io). ## Use Cases - **Public L1 Chains**: Expose GraphQL and REST APIs for core Web3 data including NFTs, Tokens, Wallets, Blocks & Transactions. Useful for explorers, analytics platforms, and dApps needing standardized access to on-chain data. - **Appchains**: Enable custom indexers tailored to the specific data requirements of appchains — whether it's gaming, identity, or specialized DeFi use cases. - **DeFi & dApps**: Index and serve data from custom smart contracts powering DeFi protocols or decentralized applications. - **Blockchain Analytics**: Use Traceye’s BI and reporting engine to generate insights through dashboards covering chain-wide activity, DeFi/dApp usage, and historical on-chain trends. - **Programmable On-Chain Actions**: Trigger real-time workflows using webhooks on indexers configured for specific on-chain events. # Trail of Bits (/integrations/trailofbits) --- title: Trail of Bits category: Audit Firms available: ["C-Chain"] description: Trail of Bits is a leading security research firm providing smart contract audits, blockchain security assessments, and advanced security tooling for enterprise and Web3 protocols. 
logo: /images/trailofbits.png developer: Trail of Bits website: https://www.trailofbits.com/ documentation: https://www.trailofbits.com/services/software-assurance --- ## Overview Trail of Bits is one of the most respected security research and development firms in the blockchain industry, known for their rigorous security audits, advanced security tools, and deep expertise in cryptography and systems security. Founded in 2012, Trail of Bits has audited hundreds of blockchain protocols, smart contracts, and cryptographic implementations for leading projects, enterprises, and government agencies. With a team of world-class security researchers, Trail of Bits combines academic rigor with practical security expertise to identify vulnerabilities that others miss. Their work spans smart contract audits, protocol design reviews, cryptographic analysis, and custom security tool development. Trail of Bits is trusted by the largest names in blockchain including Ethereum Foundation, USDC, and major DeFi protocols. ## Services - **Smart Contract Audits**: Comprehensive security audits using both manual and automated analysis. - **Protocol Security Reviews**: Assessment of protocol design and architecture. - **Cryptographic Review**: Analysis of cryptographic implementations and algorithms. - **Security Tool Development**: Custom tools for continuous security monitoring. - **Formal Verification**: Mathematical proofs of smart contract correctness. - **Incident Response**: Emergency security assessment and remediation. - **Security Training**: Educational programs for development teams. - **Continuous Monitoring**: Ongoing security surveillance post-deployment. - **Penetration Testing**: Adversarial testing of protocols and infrastructure. - **Supply Chain Security**: Assessment of dependencies and third-party code. 
## Proprietary Security Tools Trail of Bits has developed industry-leading open-source security tools: **Slither**: Static analysis framework for Solidity with dozens of vulnerability detectors. **Echidna**: Property-based fuzzer for Ethereum smart contracts. **Manticore**: Symbolic execution tool for analyzing smart contracts. **Crytic**: Commercial platform combining multiple analysis tools. **Rattle**: EVM binary static analysis framework. These tools are widely used across the industry for automated smart contract security analysis. ## Audit Methodology Trail of Bits follows a comprehensive audit process: 1. **Threat Modeling**: Identify assets, threats, and attack surfaces 2. **Automated Analysis**: Run Slither, Echidna, and other tools 3. **Manual Review**: Expert manual code review by senior researchers 4. **Formal Verification**: Prove critical properties mathematically when applicable 5. **Attack Simulation**: Test protocols under adversarial conditions 6. **Documentation Review**: Assess documentation completeness and accuracy 7. **Report Generation**: Comprehensive report with prioritized findings 8. **Remediation Support**: Work with team to address issues 9. **Verification Audit**: Confirm fixes before final report ## Avalanche Expertise Trail of Bits has extensive experience securing protocols across all major blockchain networks including Avalanche. 
Their expertise covers: - Avalanche C-Chain smart contracts - Cross-chain bridge security - Subnet architecture review - Consensus mechanism analysis - High-throughput protocol optimization - Avalanche-specific attack vectors ## Access Through Areta Marketplace Avalanche projects can engage Trail of Bits through the [Areta Audit Marketplace](https://areta.market/avalanche): - **Competitive Quotes**: Receive proposals from Trail of Bits alongside other top firms - **Transparent Pricing**: Clear pricing without intermediaries - **Fast Matching**: Get connected within 48 hours - **Subsidy Eligibility**: Qualify for up to $10k in audit subsidies - **Streamlined Process**: Simplified procurement compared to direct engagement - **Ecosystem Focus**: Marketplace designed specifically for Avalanche builders ## Notable Clients Trail of Bits has audited protocols for: - Ethereum Foundation - USDC (Centre/Circle) - MakerDAO - Compound - Uniswap - Chainlink - U.S. Department of Defense - Major financial institutions - Fortune 500 companies This track record demonstrates their capability to handle the most critical security assessments. ## Audit Focus Areas **DeFi Security**: DEXs, lending protocols, derivatives, and yield strategies. **Infrastructure**: L1/L2 protocols, bridges, and consensus mechanisms. **Cryptography**: Novel cryptographic schemes and implementations. **Enterprise Blockchain**: Private and permissioned blockchain solutions. **Gaming & NFTs**: Gaming protocols and NFT platforms. **Stablecoins**: Stablecoin mechanisms and implementations. **Governance**: DAO governance and voting systems. 
## Research and Publications Trail of Bits actively contributes to blockchain security research: - Regular security blog posts and advisories - Conference presentations at Black Hat, DEF CON, and academic venues - Open-source security tools with thousands of users - Collaboration with academic institutions - Industry security standards development ## Why Choose Trail of Bits **Industry Leader**: Most respected security firm in blockchain with decade+ track record. **Research Excellence**: Team of PhDs and security researchers pushing the field forward. **Tool Development**: Creators of industry-standard security analysis tools. **Comprehensive Approach**: Combination of automated and manual analysis techniques. **Formal Methods**: Capability to provide formal verification when needed. **Government Trust**: Trusted by government agencies for critical security work. **Enterprise Experience**: Experience securing enterprise and institutional-grade systems. ## Pricing Trail of Bits typically works with: - Established protocols with significant budgets - Enterprise clients - High-value smart contract systems - Projects requiring the highest level of security assurance Pricing reflects their premium positioning and comprehensive methodology. Contact via Areta marketplace or directly for proposals. ## Getting Started To engage Trail of Bits: 1. **Via Areta Marketplace** (Recommended for Avalanche): - Visit [areta.market/avalanche](https://areta.market/avalanche) - Submit audit request - Receive competitive quote from Trail of Bits - Potential eligibility for subsidies 2. 
**Direct Contact**: - Visit [trailofbits.com](https://www.trailofbits.com/) - Contact sales team - Discuss scope and requirements - Receive formal proposal ## Deliverables Trail of Bits provides: - **Comprehensive Audit Report**: Detailed findings with technical analysis - **Executive Summary**: High-level summary for stakeholders - **Fix Verification**: Confirmation of remediation - **Tool Reports**: Output from Slither, Echidna, and other tools - **Recommendations**: Best practices and improvements - **Ongoing Support**: Available for consultation during fixes # Transak (/integrations/transak) --- title: Transak category: Fiat On-Ramp available: ["C-Chain", "All Avalanche L1s"] description: Transak is a global fiat-to-crypto payment gateway that enables users to buy cryptocurrencies with fiat directly within your application. logo: /images/transak.png developer: Transak website: https://transak.com/ documentation: https://docs.transak.com/ --- ## Overview Transak is a global fiat-to-crypto payment gateway and on-ramp for blockchain applications. Users can purchase cryptocurrencies using traditional payment methods directly within your app, without navigating external exchanges. It supports 130+ cryptocurrencies across 100+ countries. ## Features - **Global Coverage**: Support for users in 100+ countries with local payment methods, currencies, and languages. - **Multiple Payment Methods**: Credit/debit cards, bank transfers, Apple Pay, Google Pay, and regional payment options. - **Extensive Crypto Support**: On-ramp to 130+ cryptocurrencies, including native tokens for custom L1s. - **Flexible Integration Options**: Widget, SDK, and API solutions to fit your application's needs. - **Customizable Widget**: White-labeled integration that matches your application's design. - **KYC/AML Compliance**: Built-in compliance processes that meet regulatory requirements across jurisdictions. 
- **Risk Management**: Advanced fraud prevention systems that protect both users and merchants. - **Automated Payouts**: Direct deposit of purchased crypto to user wallets. - **Fiat Off-Ramp**: Select regions support converting crypto back to fiat (where permitted by regulations). - **NFT Checkout**: Allow users to purchase NFTs directly with fiat payment methods. - **Partner Dashboard**: Analytics and management tools to track conversions and optimize performance. ## Getting Started 1. **Sign Up**: Visit the [Transak Partner Dashboard](https://docs.transak.com/docs/setup-your-partner-account) and create an account to get your API key. 2. **Choose Integration Method**: - **Direct URL Integration**: The simplest approach: ```javascript const transakUrl = new URL('https://global.transak.com/'); transakUrl.searchParams.append('apiKey', 'YOUR_API_KEY'); transakUrl.searchParams.append('defaultCryptoCurrency', 'AVAX'); transakUrl.searchParams.append('network', 'avalanche'); transakUrl.searchParams.append('walletAddress', userWalletAddress); window.open(transakUrl.href, '_blank'); ``` - **SDK Integration**: For web applications, add the Transak SDK to your project: ```bash npm install @transak/transak-sdk ``` ```javascript import transakSDK from '@transak/transak-sdk'; const transak = new transakSDK({ apiKey: 'YOUR_API_KEY', environment: 'PRODUCTION', // or 'STAGING' for testing defaultCryptoCurrency: 'AVAX', network: 'avalanche', walletAddress: userWalletAddress, // Pre-fill user's wallet themeColor: '000000', // Custom color in hex hostURL: window.location.origin, widgetHeight: '650px', widgetWidth: '450px', hideMenu: false, exchangeScreenTitle: 'Buy Crypto', disableWalletAddressForm: false, }); transak.init(); ``` 3. 
**Handle Events**: Listen for transaction events: ```javascript transak.on(transak.EVENTS.TRANSAK_WIDGET_CLOSE, () => { // Handle widget close }); transak.on(transak.EVENTS.TRANSAK_ORDER_SUCCESSFUL, (orderData) => { // Handle successful purchase console.log(orderData); }); transak.on(transak.EVENTS.TRANSAK_ORDER_FAILED, (orderData) => { // Handle failed purchase console.log(orderData); }); ``` 4. **Set Up Webhooks** (Optional): Implement server-side webhooks to receive transaction updates: ```javascript // Example Express.js webhook handler app.post('/transak-webhook', (req, res) => { const payload = req.body; if (payload.status === 'COMPLETED') { // Process completed transaction } res.status(200).send('Webhook received'); }); ``` 5. **Test in Staging**: Use the staging environment with test cards to verify your integration before going live. ## Documentation For more details, visit the [Transak Documentation](https://docs.transak.com/). ## Use Cases **Wallets**: Let users purchase crypto directly within your wallet app. **DeFi Platforms**: On-ramp for users to acquire tokens needed for DeFi services. **NFT Marketplaces**: Enable direct NFT purchases with fiat, skipping the crypto acquisition step. **GameFi Applications**: Let gamers purchase in-game assets or tokens without prior crypto knowledge. **Decentralized Applications**: Reduce friction by integrating a native fiat entry point. ## Pricing Transak operates on a transaction fee model: - **Fee Range**: Typically 0.5% to 3% per transaction - **Custom Fee Structure**: Enterprise solutions with custom fee arrangements available - **Revenue Sharing**: Partnership opportunities with revenue sharing for qualified partners - **No Monthly Fees**: Pay only for successful transactions The fee structure can vary by region, payment method, and transaction volume. For detailed pricing information, contact Transak's sales team. 
# Transfero (/integrations/transfero) --- title: Transfero category: Assets available: ["C-Chain"] description: "Transfero provides regulated stablecoins including BRZ (Brazilian Real stablecoin), offering tokenized fiat currencies for Latin American markets." logo: /images/transfero.jpg developer: Transfero website: https://www.transfero.com/ documentation: https://www.transfero.com/ --- ## Overview Transfero is a regulated fintech company providing stablecoins for Latin American markets, including BRZ (Brazilian Real stablecoin). With regulatory compliance and transparent reserve management, it lets users and businesses access tokenized Latin American fiat currencies on blockchain for cross-border payments, remittances, and digital asset transactions. # Truffle Suite (/integrations/truffle) --- title: Truffle Suite category: Developer Tooling available: ["C-Chain", "All Avalanche L1s"] description: "[Deprecated] Truffle Suite is a development environment for Ethereum and EVM-compatible smart contracts including testing and deployment tools." logo: /images/truffle.png developer: Consensys website: https://archive.trufflesuite.com/ documentation: https://archive.trufflesuite.com/docs/ --- > **⚠️ Deprecated:** Truffle Suite was discontinued by ConsenSys in December 2023. Migrate to [Hardhat](/integrations/hardhat) or [Foundry](/integrations/foundry) for smart contract development. ## Overview Truffle Suite is a world-class development environment, testing framework, and asset pipeline for blockchains using the Ethereum Virtual Machine (EVM). As part of the Consensys ecosystem, Truffle provides developers with the tools needed to develop, test, and deploy smart contracts on Avalanche and other EVM-compatible networks. 
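Step 3 of the Getting Started list below adds Avalanche networks to `truffle-config.js`. A minimal sketch, assuming `@truffle/hdwallet-provider` with a mnemonic supplied via the `MNEMONIC` environment variable; the RPC URLs are the public Avalanche endpoints and the remaining values are illustrative:

```javascript
// truffle-config.js sketch: Avalanche C-Chain (mainnet) and Fuji (testnet).
// Assumes @truffle/hdwallet-provider is installed as a dev dependency.
const HDWalletProvider = require("@truffle/hdwallet-provider");

module.exports = {
  networks: {
    avalanche: {
      provider: () =>
        new HDWalletProvider(
          process.env.MNEMONIC,
          "https://api.avax.network/ext/bc/C/rpc"
        ),
      network_id: 43114, // Avalanche C-Chain
      timeoutBlocks: 60,
    },
    fuji: {
      provider: () =>
        new HDWalletProvider(
          process.env.MNEMONIC,
          "https://api.avax-test.network/ext/bc/C/rpc"
        ),
      network_id: 43113, // Fuji testnet
    },
  },
  compilers: {
    solc: { version: "0.8.19" }, // match your contracts' pragma
  },
};
```

With this in place, `truffle migrate --network fuji` targets the testnet and `--network avalanche` the C-Chain mainnet.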
## Features - **Smart Contract Development**: Built-in smart contract compilation and linking - **Automated Testing**: Framework for writing and running tests - **Scriptable Deployment**: Configurable deployment scripts - **Network Management**: Deploy to multiple networks easily - **Ganache Integration**: Local blockchain for development - **Debugger**: Interactive debugger for troubleshooting ## Getting Started To use Truffle with Avalanche: 1. **Install Truffle**: `npm install -g truffle` 2. **Create Project**: `truffle init` to start new project 3. **Configure Network**: Add Avalanche networks to truffle-config.js 4. **Write Contracts**: Develop Solidity smart contracts 5. **Test and Deploy**: Run tests and deploy to Avalanche ## Documentation For comprehensive guides, visit [Truffle Documentation](https://archive.trufflesuite.com/docs/). ## Use Cases - **Smart Contract Development**: Full development lifecycle support - **DApp Development**: Build complete decentralized applications - **Testing**: Comprehensive testing of contract functionality - **CI/CD Integration**: Automated deployment pipelines # Turnkey (/integrations/turnkey) --- title: Turnkey category: Wallets and Account Abstraction available: ["C-Chain", "All Avalanche L1s"] description: "Turnkey provides programmable wallet infrastructure with secure key management for building Web3 applications." logo: /images/turnkey.png developer: Turnkey website: https://www.turnkey.com/ documentation: https://docs.turnkey.com/ --- ## Overview Turnkey provides programmable wallet infrastructure for building secure, scalable Web3 applications. Using secure enclaves and policy-based controls, developers can create and manage wallets with high security standards. 
## Features - **Secure Enclaves**: Keys protected in secure hardware - **Programmable Policies**: Flexible policy engine for transaction controls - **API-First**: Complete API for wallet operations - **Non-Custodial**: Users maintain control of their assets - **Scalable**: Infrastructure built for high-volume applications - **Developer Friendly**: SDKs and documentation ## Getting Started 1. **Sign Up**: Create account at [Turnkey](https://www.turnkey.com/) 2. **Get API Keys**: Access credentials from dashboard 3. **Choose SDK**: Select SDK for your platform 4. **Implement**: Add Turnkey wallet functionality 5. **Configure Policies**: Set up transaction policies 6. **Launch**: Deploy wallet-enabled application ## Documentation For integration guides, visit [Turnkey Documentation](https://docs.turnkey.com/). ## Use Cases - **Embedded Wallets**: Build applications with embedded wallet functionality - **Institutional Ops**: Secure wallet operations for institutions - **Developer Platforms**: Add wallet capabilities to developer tools - **Enterprise Apps**: Wallet infrastructure for enterprise applications # Twinstake (/integrations/twinstake) --- title: Twinstake category: Validator Infrastructure available: ["C-Chain", "All Avalanche L1s"] description: "Twinstake is an institutional digital asset staking provider offering secure validator infrastructure and staking-as-a-service." logo: /images/twinstake.png developer: Twinstake website: https://twinstake.io/ documentation: https://docs.twinstake.io/ --- ## Overview Twinstake is an institutional digital asset infrastructure company specializing in staking services and validator operations. It serves institutional clients including asset managers, exchanges, and custodians with compliant infrastructure for participating in Avalanche and other proof-of-stake networks. 
## Features - **Institutional Grade**: Infrastructure designed for institutional compliance requirements - **Staking-as-a-Service**: Complete staking solutions for enterprise clients - **Multi-Chain Support**: Validation and staking across leading PoS networks - **Security Standards**: Enterprise security with key management - **Compliance Focus**: Built for regulated financial institutions - **White-Glove Service**: Dedicated support for institutional clients ## Getting Started 1. **Contact Twinstake**: Reach out through [Twinstake](https://twinstake.io/) 2. **Solution Design**: Work with Twinstake to design your staking solution 3. **Onboarding**: Complete institutional onboarding process 4. **Deployment**: Launch your staking or validation operations 5. **Ongoing Support**: Receive dedicated institutional support ## Documentation For more information, visit [Twinstake Documentation](https://docs.twinstake.io/). ## Use Cases - **Asset Manager Staking**: Staking operations for crypto asset managers - **Exchange Infrastructure**: Staking services for cryptocurrency exchanges - **Custodian Solutions**: Support custodial staking for institutions - **Fund Validation**: Professional validation for crypto funds # Ultravioleta DAO (/integrations/ultravioletadao) --- title: Ultravioleta DAO category: x402 available: ["C-Chain"] description: Ultravioleta DAO offers the x402 protocol, a payment infrastructure that enables monetization of AI agents and APIs on Avalanche. logo: /images/ultravioletadao.png developer: Ultravioleta DAO website: https://ultravioletadao.xyz documentation: https://facilitator.ultravioletadao.xyz --- ## Overview Ultravioleta DAO, a Web3 DAO based in Latin America, offers the x402 protocol -- payment infrastructure for monetizing AI agents and APIs on Avalanche. The protocol addresses a core problem: autonomous agents cannot operate without gas money. 
The x402 facilitator verifies transactions, covers gas fees, and settles payments on-chain, allowing agents and services to operate autonomously using USDC payments. The facilitator is available on both Avalanche C-Chain (Mainnet) and Avalanche Fuji (Testnet), enabling developers to test their integrations before deploying to production. ## What is x402? x402 is an HTTP-based payment protocol built on the standard HTTP 402 "Payment Required" status code. It enables micropayments between services without requiring the paying party to hold native gas tokens. The protocol uses ERC-3009 meta-transactions to allow third-party facilitators to sponsor gas fees while maintaining cryptographic proof of payment authorization. **Key participants:** - **Merchants**: Service providers who monetize their APIs or agent services - **Clients**: Users or agents who pay for access to services - **Facilitators**: Infrastructure providers who verify payments and cover gas fees ## Features - **Gasless Payments**: Clients don't need AVAX for gas fees - the facilitator sponsors all transactions - **Stablecoin Settlement**: Payments settled in USDC for predictable pricing - **Cryptographic Security**: EIP-712 signatures ensure payment authenticity and prevent replay attacks - **Stateless Verification**: All payment verification happens on-chain without requiring databases - **Fast & Free**: Sub-second payment verification with no fees for using the facilitator service - **Load Balanced**: Auto-scalable infrastructure ensures high availability and reliability - **Community Operated**: Fully decentralized and community-governed payment infrastructure ## Getting Started for Merchants 1. **Set up your service endpoint** that you want to monetize 2. **Configure the x402 middleware** to protect your endpoint with a price tag 3. **Specify your payment address** where USDC payments should be received 4. 
**Deploy your service** - the facilitator handles all payment verification and settlement Example middleware configuration: ```typescript import { Hono } from "hono"; import { paymentMiddleware } from "x402-hono"; const app = new Hono(); // Configure payment middleware app.use(paymentMiddleware( "0xYourWalletAddress", { "/api/service": { price: "$0.01", network: "avalanche-c-chain", // or "avalanche-fuji" for testnet config: { description: "Access to your API service" } } }, { url: 'https://facilitator.ultravioletadao.xyz' } )); ``` ## Getting Started for Clients To pay for x402-protected services: 1. **Initialize the x402 client** with your private key and the facilitator URL 2. **Make requests** to protected endpoints - payment happens automatically 3. **Receive the response** once payment is verified and settled on-chain Example client usage: ```typescript import { X402Client } from "x402-client"; const client = new X402Client({ privateKey: process.env.CLIENT_PRIVATE_KEY, facilitatorUrl: 'https://facilitator.ultravioletadao.xyz', network: 'avalanche-c-chain' // or 'avalanche-fuji' for testnet }); // Make a paid request const response = await client.post('https://api.example.com/service', { data: { query: 'your request' } }); ``` ## Use Cases ### AI Agent Monetization Allow autonomous AI agents to offer services and receive payments without manual intervention. Agents can charge per request, per computation, or per resource consumed. ### Pay-Per-Use APIs Monetize API endpoints with micropayment pricing. Instead of subscription tiers, charge users only for what they consume at granular levels. ### Agent-to-Agent Marketplaces Build trustless marketplaces where autonomous agents buy and sell services from each other, creating self-sustaining AI economies. ### Token-Gated Services Provide access to premium services, data feeds, or computational resources based on per-use payments rather than upfront subscriptions. 
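The payment authorization at the heart of the protocol is an EIP-712 signature over the ERC-3009 `TransferWithAuthorization` struct. A minimal sketch of the payload a client signs, with placeholder addresses and an assumed USDC signing domain; verify the token's actual EIP-712 domain (name, version, contract address) before relying on these values:

```javascript
// Sketch: the EIP-712 payload behind an x402 payment authorization.
// Addresses and the signing domain below are illustrative placeholders.
const domain = {
  name: "USD Coin", // assumed token domain name; check the deployed contract
  version: "2",
  chainId: 43114, // Avalanche C-Chain
  verifyingContract: "0x0000000000000000000000000000000000000001", // placeholder
};

// Field layout fixed by ERC-3009's TransferWithAuthorization typehash.
const types = {
  TransferWithAuthorization: [
    { name: "from", type: "address" },
    { name: "to", type: "address" },
    { name: "value", type: "uint256" },
    { name: "validAfter", type: "uint256" },
    { name: "validBefore", type: "uint256" },
    { name: "nonce", type: "bytes32" },
  ],
};

const now = Math.floor(Date.now() / 1000);
const message = {
  from: "0x0000000000000000000000000000000000000002", // client (placeholder)
  to: "0x0000000000000000000000000000000000000003", // merchant (placeholder)
  value: "10000", // $0.01 in 6-decimal USDC units
  validAfter: 0,
  validBefore: now + 300, // authorization expires in 5 minutes
  nonce: "0x" + "00".repeat(32), // must be a unique 32-byte nonce in practice
};

// With ethers v6, the client would sign and hand the result to the merchant:
//   const signature = await wallet.signTypedData(domain, types, message);
// The facilitator later submits transferWithAuthorization(...) with this
// signature, paying gas in AVAX while the value moves in USDC.
```

The facilitator only needs the signed tuple to settle: replaying it is prevented by the nonce, and the `validAfter`/`validBefore` window bounds how long the authorization stays live.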
## Avalanche Integration The x402 facilitator is optimized for Avalanche deployment on both mainnet and testnet: ### Avalanche C-Chain (Mainnet) - **Chain ID**: 43114 - **Native Token**: AVAX (used by facilitator for gas fees) - **Payment Token**: USDC - **Finality**: Sub-second transaction finality for fast payment confirmation - **EVM Compatibility**: Full support for ERC-3009 meta-transactions ### Avalanche Fuji (Testnet) - **Chain ID**: 43113 - **Native Token**: AVAX (used by facilitator for gas fees) - **Payment Token**: USDC - **Purpose**: Test your integration before deploying to mainnet The facilitator maintains a hot wallet funded with AVAX to sponsor gas fees for all USDC payment settlements. Clients only need USDC in their wallets - no AVAX required. ## How It Works 1. **Client Request**: Client signs payment authorization using EIP-712 and sends request to merchant 2. **Merchant Verification**: Merchant forwards payment proof to facilitator for verification 3. **Facilitator Check**: Facilitator verifies signature, checks on-chain balance and nonce 4. **Settlement**: Facilitator submits `transferWithAuthorization()` transaction, paying gas fees 5. **Response**: Once settled, merchant receives confirmation and responds to client All settlements happen on Avalanche using the ERC-3009 standard, ensuring cryptographic security and on-chain auditability. ## Documentation For more details, visit the [Ultravioleta DAO facilitator](https://facilitator.ultravioletadao.xyz). ## About Ultravioleta DAO Ultravioleta DAO is a Web3 DAO in Latin America focused on decentralized infrastructure for autonomous agent economies. The organization develops open-source protocols and tools for trustless interactions between AI agents, covering payment infrastructure, decentralized governance, and blockchain-based monetization. Learn more at [ultravioletadao.xyz](https://ultravioletadao.xyz/). 
# Van Eck (/integrations/van-eck) --- title: Van Eck category: Assets available: ["C-Chain"] description: "Van Eck is a global investment management firm offering tokenized funds including VBILL, providing innovative blockchain-based investment solutions." logo: /images/van-eck.png developer: Van Eck website: https://www.vaneck.com/ documentation: https://www.vaneck.com/us/en/digital-assets/ --- ## Overview Van Eck is an investment management firm with a long history of financial product innovation, now integrating blockchain technology into traditional asset management. Through tokenized funds like VBILL and other digital asset offerings, Van Eck provides investors access to blockchain-native investment products that combine institutional asset management with the efficiency and transparency of distributed ledger technology. # VIA Labs (/integrations/vialabs) --- title: VIA Labs category: Crosschain Solutions available: ["C-Chain"] description: "VIA Labs enables Avalanche L1s to integrate cross-chain USDC with Proto-USD, leveraging Avalanche's ICM infrastructure and Circle's CCTP." logo: /images/vialabs.png developer: VIA Labs website: https://vialabs.io/ documentation: https://developer.vialabs.io/general/package --- ## Overview VIA Labs provides Avalanche L1s with Proto-USD, a solution for cross-chain USDC integration. By combining Avalanche's native Interchain Messaging (ICM) with Circle's Cross-Chain Transfer Protocol (CCTP), any Avalanche L1 can receive and use USDC from Ethereum, Base, Solana, and other CCTP-enabled chains. ## Features - **Native ICM Integration**: Utilizes Avalanche's built-in cross-chain messaging infrastructure. - **USDC Cross-Chain Support**: Enables transfer of USDC between Avalanche L1s and other major blockchains. - **Bridged USDC Standard**: Implements standardized approach for USDC representation across chains. - **CCTP Compatibility**: Uses Circle's Cross-Chain Transfer Protocol for secure token transfers. 
- **Testnet-Ready**: Available on testnet for all Avalanche L1s. - **Path to Native Issuance**: Sets Avalanche L1s on an onboarding path to potential native USDC issuance. - **Multi-Chain Support**: Compatible with Ethereum, Base, Solana (coming soon), and other CCTP-enabled chains. ## Getting Started 1. **Access Documentation**: Review the [VIA Labs Developer Documentation](https://developer.vialabs.io/general/package). 2. **Explore Demo**: Visit the [Proto-USD demo](https://avax.protousd.com/) to see the technology in action. 3. **Integration Steps**: - Set up ICM infrastructure on your Avalanche L1 - Configure CCTP message passing - Implement Proto-USD contracts - Test cross-chain USDC transfers 4. **Deployment**: Deploy to testnet first, then to mainnet after thorough testing. ## Documentation For more details, visit the [VIA Labs Developer Portal](https://developer.vialabs.io/general/package). ## Use Cases - **Cross-Chain DeFi**: Access USDC liquidity from other chains for DeFi applications. - **Multi-Chain dApps**: Build applications that use USDC across different blockchains. - **Treasury Management**: Manage USDC treasury across multiple networks. - **Enhanced Liquidity**: Tap into USDC liquidity from major blockchains. - **Cross-Chain Stablecoins**: Provide users with cross-chain stablecoin functionality. ## Official Launch Update [See the official announcement on X](https://x.com/VIA_Labs/status/1895156949467161060) # VNX (/integrations/vnx) --- title: VNX category: Assets available: ["C-Chain"] description: "VNX provides regulated stablecoins including VEUR and VCHF, offering tokenized fiat currencies backed by traditional financial infrastructure." logo: /images/vnx.png developer: VNX website: https://vnx.li/ documentation: https://docs.vnx.li/ --- ## Overview VNX is a regulated stablecoin issuer providing tokenized fiat currencies including VEUR (Euro stablecoin) and VCHF (Swiss Franc stablecoin). 
Built on blockchain infrastructure with traditional financial backing, VNX enables users and institutions to access stable digital representations of major fiat currencies with full regulatory compliance and transparent reserve management. # Web3Auth (/integrations/web3auth) --- title: Web3Auth category: Developer Tooling available: ["C-Chain", "All Avalanche L1s"] description: "Web3Auth provides authentication infrastructure enabling social logins and passwordless authentication for Web3 applications." logo: /images/web3auth.png developer: Web3Auth website: https://web3auth.io/ documentation: https://web3auth.io/docs/ --- ## Overview Web3Auth is pluggable auth infrastructure for Web3 applications, enabling social logins, passwordless authentication, and wallet creation. It abstracts key management complexity so applications can onboard both crypto-native and mainstream users with familiar authentication methods. ## Features - **Social Logins**: Login with Google, Twitter, Discord, and more - **Passwordless Auth**: Email and SMS authentication options - **MPC Key Management**: Secure multi-party computation for key security - **White-Label**: Fully customizable authentication flows - **Multi-Platform**: Support for web, mobile, and gaming platforms - **Self-Custodial**: Users maintain control of their keys ## Getting Started 1. **Sign Up**: Create account at [Web3Auth](https://web3auth.io/) 2. **Get Client ID**: Access credentials from dashboard 3. **Choose SDK**: Select SDK for your platform (Web, React Native, Unity) 4. **Configure**: Set up authentication providers 5. **Implement**: Add Web3Auth to your application 6. **Launch**: Enable social login for your users ## Documentation For integration guides, visit [Web3Auth Documentation](https://web3auth.io/docs/). 
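Pointing Web3Auth at the Avalanche C-Chain is done by passing an EVM chain configuration at initialization. A sketch of those values, with field names following the `chainConfig` shape used by earlier Web3Auth SDK versions; treat them as assumptions and confirm against the current docs:

```javascript
// Sketch: Avalanche C-Chain configuration for a Web3Auth-style init.
// Field names are assumed from Web3Auth's EVM chainConfig; verify before use.
const avalancheChainConfig = {
  chainNamespace: "eip155", // EVM chains
  chainId: "0xa86a", // 43114 in hex: Avalanche C-Chain
  rpcTarget: "https://api.avax.network/ext/bc/C/rpc",
  displayName: "Avalanche C-Chain",
  blockExplorerUrl: "https://snowtrace.io",
  ticker: "AVAX",
  tickerName: "Avalanche",
};

// Passed to the SDK roughly as:
//   const web3auth = new Web3Auth({
//     clientId: "YOUR_CLIENT_ID",
//     chainConfig: avalancheChainConfig,
//   });
//   await web3auth.initModal();
//   const provider = await web3auth.connect(); // EIP-1193 provider for the C-Chain
```

The returned provider plugs into the usual EVM tooling, so the rest of the application treats a social-login user like any other C-Chain wallet.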
## Use Cases - **User Onboarding**: Frictionless sign-up with social logins - **Gaming**: Easy player authentication for Web3 games - **DeFi Access**: Lower barrier to DeFi participation - **Enterprise Apps**: Familiar auth flows for enterprise users # Whitewallet (/integrations/whitewallet) --- title: Whitewallet category: Wallets available: ["C-Chain"] description: "Secure, non-custodial multi-chain wallet for managing, swapping, and exploring Web3 assets." logo: /images/whitewallet.png developer: Whitewallet website: https://whitewallet.app/?utm_source=avax_eco&utm_campaign=catalogues documentation: https://whitewallet.app/?utm_source=avax_eco&utm_campaign=catalogues --- ## Overview Whitewallet is a non-custodial cryptocurrency wallet for managing, swapping, and exploring Web3 assets across multiple blockchains. It supports Ethereum, Avalanche, Solana, Arbitrum, Whitechain, BNB Chain, Optimism, Tron, TON, and others through a unified interface. Users maintain full control of their private keys and digital assets. ## Features - **Multi-Chain Support**: Store, send, receive, and manage crypto assets across Ethereum, Avalanche, Solana, Arbitrum, Whitechain, BNB Chain, Optimism, Tron, TON, and other supported blockchains from a single interface. - **Cross-Chain Swaps**: Execute asset swaps between supported networks directly within the wallet, without external bridges or additional tools. - **Non-Custodial Security**: Full control of your private keys with self-custody architecture. No risk of account freezes or third-party access. - **Multi-Asset Management**: Support for leading digital assets including Ethereum (ETH), Tether (USDT), USD Coin (USDC), AVAX, WBT, DAI, BNB, TRX, TON, OP, XPL, ARB, and many other tokens across supported networks. - **User-Friendly Interface**: Streamlined token management and portfolio tracking for both beginners and experienced users. - **Web3 Integration**: Connect to decentralized applications and DeFi protocols directly from the wallet. 
## Getting Started 1. **Download the Wallet**: Visit the [Whitewallet website](https://whitewallet.app/?utm_source=avax_eco&utm_campaign=catalogues) to download the application for your preferred platform. 2. **Create or Import Wallet**: Set up a new wallet or import an existing one using your recovery phrase. 3. **Start Managing Assets**: Begin sending, receiving, and managing your crypto assets across supported chains. ## Documentation For guides visit the [Whitewallet Website](https://whitewallet.app/?utm_source=avax_eco&utm_campaign=catalogues). ## Use Cases - **Cross-Chain Asset Management**: Manage crypto holdings across multiple blockchain networks from a single wallet. - **DeFi Participation**: Connect to DeFi protocols on Avalanche and other chains to lend, borrow, stake, and trade. - **Token Swapping**: Execute cross-chain swaps without leaving the wallet. - **dApp Interaction**: Connect to dApps across supported networks for gaming, NFT trading, and other Web3 activities. - **Portfolio Tracking**: Monitor asset balances and transaction history across all supported chains. # WisdomTree (/integrations/wisdomtree) --- title: WisdomTree category: Assets available: ["C-Chain"] description: WisdomTree is a global financial innovator offering exchange-traded products, a blockchain-native app (WisdomTree Prime), and institutional platform (WisdomTree Connect) bridging traditional finance with tokenized real-world assets. logo: /images/wisdomtree.jpg developer: WisdomTree website: https://www.wisdomtree.com/ documentation: https://www.wisdomtree.com/digital-assets --- ## Overview WisdomTree is a financial services company with over $100 billion in assets under management, known for exchange-traded products and now integrating blockchain technology into traditional finance. 
It operates three platforms: traditional ETPs (Exchange-Traded Products), WisdomTree Prime (a blockchain-native consumer app), and WisdomTree Connect (an institutional tokenization and infrastructure platform). These platforms provide access to tokenized real-world assets, cryptocurrencies, stablecoins, and traditional securities, with regulatory compliance built on two decades in traditional finance. ## Platforms ### WisdomTree Prime A mobile-first blockchain app for retail investors offering: - **Crypto Trading**: Buy, sell, and hold major cryptocurrencies - **Tokenized Assets**: Access to tokenized commodities, currencies, and other real-world assets - **Staking and Yield**: Earn rewards through staking and yield-generating products - **Traditional Securities**: Invest in stocks and ETFs alongside digital assets - **Wallet Functionality**: Self-custodial and custodial wallet options - **Dollar-Cost Averaging**: Automated recurring investment features - **Educational Resources**: In-app learning materials about crypto and blockchain ### WisdomTree Connect An institutional-grade platform for tokenization and asset management: - **Asset Tokenization**: Infrastructure for tokenizing real-world assets - **Blockchain-as-a-Service**: White-label blockchain infrastructure for institutions - **Digital Asset Custody**: Institutional custody solutions - **Smart Contract Platform**: Develop and deploy tokenized products - **Regulatory Compliance**: Built-in compliance for regulated tokenized securities - **Multi-Chain Support**: Deploy across multiple blockchain networks - **API Integration**: Enterprise APIs for seamless integration ### Traditional ETPs WisdomTree continues to offer its flagship exchange-traded products: - **Equity ETFs**: Global equity exposure across markets and sectors - **Fixed Income ETFs**: Bond and fixed income strategies - **Commodity ETFs**: Gold, silver, and commodity exposure - **Currency-Hedged ETFs**: International exposure with currency 
hedging - **Alternative Strategy ETFs**: Sophisticated investment strategies ## Features - **Multi-Asset Platform**: Access crypto, tokenized assets, stocks, ETFs, and commodities in one ecosystem. - **Tokenized Real-World Assets**: Invest in blockchain-native representations of traditional assets. - **Institutional Infrastructure**: Enterprise-grade custody, compliance, and security for digital assets. - **Regulatory Compliance**: Fully regulated platforms with licenses in multiple jurisdictions. - **Blockchain Integration**: Native blockchain infrastructure for asset tokenization and management. - **Mobile-First Experience**: User-friendly apps for retail investors (WisdomTree Prime). - **Institutional Services**: White-label solutions and infrastructure for financial institutions (WisdomTree Connect). - **Smart Contract Innovation**: Programmable assets with automated compliance and distribution. - **Staking and Yield**: Access to yield-generating opportunities in digital assets. - **Educational Focus**: Resources to help users understand digital assets. - **Traditional Finance Bridge**: Integration between TradFi and DeFi. - **Global Presence**: Operations in US, Europe, and other major markets. ## Getting Started ### For Retail Investors (WisdomTree Prime): 1. Download the WisdomTree Prime app from iOS or Android app stores 2. Create an account and complete identity verification 3. Fund your account via bank transfer or other methods 4. Explore available assets: crypto, tokenized products, stocks, ETFs 5. Build a diversified portfolio across asset classes 6. Set up automated investing and track performance ### For Institutions (WisdomTree Connect): 1. Contact WisdomTree's institutional team to discuss requirements 2. Explore tokenization services, custody solutions, or infrastructure offerings 3. Define use case: asset tokenization, product development, or white-label platform 4. Work with WisdomTree to implement compliant blockchain solutions 5. 
Launch tokenized products or services on WisdomTree's infrastructure 6. Access ongoing support and platform enhancements ## Avalanche Support WisdomTree's infrastructure supports multiple blockchain networks for deploying tokenized assets. While specific implementations may vary, WisdomTree Connect's multi-chain architecture is compatible with EVM networks including Avalanche C-Chain, enabling efficient deployment of tokenized products on Avalanche's high-performance infrastructure. ## Tokenized Products WisdomTree offers various tokenized asset products: **Tokenized Commodities**: Digital representations of gold, silver, and other physical commodities. **Tokenized Securities**: Blockchain-native versions of traditional securities with programmable features. **Stablecoins**: Dollar-pegged digital currencies for stability and efficiency. **Utility Tokens**: Tokens providing access to specific products or services within the ecosystem. **Yield-Bearing Tokens**: Tokens that automatically generate returns through staking or other mechanisms. ## Use Cases WisdomTree's platforms serve multiple needs: **Retail Wealth Building**: Individual investors using Prime to build diversified portfolios across traditional and digital assets. **Institutional Tokenization**: Asset managers using Connect to tokenize funds, securities, or other assets. **Corporate Treasury**: Companies managing digital asset treasuries through WisdomTree's institutional platform. **Financial Institution Partnerships**: Banks and brokers white-labeling WisdomTree's technology to offer digital assets. **Cross-Border Payments**: Using tokenized assets for efficient international settlements. **Yield Optimization**: Institutions accessing yield opportunities through tokenized products. **Product Innovation**: Financial services companies developing new tokenized investment products on WisdomTree's infrastructure. 
## Regulatory Framework WisdomTree maintains regulatory compliance across jurisdictions: - **SEC-Registered**: Registered investment adviser with the U.S. Securities and Exchange Commission - **FINRA Member**: Member of Financial Industry Regulatory Authority - **FCA Regulated**: Authorized and regulated by UK Financial Conduct Authority - **State Licenses**: Money transmitter licenses for digital asset services - **European Licenses**: Regulatory approvals across European jurisdictions - **Crypto Compliance**: Adheres to crypto-specific regulations including AML/KYC - **Regular Audits**: Ongoing compliance audits and regulatory examinations ## WisdomTree's Digital Asset Strategy WisdomTree has invested significantly in blockchain and digital assets: - **Early Mover**: Among the first traditional asset managers to adopt blockchain technology - **Infrastructure Investment**: Developed proprietary blockchain platforms (Prime and Connect) - **Strategic Acquisitions**: Acquired and built technology to support digital asset vision - **Patent Portfolio**: Holds multiple patents related to blockchain and tokenization - **Industry Leadership**: Active in shaping regulations and industry standards - **Education Focus**: Active voice in educating investors about digital assets ## Competitive Advantages **Established Brand**: Over 20 years in financial services with $100B+ AUM building trust. **Dual Platform Approach**: Serves both retail (Prime) and institutional (Connect) markets. **Regulatory Certainty**: Fully licensed and compliant across multiple jurisdictions. **Traditional Finance Expertise**: Deep understanding of asset management and ETF structures. **Blockchain Innovation**: Early adopter with proprietary infrastructure and technology. **Multi-Asset Capability**: Blend traditional and digital assets. **Global Reach**: International operations and regulatory licenses. 
## Pricing WisdomTree offers transparent pricing across platforms: ### WisdomTree Prime (Retail): - **Trading Fees**: Competitive fees on crypto and asset trades - **Management Fees**: Low fees on tokenized products and ETFs - **No Account Minimums**: Accessible to all investors - **Transparent Pricing**: Clear fee schedule with no hidden costs ### WisdomTree Connect (Institutional): - **Custom Pricing**: Tailored to institutional requirements - **Setup Fees**: Initial tokenization and platform setup costs - **Platform Fees**: Ongoing infrastructure and technology fees - **Service Fees**: Additional services like custody and compliance - **Volume Discounts**: Pricing scales with asset size and transaction volume Contact WisdomTree for detailed institutional pricing. ## Assets Under Management WisdomTree manages significant assets across platforms: - $100+ billion in traditional ETPs - Growing digital asset AUM through Prime and Connect platforms - Partnerships with major institutions for tokenization - Expanding tokenized product offerings # BDACS / Woori Bank (/integrations/woori-bank) --- title: BDACS / Woori Bank category: Assets available: ["C-Chain"] description: "BDACS in partnership with Woori Bank provides KRW1 (Korean Won stablecoin), offering a regulated tokenized representation of the Korean Won." logo: /images/woori-bank.png developer: BDACS / Woori Bank website: https://www.bdacs.co.kr/ documentation: https://www.bdacs.co.kr/ --- ## Overview BDACS, in partnership with Woori Bank, provides KRW1, a Korean Won-backed stablecoin. Users and businesses in South Korea and globally can access a stable digital representation of the Korean Won on blockchain. The stablecoin facilitates payments, remittances, and digital asset transactions with Korean Won stability. 
# Wormhole (/integrations/wormhole) --- title: Wormhole category: Crosschain Solutions available: ["C-Chain"] description: Wormhole is a leading interoperability platform powering multichain applications, messaging, and native token transfers across 30+ blockchains with secure, permissionless infrastructure. logo: /images/wormhole.png developer: Wormhole Foundation website: https://wormhole.com/ documentation: https://docs.wormhole.com/ --- ## Overview Wormhole is a widely adopted interoperability platform enabling cross-chain messaging, native token transfers, and multichain applications across 30+ blockchain networks. As a generic message passing protocol, it allows developers to build applications that interact with multiple blockchains simultaneously, unlocking liquidity and users across Web3. With billions of dollars in cross-chain value transferred and integrations with major DeFi protocols and blockchain infrastructure, Wormhole is key infrastructure for the multichain ecosystem. Its permissionless, decentralized architecture means any developer can build cross-chain applications without relying on centralized intermediaries. # Wyre (/integrations/wyre) --- title: Wyre category: Fiat On-Ramp available: ["C-Chain"] description: "[Deprecated] Wyre offers powerful APIs for fiat-to-crypto on-ramp and off-ramp solutions, enabling users to purchase and sell cryptocurrencies directly within applications." logo: /images/wyre.webp developer: Wyre website: https://wyre.studiofreight.com/ documentation: https://docs.sendwyre.com/ --- > **⚠️ Deprecated:** Wyre ceased operations in June 2023. This page is kept for historical reference. Consider alternatives like [Transak](/integrations/transak), [MoonPay](/integrations/moonpay), or [Banxa](/integrations/banxa) for fiat on/off-ramp solutions. ## Overview Wyre is a financial technology company that provides comprehensive API infrastructure for converting between fiat currencies and cryptocurrencies. 
Wyre's platform enables businesses to offer seamless fiat-to-crypto on-ramp and off-ramp solutions, allowing users to buy and sell digital assets directly within applications. With support for Avalanche and multiple payment methods, Wyre handles the complex aspects of payment processing, compliance, and liquidity management. Wyre's infrastructure is designed for developers who need a reliable, compliant, and feature-rich payment gateway that supports the full lifecycle of crypto transactions, from KYC verification to settlement. ## Features - **Wyre Checkout Widget**: Turnkey fiat-to-crypto on-ramp that enables selling crypto with just a few lines of integration code. - **Card Processing**: Accept debit and credit cards for cryptocurrency purchases with built-in fraud prevention. - **Global Payment Methods**: Support for bank transfers (ACH, wire), credit/debit cards, Apple Pay, and region-specific payment options. - **On-Ramp and Off-Ramp**: Complete buy and sell functionality to create a full-circle user experience. - **Custodial Wallets**: API-based wallet solution for holding crypto and fiat assets on behalf of users. - **KYC/AML Compliance Engine**: Built-in compliance as a service with user management system that meets regulatory requirements. - **Transfers and Swaps**: Move and convert assets across different blockchains and currencies. - **White-Label Solutions**: Customizable integration that can be branded to match your application. - **Webhooks**: Real-time notifications for transaction status updates and user events. - **Test Environment**: Sandbox environment for development and testing without real transactions or fees. - **Multi-Currency Support**: Handle multiple fiat currencies and cryptocurrencies in a single integration. ## Getting Started To integrate Wyre into your application: 1. **Create a Test Account**: Register at [dash.testwyre.com](https://dash.testwyre.com/) to get started with testing. 2. 
**Generate API Keys**: Obtain your test API keys from the dashboard to begin integration. 3. **Choose Integration Method**: Wyre offers multiple integration options: - **Wyre Checkout Widget**: Hosted solution for quick integration - **Hosted URL**: Direct users to a Wyre-hosted payment page - **White-Label API**: Build a fully custom interface using Wyre's APIs - **Wallet API**: Create and manage custodial wallets programmatically 4. **Implement Authentication**: Wyre uses signature-based authentication to secure API requests. Review the authentication documentation to properly sign your API calls. 5. **Test Your Integration**: Use the test environment to verify your integration with test cards and bank accounts before going live. 6. **Request Production Access**: After testing is complete, register a production account and complete the verification process to begin processing real transactions. ## Avalanche Support Wyre supports AVAX and other assets on the Avalanche C-Chain, enabling users to purchase Avalanche-based cryptocurrencies directly with fiat currencies. This allows developers building on Avalanche to provide seamless fiat on-ramp and off-ramp capabilities within their applications. ## Documentation For comprehensive integration guides, API references, and authentication examples, visit: - [Wyre Documentation](https://docs.sendwyre.com/) - [Quick Start Guide](https://docs.sendwyre.com/docs/quick-start) - [API Reference](https://docs.sendwyre.com/reference) - [Wyre Checkout Widget Guide](https://docs.sendwyre.com/docs/wyre-checkout-overview) ## Use Cases on Avalanche Wyre can enhance various Avalanche applications: **Cryptocurrency Wallets**: Enable users to purchase AVAX and other Avalanche tokens directly within wallet applications. **DeFi Platforms**: Provide a seamless fiat entry point for users to acquire tokens needed for Avalanche DeFi protocols. 
**NFT Marketplaces**: Allow direct purchase of NFTs on Avalanche with fiat payment methods, abstracting away the crypto acquisition step. **GameFi Applications**: Let gamers purchase in-game assets or tokens on Avalanche without needing prior cryptocurrency knowledge. **Decentralized Applications**: Reduce friction in your Avalanche dApp's user experience by integrating a native fiat entry and exit point. **Enterprise Solutions**: Offer compliant on/off-ramp solutions for corporate applications built on Avalanche. ## Pricing Wyre operates on a transaction fee model with pricing that varies based on payment method, region, and transaction volume. The fee structure typically includes: - **Transaction Fees**: Percentage-based fees on each transaction - **Payment Method Fees**: Different rates for cards, bank transfers, and other payment methods - **Volume Discounts**: Available for high-volume partners - **Custom Enterprise Solutions**: Tailored pricing for large-scale integrations For specific pricing details and enterprise arrangements, contact Wyre's sales team. ## Compliance and Security Wyre maintains robust compliance and security measures: - **Regulatory Compliance**: Licensed as a money transmitter and complies with state and federal regulations - **KYC/AML Processes**: Built-in identity verification and anti-money laundering screening - **Fraud Prevention**: Advanced risk management systems to prevent fraudulent transactions - **Data Security**: Enterprise-grade security infrastructure to protect user data and funds - **Regular Audits**: Ongoing compliance audits and security assessments ## Conclusion Wyre provides a comprehensive fiat payment infrastructure that enables developers to build seamless on-ramp and off-ramp experiences for Avalanche applications. 
By handling the complex aspects of payment processing, regulatory compliance, and liquidity, Wyre allows developers to focus on building their core products while providing users with easy access to the Avalanche ecosystem. With flexible integration options, extensive payment method support, and robust compliance features, Wyre is an ideal solution for any project looking to reduce friction in user onboarding and provide complete fiat-to-crypto functionality on Avalanche. # x402-rs (/integrations/x402-rs) --- title: x402-rs category: x402 available: ["C-Chain"] description: A Rust-based implementation of the x402 protocol for accepting blockchain payments through HTTP on Avalanche. logo: /images/x402-rs.png developer: x402-rs website: https://github.com/x402-rs/x402-rs documentation: https://github.com/x402-rs/x402-rs --- ## Overview x402-rs is a Rust-based implementation of the x402 protocol that enables blockchain payments directly through HTTP using the native `402 Payment Required` status code on Avalanche's C-Chain. The x402 protocol allows servers to declare payment requirements for specific routes. Clients send cryptographically signed payment payloads, and facilitators verify and settle payments on-chain. ## What x402-rs Provides - **x402-rs core**: Protocol types, facilitator traits, and logic for on-chain payment verification and settlement - **Facilitator binary**: Production-grade HTTP server to verify and settle x402 payments - **x402-axum**: Axum middleware for accepting x402 payments - **x402-reqwest**: Wrapper for reqwest for transparent x402 payments ## Getting Started ### Run the Facilitator The quickest way to get started is using Docker: ```bash docker run --env-file .env -p 8080:8080 ukstv/x402-facilitator ``` Or build locally: ```bash docker build -t x402-rs . 
docker run --env-file .env -p 8080:8080 x402-rs ``` ### Protect Axum Routes Use `x402-axum` to gate your routes behind on-chain payments on Avalanche: ```rust let x402 = X402Middleware::try_from("https://x402.org/facilitator/").unwrap(); let usdc = USDCDeployment::by_network(Network::AvalancheFuji); let app = Router::new().route("/paid-content", get(handler).layer( x402.with_price_tag(usdc.amount("0.025").pay_to("0xYourAddress").unwrap()) ), ); ``` See the [x402-axum crate documentation](https://docs.rs/x402-axum) for more details. ### Send x402 Payments Use `x402-reqwest` to send payments on Avalanche: ```rust use x402_reqwest::X402ClientExt; let signer: PrivateKeySigner = "0x...".parse()?; // never hardcode real keys! let client = reqwest::Client::new() .with_payments(signer) .prefer(USDCDeployment::by_network(Network::Avalanche)) .max(USDCDeployment::by_network(Network::Avalanche).amount("1.00")?) .build(); let res = client .get("https://example.com/protected") .send() .await?; ``` See the [x402-reqwest crate documentation](https://docs.rs/x402-reqwest) for more details. ## Facilitator The x402-rs facilitator is a runnable binary that simplifies x402 adoption by handling: - **Payment verification**: Confirming that client-submitted payment payloads match the declared requirements - **Payment settlement**: Submitting validated payments to the blockchain and monitoring their confirmation By using a facilitator, servers (sellers) do not need to: - Connect directly to a blockchain - Implement complex cryptographic or blockchain-specific payment logic The facilitator never holds user funds. It acts solely as a stateless verification and execution layer for signed payment payloads. 
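The facilitator's verify step described above can be pictured as a pure structural check of the client's signed payload against the server's declared requirements. The sketch below is illustrative only: the type and field names (`PaymentRequirements`, `amountRequired`, `payTo`, and so on) are hypothetical rather than x402-rs's actual types, and a real facilitator additionally verifies the payload's cryptographic signature and settles the transfer on-chain.

```typescript
// Illustrative sketch of the facilitator's "verify" step.
// All names here are hypothetical, not x402-rs's real API; real
// verification also checks the payload's signature and the payer's
// on-chain balance before settlement.
interface PaymentRequirements {
  network: string;        // e.g. "avalanche-fuji"
  asset: string;          // token contract address (e.g. USDC)
  payTo: string;          // seller's receiving address
  amountRequired: bigint; // amount in the token's smallest unit
}

interface SignedPaymentPayload {
  network: string;
  asset: string;
  payTo: string;
  amount: bigint;
  signature: string; // verified cryptographically in a real facilitator
}

function verifyAgainstRequirements(
  payload: SignedPaymentPayload,
  req: PaymentRequirements,
): { ok: boolean; reason?: string } {
  if (payload.network !== req.network) {
    return { ok: false, reason: "wrong network" };
  }
  if (payload.asset.toLowerCase() !== req.asset.toLowerCase()) {
    return { ok: false, reason: "wrong asset" };
  }
  if (payload.payTo.toLowerCase() !== req.payTo.toLowerCase()) {
    return { ok: false, reason: "wrong recipient" };
  }
  if (payload.amount < req.amountRequired) {
    return { ok: false, reason: "insufficient amount" };
  }
  return { ok: true };
}
```

Only when a check like this passes does the facilitator submit the payment for on-chain settlement; a rejected payload is reported back to the server, which keeps answering with `402 Payment Required`.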
### Configuration Create a `.env` file or set environment variables directly: ```bash HOST=0.0.0.0 PORT=8080 RPC_URL_AVALANCHE_FUJI=https://api.avax-test.network/ext/bc/C/rpc RPC_URL_AVALANCHE=https://api.avax.network/ext/bc/C/rpc SIGNER_TYPE=private-key EVM_PRIVATE_KEY=0xdeadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeefdeadbeef RUST_LOG=info ``` **Important:** The supported networks are determined by which RPC URLs you provide: - If you set only `RPC_URL_AVALANCHE_FUJI`, then only Avalanche Fuji testnet is supported - If you set both `RPC_URL_AVALANCHE_FUJI` and `RPC_URL_AVALANCHE`, then both testnet and mainnet are supported - If an RPC URL for a network is missing, that network will not be available for settlement or verification ### Environment Variables Available configuration variables: - `RUST_LOG`: Logging level (e.g., `info`, `debug`, `trace`) - `HOST`: HTTP host to bind to (default: `0.0.0.0`) - `PORT`: HTTP server port (default: `8080`) - `SIGNER_TYPE` (required): Type of signer to use. Only `private-key` is supported now - `EVM_PRIVATE_KEY` (required): Private key in hex for EVM networks - `RPC_URL_AVALANCHE_FUJI`: Ethereum RPC endpoint for Avalanche Fuji testnet - `RPC_URL_AVALANCHE`: Ethereum RPC endpoint for Avalanche C-Chain mainnet ### Docker Deployment Prebuilt Docker images are available at: - **GitHub Container Registry**: `ghcr.io/x402-rs/x402-facilitator` - **Docker Hub**: `ukstv/x402-facilitator` Run the container from Docker Hub: ```bash docker run --env-file .env -p 8080:8080 ukstv/x402-facilitator ``` To run using GitHub Container Registry: ```bash docker run --env-file .env -p 8080:8080 ghcr.io/x402-rs/x402-facilitator ``` Or build a Docker image locally: ```bash docker build -t x402-rs . 
docker run --env-file .env -p 8080:8080 x402-rs ``` The container: - Exposes port `8080` (or a port you configure with `PORT` environment variable) - Starts on `http://localhost:8080` by default - Requires minimal runtime dependencies (based on `debian:bullseye-slim`) ### Point Your Application to Your Facilitator If you are building an x402-powered application, update the Facilitator URL to point to your self-hosted instance. **Using x402-hono:** ```typescript import { Hono } from "hono"; import { serve } from "@hono/node-server"; import { paymentMiddleware } from "x402-hono"; const app = new Hono(); // Configure the payment middleware app.use(paymentMiddleware( "0xYourAddress", // Your receiving wallet address { "/protected-route": { price: "$0.10", network: "avalanche-fuji", config: { description: "Access to premium content", } } }, { url: "http://your-validator.url/", // 👈 Your self-hosted Facilitator } )); // Implement your protected route app.get("/protected-route", (c) => { return c.json({ message: "This content is behind a paywall" }); }); serve({ fetch: app.fetch, port: 3000 }); ``` **Using x402-axum:** ```rust let x402 = X402Middleware::try_from("http://your-validator.url/").unwrap(); // 👈 Your self-hosted Facilitator let usdc = USDCDeployment::by_network(Network::AvalancheFuji); let app = Router::new().route("/paid-content", get(handler).layer( x402.with_price_tag(usdc.amount("0.025").pay_to("0xYourAddress").unwrap()) ), ); ``` ## Observability The facilitator emits OpenTelemetry-compatible traces and metrics, making it easy to integrate with tools like Honeycomb, Prometheus, Grafana, and others. Tracing spans are annotated with HTTP method, status code, URI, latency, and other request and process metadata. 
To enable tracing and metrics export, set the appropriate `OTEL_` environment variables: ```bash # For Honeycomb, for example: # Endpoint URL for sending OpenTelemetry traces and metrics OTEL_EXPORTER_OTLP_ENDPOINT=https://api.honeycomb.io:443 # Comma-separated list of key=value pairs to add as headers OTEL_EXPORTER_OTLP_HEADERS=x-honeycomb-team=your_api_key,x-honeycomb-dataset=x402-rs # Export protocol to use for telemetry. Supported values: `http/protobuf` (default), `grpc` OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf ``` The service automatically detects and initializes exporters if `OTEL_EXPORTER_OTLP_*` variables are provided. ## Avalanche C-Chain Support The facilitator supports Avalanche C-Chain through these environment variables: | Network | Environment Variable | Notes | | ------------------------- | ------------------------- | -------------------------------- | | Avalanche Fuji Testnet | RPC_URL_AVALANCHE_FUJI | Recommended for testing | | Avalanche C-Chain Mainnet | RPC_URL_AVALANCHE | Production mainnet | **Tip:** For initial development and testing, start with Avalanche Fuji testnet only. ## Development ### Prerequisites - Rust 1.80+ - `cargo` and a working toolchain ### Build Locally ```bash cargo build ``` ### Run ```bash cargo run ``` ## Use Cases ### Payment-Gated APIs Protect your API endpoints with automatic on-chain payment verification on Avalanche. ### Micropayment Services Enable micropayments for content, data, or compute resources with Avalanche's low fees. ### AI Agent Monetization Allow AI agents to charge for services on a pay-per-use basis using Avalanche's fast finality. ### Rust-Native dApps Build Rust-based decentralized applications with built-in payment capabilities on Avalanche. 
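The network-selection rule from the Configuration section (a network is available only when its RPC URL environment variable is set) can be sketched as a small pure function. The function and map names below are illustrative, not taken from the x402-rs codebase.

```typescript
// Sketch of the Configuration rule: supported networks are determined
// solely by which RPC URL environment variables are provided.
// Names are illustrative, not x402-rs's actual code.
const NETWORK_RPC_VARS: Record<string, string> = {
  "avalanche-fuji": "RPC_URL_AVALANCHE_FUJI", // Fuji testnet
  "avalanche": "RPC_URL_AVALANCHE",           // C-Chain mainnet
};

function supportedNetworks(env: Record<string, string | undefined>): string[] {
  return Object.entries(NETWORK_RPC_VARS)
    .filter(([, rpcVar]) => Boolean(env[rpcVar]))
    .map(([network]) => network);
}
```

Under this rule, setting only `RPC_URL_AVALANCHE_FUJI` yields a facilitator that can verify and settle on Fuji but treats mainnet as unavailable.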
## Documentation - [x402-rs GitHub Repository](https://github.com/x402-rs/x402-rs) - [x402 Protocol Documentation](https://x402.gitbook.io/x402) - [x402 Overview by Coinbase](https://www.coinbase.com/cloud/discover/protocol-guides/guide-to-x402) # Xangle (/integrations/xangle) --- title: Xangle category: Development Infrastructure available: ["C-Chain"] description: Xangle provides blockchain infrastructure solutions including node services, custom block explorers, and the Xangle Hub, helping projects deploy, monitor, and scale on Avalanche and beyond. Analytics and market insights are also available. logo: /images/xangle.jpg developer: Xangle website: https://infra.xangle.io/hub documentation: https://business.xangle.io/ --- ## Overview Xangle is a blockchain infrastructure provider offering high-performance node services, customizable block explorers, and the Xangle Hub -- a unified platform for DeFi tools, portfolio management, and on-chain analytics. The infrastructure supports deployment, monitoring, and scaling of blockchain applications. ## Features - **Node Services**: Reliable, scalable node infrastructure for developers and enterprises, supporting secure access to blockchain networks. - **Custom Block Explorers**: Deploy branded, feature-rich explorers for your network or application, with support for transactions, tokens, NFTs, analytics, and more. For example, MapleStory Universe uses a dedicated Xangle-powered explorer for its Henesys L1 testnet ([see example](https://msu-testnet-explorer.xangle.io/)). - **Xangle Hub**: An all-in-one platform ([see Xangle Hub](https://infra.xangle.io/hub)) that brings together DeFi services (swap, bridge, faucet), portfolio management, NFT minting, dashboards, and real-time on-chain analytics. - **Developer Tools**: APIs and dashboards for real-time monitoring, data access, and operational insights. - **Security & Reliability**: Enterprise-grade uptime, monitoring, and support. 
- **Analytics & Market Insights**: In addition to infrastructure, Xangle provides market analytics, research, and reporting tools for blockchain projects. ## Getting Started 1. **Explore Infrastructure Solutions**: Visit [Xangle](https://xangle.io/en) to learn about node services, explorer deployments, and developer tools. 2. **Try Xangle Hub**: Experience the unified DeFi and analytics platform at [Xangle Hub](https://infra.xangle.io/hub). 3. **Request Custom Explorer**: Contact Xangle to deploy a tailored explorer for your blockchain or dApp. 4. **Integrate Node Services**: Access secure, scalable nodes for your application or business. 5. **Review Documentation**: Access integration guides and API references at the [official documentation](https://business.xangle.io/). ## Documentation For more details, visit the [Xangle Business Documentation](https://business.xangle.io/). ## Use Cases - **Network & dApp Infrastructure**: Deploy and manage nodes, explorers, and ecosystem tools for your blockchain or application. - **Ecosystem Hubs**: Launch a unified platform for your community with DeFi tools, analytics, and portfolio management (see [Xangle Hub](https://infra.xangle.io/hub)). - **Custom Block Explorers**: Provide users with transparent, branded access to on-chain data (see [MapleStory Universe Explorer](https://msu-testnet-explorer.xangle.io/)). - **Operational Monitoring**: Use dashboards and APIs for real-time network and application insights. - **Analytics & Reporting**: Use Xangle's analytics and research tools for market intelligence and compliance. # Zeeve (/integrations/zeeve) --- title: Zeeve category: Blockchain as a Service available: ["All Avalanche L1s"] description: Build and Deploy Avalanche L1s with plug-and-play dev tools and Zeeve infrastructure platform. 
logo: /images/zeeve.jpeg developer: Zeeve website: https://www.zeeve.io/appchains/avalanche-subnets/ documentation: https://docs.zeeve.io/ --- ## Overview Zeeve extends its enterprise-grade infrastructure management platform to Avalanche L1s, enabling builders to launch their own blockchain. From deployments to 24x7 monitoring, Zeeve handles the infrastructure so you can focus on building your business. ## Features - **Plug-and-play tools**: Pre-built dev tools for Avalanche L1 deployment. - **Managed infrastructure**: Zeeve’s platform handles monitoring and maintenance. - **Scalability**: Smooth scaling as your blockchain network grows. - **Security**: Built-in security features and architecture. - **High availability**: 24/7 uptime and monitoring services. ## Getting Started 1. Sign up for a Zeeve account [here](https://www.zeeve.io/appchains/avalanche-subnets/). 2. Explore the Avalanche L1s deployment guide provided in the Zeeve documentation. 3. Use Zeeve’s platform to set up, deploy, and monitor your Avalanche L1 blockchain with ease. ## Documentation For detailed information and step-by-step guides, visit the official [Zeeve Documentation](https://docs.zeeve.io/). # Zellic (/integrations/zellic) --- title: Zellic category: Security Audits available: ["C-Chain", "All Avalanche L1s"] description: "Zellic provides highly recommended security audits with deep expertise in VM audits and advanced cybersecurity techniques." logo: /images/zellic.png developer: Zellic website: https://zellic.io/ --- ## Overview Zellic is a security firm specializing in blockchain security audits with particular expertise in Virtual Machine (VM) audits. Their team combines deep technical knowledge with practical cybersecurity experience to deliver security assessments for Avalanche projects. Beyond static code audits, Zellic performs dynamic analysis and in-depth security reviews of complex systems. 
## Features - **VM Audits**: Specialized expertise in auditing virtual machine implementations. - **Smart Contract Security**: Reviews of contract code and logic. - **Deep Cybersecurity Background**: Team with advanced offensive security expertise. - **Protocol Analysis**: Thorough assessment of protocol designs and implementations. - **Advanced Vulnerability Research**: Identification of novel attack vectors and security issues. - **Custom Security Solutions**: Tailored security assessments for specific project needs. ## Getting Started 1. **Initial Contact**: Reach out to Zellic at [oliver@zellic.io](mailto:oliver@zellic.io) to discuss your project requirements. 2. **Security Assessment Planning**: Define the scope, objectives, and timeline for your audit. 3. **Audit Process**: - In-depth code and architecture review - Specialized VM auditing when applicable - Dynamic analysis and penetration testing - Comprehensive vulnerability assessment 4. **Findings and Recommendations**: Detailed report outlining security issues and remediation steps. 5. **Remediation Support**: Guidance on addressing identified vulnerabilities. ## Use Cases - **Custom VM Implementations**: Specialized assessment of virtual machine code. - **Novel Blockchain Implementations**: Security review of innovative blockchain designs. - **Complex Smart Contract Systems**: Thorough analysis of intricate smart contract interactions. - **Cross-Chain Applications**: Security assessment of applications spanning multiple blockchains. - **High-Security Requirements**: Projects requiring advanced security validation beyond standard audits. # ZeroDev (/integrations/zerodev) --- title: ZeroDev category: Wallets and Account Abstraction available: ["C-Chain", "All Avalanche L1s"] description: ZeroDev is an account abstraction toolkit that enables developers to build applications with smart accounts, gasless transactions, and session keys. 
logo: /images/zerodev.png developer: ZeroDev website: https://zerodev.app/ documentation: https://docs.zerodev.app/ --- ## Overview ZeroDev is an account abstraction toolkit built on ERC-4337 for building user-friendly blockchain applications. It provides everything needed to integrate smart accounts into dApps: gasless transactions, batch transactions, and session keys for simplified authentication flows. ## Features - **Smart Accounts**: Implement fully ERC-4337 compliant smart contract accounts that enhance security and user experience. - **Gasless Transactions**: Enable sponsorship of gas fees for users, eliminating the need for them to hold native tokens. - **Bundled Transactions**: Combine multiple transactions into one for better UX and lower overall gas costs. - **Session Keys**: Allow users to authorize specific actions for a limited time without needing to sign every transaction. - **Social Login**: Integrate with email, social media, and passkey authentication for seamless user onboarding. - **Multi-chain Support**: Deploy and manage smart accounts across multiple EVM-compatible blockchains. - **Modular Architecture**: Customize account implementation based on specific application needs. ## Getting Started 1. **Install ZeroDev SDK**: Add the SDK to your project using npm or yarn: ```bash npm install @zerodev/sdk ``` 2. **Register for API Keys**: Create an account on the [ZeroDev Dashboard](https://dashboard.zerodev.app/) to get your project ID. 3. **Initialize the SDK**: Set up the client in your application code using that project ID: ```javascript import { createEcdsaKernelAccountClient } from "@zerodev/sdk" const client = await createEcdsaKernelAccountClient({ projectId: "YOUR_PROJECT_ID", owner: yourWalletClient, }) ``` 4. **Implement Gas Sponsorship**: Follow the [paymaster documentation](https://docs.zerodev.app/sdk/core-api/sponsor-gas) to set up gas sponsorship for your users. 5. 
**Deploy and Test**: Test your implementation in a development environment before going live. ## Documentation For detailed guides, API references, and examples, visit the [ZeroDev Documentation](https://docs.zerodev.app/). ## Use Cases **DeFi Applications**: Eliminate gas fees and simplify complex transaction sequences through batching. **Gaming and NFT Platforms**: Enable gasless minting and trading of NFTs, making blockchain gaming accessible to mainstream audiences. **Web3 Social Applications**: Implement social login and session keys to create smooth, Web2-like user experiences while maintaining the benefits of blockchain. **Enterprise Solutions**: Build corporate wallet solutions with customizable permissions, transaction limits, and multi-signature requirements. **Mobile dApps**: Create mobile-friendly applications that don't require users to manually sign every transaction. ## Pricing ZeroDev offers a tiered pricing model: - **Free Tier**: Up to 100 monthly active users with basic features - **Growth**: Starting at $99/month for up to 1,000 monthly active users - **Scale**: Custom pricing for enterprises with higher volume needs For the most current pricing information, visit the [ZeroDev pricing page](https://zerodev.app/pricing). # Zero Hash (/integrations/zerohash) --- title: Zero Hash category: Fiat On-Ramp available: ["C-Chain"] description: Zero Hash provides enterprise-grade infrastructure for crypto trading, tokenization, payments, and on-ramp/off-ramp solutions across 200+ countries with full regulatory compliance. logo: /images/zerohash.jpg developer: Zero Hash website: https://zerohash.com/ documentation: https://docs.zerohash.com/ --- ## Overview Zero Hash is a B2B infrastructure provider that enables businesses to integrate cryptocurrency, stablecoin, and tokenization capabilities into their platforms. 
With over $50 billion in transaction volume settled and 5+ million end customers across 200+ countries, it provides regulated infrastructure for crypto trading, payments, tokenization, and fiat on/off-ramp solutions. Zero Hash supports Avalanche along with 80+ other digital assets, offering services including instant account funding, buy/sell, staking, qualified custody, cross-border payments, and tokenization. Licensed under MiCAR across Europe and regulated in the United States, Zero Hash is trusted by banks, brokerages, payment service providers, and fintech platforms. ## Features - **Fiat On-Ramp and Off-Ramp**: Conversion between fiat currencies and cryptocurrencies including AVAX and Avalanche-based assets. - **Global Payment Support**: Accept payments and process payouts in 200+ countries with multiple payment methods. - **Crypto Trading**: Embeddable out-of-the-box crypto trading functionality for buying and selling digital assets. - **Instant Account Funding**: Enable users to fund accounts instantly anytime, anywhere with multiple funding sources. - **Staking Infrastructure**: Offer secure token staking services to your users with enterprise-grade custody. - **Qualified Custody**: Safeguard crypto and tokenized assets with institutional-grade custody solutions. - **Cross-Border Payments**: Instant international remittances and payments 24/7/365 using crypto rails. - **Tokenization Engine**: Create and manage tokenized real-world assets on blockchain infrastructure. - **Tokenization Payment Rails**: Unlock programmable and instant funding through tokenized payment infrastructure. - **80+ Assets Supported**: Support for major cryptocurrencies, stablecoins, and tokenized assets including AVAX. - **Full Regulatory Compliance**: Licensed under MiCAR (Europe), FinCEN-registered MSB, and regulated money transmitter in 51 U.S. jurisdictions. - **99.9% Uptime**: Enterprise-grade infrastructure with industry-leading reliability and availability. 
- **White-Label Solutions**: Fully customizable integration that matches your brand experience. - **APIs**: Well-documented REST APIs for all services with SDKs for major languages. - **Webhooks**: Real-time notifications for transaction events and status updates. ## Getting Started 1. **Contact Zero Hash**: Reach out to Zero Hash's partnership team to discuss your business requirements and use cases. 2. **Complete Onboarding**: Go through Zero Hash's enterprise onboarding process including business verification and compliance review. 3. **Choose Your Services**: Select which Zero Hash services you need: - On/off-ramp for fiat conversion - Trading infrastructure for buy/sell functionality - Custody solutions for asset safeguarding - Staking services for yield generation - Tokenization for creating digital assets - Payment rails for instant settlements 4. **Obtain API Credentials**: Receive your API keys and access to Zero Hash's developer portal and documentation. 5. **Integration Development**: Build your integration using Zero Hash's comprehensive API documentation and SDKs. 6. **Configure Services**: Set up your payment methods, supported assets, trading pairs, and custody arrangements. 7. **Test in Sandbox**: Thoroughly test your integration in Zero Hash's sandbox environment before production. 8. **Compliance Setup**: Complete any remaining compliance requirements for your specific use case and target markets. 9. **Go Live**: Launch your integration in production and start offering crypto services to your users. ## Avalanche Support Zero Hash supports AVAX and other assets on the Avalanche C-Chain as part of their 80+ asset offering. This enables businesses to offer their users the ability to buy, sell, trade, stake, and custody Avalanche-based cryptocurrencies through a single, fully compliant infrastructure provider. 
## Documentation For more details, visit: - [Zero Hash Documentation](https://docs.zerohash.com/) - [API Reference](https://docs.zerohash.com/reference) - [Integration Guides](https://docs.zerohash.com/docs) - [Developer Portal](https://zerohash.com/developers) ## Use Cases on Avalanche Zero Hash infrastructure can power various Avalanche applications and services: **Cryptocurrency Brokerages**: Launch complete crypto brokerage services with trading, custody, and on-ramp for AVAX and Avalanche tokens. **Banking Platforms**: Enable banks to offer crypto services including AVAX trading, custody, and payments to their customers. **Payment Service Providers**: Integrate crypto payment capabilities using Avalanche's fast, low-cost infrastructure. **Fintech Applications**: Add AVAX and stablecoin functionality to your fintech platform with full regulatory compliance. **Tokenization Platforms**: Build tokenization solutions on Avalanche with integrated payment rails and custody. **Wealth Management**: Offer AVAX and Avalanche-based assets to wealth management clients with qualified custody. **Payroll Platforms**: Enable global payroll payments using AVAX and stablecoins on Avalanche for instant settlement. **Remittance Services**: Facilitate cross-border money transfers using Avalanche's fast and low-cost infrastructure. ## Enterprise Solutions Zero Hash provides tailored solutions for different industries: **Banks**: Complete crypto infrastructure across wealth management, payments, and banking operations with full regulatory compliance. **Brokerages**: End-to-end crypto stack including trading, custody, instant account funding, and tokenization tools. **Payment Service Providers**: Enable crypto and stablecoin use cases with licensed infrastructure. **Tokenization Platforms**: Tokenization infrastructure with integrated payment rails for asset issuance and trading. **Payroll Services**: Global payroll solutions with instant cross-border payments using crypto rails. 
## Pricing Zero Hash offers enterprise pricing based on your specific needs: - **Custom Pricing**: Tailored pricing based on transaction volume, services used, and business requirements - **Transaction-Based Fees**: Competitive fees on trading, conversions, and on-ramp/off-ramp transactions - **Custody Fees**: Transparent custody fees for asset safeguarding services - **Volume Discounts**: Reduced pricing for high-volume customers - **Enterprise Packages**: Comprehensive solutions with bundled services for large organizations Contact Zero Hash's sales team for detailed pricing information and custom enterprise arrangements. ## Compliance and Licensing Zero Hash maintains regulatory compliance globally: - **MiCAR Licensed**: Fully licensed under EU's Markets in Crypto-Assets Regulation across Europe - **U.S. Money Transmitter**: Licensed to operate in 51 U.S. jurisdictions - **FinCEN Registered**: Registered Money Service Business with Financial Crimes Enforcement Network - **New York BitLicense**: Licensed by NY Department of Financial Services for virtual currency business - **Canadian MSB**: Registered as Money Service Business with FINTRAC in Canada - **Regular Audits**: Ongoing compliance audits and financial controls - **Institutional-Grade Custody**: SOC 2 Type II certified custody infrastructure ## Why Choose Zero Hash **Proven at Scale**: Over $50 billion in transaction volume with 5+ million end customers across 200+ countries. **Single Provider**: Covers trading, custody, payments, tokenization, and on-ramps. **Full Regulatory Coverage**: Licensed and compliant across major jurisdictions including EU, U.S., and Canada. **Enterprise-Grade Reliability**: 99.9% uptime with institutional-quality infrastructure and security. **Fast Time to Market**: Pre-built infrastructure enables rapid deployment compared to building in-house. **Trusted by Leaders**: Powers crypto services for major banks, brokerages, fintechs, and payment platforms. 
**Multi-Chain Support**: Support for 80+ assets across multiple blockchains including Avalanche. # Zk.Me (/integrations/zkme) --- title: Zk.Me category: KYC / Identity Verification available: ["C-Chain"] description: "Zk.Me provides zero-knowledge proof (ZKP) based KYC solutions for privacy-preserving identity verification on blockchain platforms." logo: /images/zkme.png developer: Zk.Me website: https://zk.me/ documentation: https://zk.me/ --- ## Overview Zk.Me is a zero-knowledge proof (ZKP) based identity verification platform for privacy-preserving KYC on blockchain applications. Users can prove identity credentials without revealing sensitive personal information, balancing regulatory compliance with privacy. ## Features - **Zero-Knowledge KYC**: Verify compliance without exposing personal data. - **Privacy-Preserving**: Advanced cryptography maintains user privacy during verification. - **Reusable Credentials**: Verify once and reuse credentials across multiple platforms. - **On-Chain Verification**: Smart contract compatible ZK proofs for automated compliance. - **Multi-Level Verification**: Different verification tiers for varying compliance requirements. - **Decentralized Infrastructure**: Built on decentralized systems for enhanced security. - **Regulatory Compliance**: Meet KYC requirements while maintaining privacy standards. ## Documentation For more information, visit the [Zk.Me website](https://zk.me/). ## Use Cases - **Private KYC**: Enable KYC compliance without compromising user privacy. - **Selective Disclosure**: Prove specific attributes without full identity exposure. - **DeFi Compliance**: Implement privacy-preserving compliance for DeFi protocols. - **Smart Contract Integration**: Automated verification using ZK proofs in contracts. 
# Zokyo (/integrations/zokyo) --- title: Zokyo category: Audit Firms available: ["C-Chain"] description: Zokyo provides blockchain security services including smart contract audits, penetration testing, and continuous security monitoring for protocols across multiple ecosystems. logo: /images/zokyo.jpg developer: Zokyo website: https://www.zokyo.io/ documentation: https://www.zokyo.io/services --- ## Overview Zokyo is a full-service blockchain security firm offering smart contract audits, penetration testing, and ongoing security monitoring. Their team of security researchers and ethical hackers helps projects across multiple blockchain ecosystems secure protocols from development through production. Zokyo goes beyond one-time audits to include continuous monitoring, incident response, and security consulting. Their expertise spans DeFi, NFTs, gaming, infrastructure, and enterprise blockchain applications. ## Services - **Smart Contract Audits**: Security audits of smart contracts across multiple languages and chains. - **Penetration Testing**: Adversarial testing of protocols and infrastructure. - **Continuous Monitoring**: Ongoing security surveillance post-deployment. - **Incident Response**: Emergency support for security incidents and exploits. - **Security Consulting**: Advisory services for secure protocol design and architecture. - **Code Review**: Detailed examination of implementation and logic. - **Vulnerability Assessment**: Systematic identification of security weaknesses. - **Bug Bounty Management**: Management and coordination of bug bounty programs. - **Security Training**: Educational programs for development teams. - **Compliance Review**: Assessment of regulatory and compliance requirements. ## Security Approach Zokyo provides end-to-end security: **Pre-Launch**: Design review, architecture assessment, and smart contract audits. **Launch**: Final security verification and deployment support. 
**Post-Launch**: Continuous monitoring, incident response, and security updates. **Ongoing**: Regular security check-ins, re-audits after upgrades, and consulting. This covers security at every stage. ## Audit Methodology Audit process: 1. **Discovery**: Understand protocol design, architecture, and business logic 2. **Threat Modeling**: Identify potential attack vectors and risk areas 3. **Automated Testing**: Run comprehensive security analysis tools 4. **Manual Review**: Expert line-by-line code examination 5. **Penetration Testing**: Adversarial testing of the protocol 6. **Logic Verification**: Validate business logic and economic mechanisms 7. **Documentation**: Compile detailed findings with severity ratings 8. **Presentation**: Review findings with development team 9. **Remediation Support**: Assist during vulnerability fixes 10. **Verification**: Re-audit to confirm all issues resolved ## Penetration Testing Beyond audits, Zokyo offers penetration testing: **Infrastructure Testing**: Test servers, databases, and backend systems. **API Testing**: Evaluate API security and authentication. **Frontend Testing**: Assess web application security. **Social Engineering**: Test human elements of security. **Network Security**: Evaluate network architecture and defenses. This testing identifies vulnerabilities that standard audits might miss. 
## Avalanche Expertise Zokyo has experience securing protocols on Avalanche including: - Avalanche C-Chain smart contracts - Subnet-specific implementations - Cross-chain bridge security - DeFi protocols on Avalanche - NFT and gaming projects - Infrastructure and tooling ## Access Through Areta Marketplace Avalanche projects can engage Zokyo through the [Areta Audit Marketplace](https://areta.market/avalanche): - **Quick Connection**: Submit request and receive quotes within 48 hours - **Multiple Proposals**: Compare Zokyo with other leading firms - **Clear Pricing**: Transparent costs without hidden fees - **Subsidy Access**: Eligible for up to $10k audit cashback - **Streamlined Process**: Faster than traditional direct outreach - **Avalanche-Focused**: Marketplace built for Avalanche ecosystem ## Audit Focus Areas **DeFi Protocols**: All DeFi categories including lending, DEXs, derivatives, and yield strategies. **NFT & Gaming**: NFT marketplaces, game contracts, and play-to-earn platforms. **Infrastructure**: Bridges, oracles, layer 2 solutions, and core infrastructure. **Enterprise Blockchain**: Private and permissioned blockchain applications. **Token Economics**: Token contracts, vesting, and distribution systems. **Governance**: DAO governance contracts and voting mechanisms. ## Continuous Monitoring Zokyo provides ongoing security: **Transaction Monitoring**: Real-time monitoring of on-chain activity. **Anomaly Detection**: Automated alerts for suspicious transactions. **Threat Intelligence**: Proactive identification of emerging threats. **Security Updates**: Regular security briefings and updates. **Incident Response**: Rapid response to detected security issues. ## Why Choose Zokyo **Full-Service Security**: Complete security lifecycle from audit to ongoing monitoring. **Penetration Testing**: Goes beyond audits to include adversarial testing. **Continuous Protection**: Ongoing monitoring ensures lasting security. 
**Experienced Team**: Security researchers and ethical hackers with extensive experience. **Practical Approach**: Actionable recommendations and remediation support. **Multi-Chain Expertise**: Experience across multiple blockchain ecosystems. **Responsive Support**: Available for urgent security needs. ## Bug Bounty Programs Zokyo helps manage bug bounty programs: - Program design and structure - Platform selection and setup - Researcher outreach and management - Submission triage and validation - Payout coordination - Security researcher relations This adds a security layer through community research. ## Pricing Zokyo offers flexible pricing: - Tiered pricing based on project complexity - Packages including audit + monitoring - Subscription options for ongoing services - Custom enterprise engagements Contact via Areta marketplace or directly for proposals. ## Getting Started 1. **Via Areta Marketplace** (Recommended for Avalanche): - Visit [areta.market/avalanche](https://areta.market/avalanche) - Submit audit request with project details - Receive competitive quote from Zokyo - Access subsidies and streamlined process 2. 
**Direct Contact**: - Visit [zokyo.io](https://www.zokyo.io/) - Submit security inquiry - Discuss scope and requirements - Receive detailed proposal ## Deliverables Zokyo provides: - **Audit Report**: Detailed findings with severity classifications - **Executive Summary**: High-level overview for stakeholders - **Penetration Test Report**: Results from adversarial testing - **Remediation Guidance**: Specific recommendations for fixes - **Re-Audit Report**: Verification of all remediations - **Monitoring Setup**: Configuration of continuous monitoring (if applicable) - **Security Badge**: Post-audit security badge ## Client Support Zokyo provides ongoing support: - Dedicated security team contacts - Emergency incident response - Regular security briefings - Access to security resources and tools - Community and educational content # ACP-226 Dynamic Minimum Block Times for Sub-Second Blocks (/blog/226-min-block-times) # ACP-267: Primary Network Uptime Requirement Increases to 90% (/blog/acp-267-validator-uptime-requirement) # Scaling Web3 Distribution to 169M+ Telco Users with Avalanche (/blog/binary-holdings-avalanche-l1) # Native Safe Multisig Support for Avalanche L1s in Builder Console (/blog/builder-console-safe-support) # Cortina: X-Chain Linearization (/blog/cortina-x-chain-linearization) # DelegateCall Incident Overview (/blog/delegatecall-incident) # Deploy a DApp on the C-Chain with Foundry (/blog/deploy-a-dapp-on-c-chain-with-foundry) # Durango: Avalanche Warp Messaging Comes to the EVM (/blog/durango-avalanche-warp-messaging) # What to Expect After the Etna Upgrade (/blog/etna-changes) # Etna: Enhancing the Sovereignty of Avalanche L1 Networks (/blog/etna-enhancing-sovereignty-avalanche-l1s) # Motivation behind Avalanche9000 (/blog/etna-upgrade-motivation) # Avalanche C-Chain Throughput Increases as Validators Signal Higher Gas Targets (/blog/gas-target-increase) # How to Upgrade AvalancheGo to Granite (/blog/granite-installer) # Avalanche Granite 
Upgrade - Enhancing ICM, Unlocking Biometric Use Cases, and Enabling Dynamic Block Times (/blog/granite-upgrade) # Install Avalanche CLI (/blog/install-avalanche-cli) # Economics of L1 Blockchains: Deploying on Public Permissionless Chains vs. Your Custom L1 Blockchain (/blog/l1-economics) # How Do L1 Validator Fees Work? (/blog/l1-validator-fee) # Avalanche Octane: Optimizing C-Chain Fees and Gas Target (/blog/octane-optimizing-c-chain-gas-fees) # Playing in the P2P Network: Investigating Anomalies in Avalanche's Transaction Gossip (/blog/p2p-network-anomalies) # Playing Defense: Spam Prevention on Avalanche C-Chain (/blog/spam-prevention-blog) # Validator Rewards and Staking Mechanisms on Avalanche L1s (/blog/staking-and-validator-management) # Subnet & L1 Validators, What's the Difference? (/blog/subnet-vs-l1-validators) # Create a Telegram Mini-App using ThirdWeb SDK (/blog/telegram-miniapps-thirdweb) # Use Privy on Avalanche L1 (/blog/use-privy-on-l1) # Inside the Invisible Web3 Stack: Wallets, Gas Abstraction, and Onchain Settlement (/blog/web3-payment-stack-part-2) # The New Payment Stack: How Web3 Rails Are Powering Real-World Transactions (/blog/web3-payment-stack) # What is a blockchain? (/blog/what-is-a-blockchain) # Why Test Networks Like Fuji Exist — and Who They're For (/blog/what-is-fuji-testnet)

snowman/transitive.go:392 consensus starting {"lastAcceptedBlock": "2ofmPJuWZbdroCPEMv6aHGvZ45oa8SBp2reEm9gNxvFjnfSGFP"} [09-09|17:01:51.628] INFO snowman/transitive.go:334 consensus starting {"lenFrontier": 1} ``` ### Check Bootstrapping Progress To check whether a given chain is done bootstrapping, in another terminal window call [`info.isBootstrapped`](/docs/rpcs/other/info-rpc#infoisbootstrapped) by copying and pasting the following command: ```bash curl -X POST --data '{ "jsonrpc":"2.0", "id":1, "method":"info.isBootstrapped", "params": { "chain":"X" } }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info ``` If this returns `true`, the chain is bootstrapped; otherwise, it returns `false`. If you make other API calls to a chain that is not done bootstrapping, it will return `API call rejected because chain is not done bootstrapping`. If you are still experiencing issues, please contact us on [Discord](https://chat.avalabs.org/). The three chains bootstrap in the following order: P-Chain, X-Chain, C-Chain. Learn more about bootstrapping [here](/docs/nodes/maintain/bootstrapping). ## RPC When finished bootstrapping, the P-, X-, and C-Chain RPC endpoints will be: ```bash localhost:9650/ext/bc/P localhost:9650/ext/bc/X localhost:9650/ext/bc/C/rpc ``` if run locally, or ```bash XXX.XX.XX.XXX:9650/ext/bc/P XXX.XX.XX.XXX:9650/ext/bc/X XXX.XX.XX.XXX:9650/ext/bc/C/rpc ``` if run on a cloud provider, where "XXX.XX.XX.XXX" is replaced with the public IP of your instance. For more information on the requests available at these endpoints, see the [AvalancheGo API Reference](/docs/rpcs/p-chain) documentation. ## Going Further Your Avalanche node will perform consensus on its own, but it is not yet a validator on the network. This means that the rest of the network will not query your node when sampling the network during consensus. 
If you want to add your node as a validator, check out [Add a Validator](/docs/primary-network/validate/node-validator) to take it a step further. Also check out the [Maintain](/docs/nodes/maintain/bootstrapping) section to learn how to maintain and customize your node to fit your needs. To track an Avalanche L1 with your node, head to the [Avalanche L1 Node](/docs/nodes/run-a-node/avalanche-l1-nodes) tutorial. # Run AvalancheGo with Docker (/docs/nodes/run-a-node/using-docker) --- title: Run AvalancheGo with Docker description: Learn how to run an Avalanche node using the official AvalancheGo Docker image. --- For an easier way to set up and run a node, try the [Avalanche Console Node Setup Tool](/console/primary-network/node-setup). ## Prerequisites - [Docker](https://docs.docker.com/get-docker/) installed and running Verify your Docker installation: ```bash docker --version ``` ## Quick Start Pull and run the latest AvalancheGo release: ```bash docker run -d \ --name avalanchego \ -p 9650:9650 \ -p 9651:9651 \ -v ~/.avalanchego:/root/.avalanchego \ avaplatform/avalanchego:v1.14.1 ``` This will start an AvalancheGo node and begin syncing with the Avalanche network. Replace `v1.14.1` with the latest release version from the [AvalancheGo releases page](https://github.com/ava-labs/avalanchego/releases). ## What This Command Does | Flag | Purpose | |------|---------| | `-d` | Runs the container in the background (detached mode) | | `--name avalanchego` | Names the container for easy reference | | `-p 9650:9650` | Exposes the HTTP API port | | `-p 9651:9651` | Exposes the P2P staking port | | `-v ~/.avalanchego:/root/.avalanchego` | Persists chain data and node configuration to your host machine | The volume mount (`-v`) is important. Without it, chain data is lost when the container is removed and the node will need to re-sync from scratch. 
## Check Node Status Once the container is running, check that the node is bootstrapping: ```bash curl -X POST --data '{ "jsonrpc":"2.0", "id":1, "method":"info.isBootstrapped", "params": { "chain": "X" } }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info ``` The response will show `"isBootstrapped": true` once the node has finished syncing. ## View Logs ```bash docker logs -f avalanchego ``` ## Stop and Restart ```bash docker stop avalanchego docker start avalanchego ``` ## Upgrade to a New Version To upgrade AvalancheGo, stop the current container, remove it, and run the new version: ```bash docker stop avalanchego docker rm avalanchego docker run -d \ --name avalanchego \ -p 9650:9650 \ -p 9651:9651 \ -v ~/.avalanchego:/root/.avalanchego \ avaplatform/avalanchego:<new-version> ``` Replace `<new-version>` with the release tag you are upgrading to. Your chain data is preserved in `~/.avalanchego` on the host, so the node will resume from where it left off. ## Pass Configuration Flags You can pass any [AvalancheGo configuration flags](/docs/nodes/configure/avalanchego-config-flags) directly after the image name: ```bash docker run -d \ --name avalanchego \ -p 9650:9650 \ -p 9651:9651 \ -v ~/.avalanchego:/root/.avalanchego \ avaplatform/avalanchego:v1.14.1 \ --http-host=0.0.0.0 \ --public-ip-resolution-service=opendns ``` ## Connect to Fuji Testnet To run a node on the Fuji testnet instead of Mainnet: ```bash docker run -d \ --name avalanchego-fuji \ -p 9650:9650 \ -p 9651:9651 \ -v ~/.avalanchego-fuji:/root/.avalanchego \ avaplatform/avalanchego:v1.14.1 \ --network-id=fuji ``` ## Port Reference | Port | Protocol | Purpose | |------|----------|---------| | `9650` | TCP | HTTP API (RPC calls) | | `9651` | TCP | P2P networking and staking | Ensure these ports are open in your firewall. Port `9651` must be reachable from the internet for your node to participate in the network. 
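The `info.isBootstrapped` check in "Check Node Status" above queries one chain at a time. Since the Primary Network's chains bootstrap in the order P, X, C, a small script can poll all three in one pass. The sketch below is illustrative, not part of AvalancheGo: the `NODE_URL` variable and the `parse_bootstrapped` helper are assumptions, while the endpoint and JSON-RPC payload are the ones shown above.

```shell
#!/usr/bin/env bash
# Sketch: poll info.isBootstrapped for each Primary Network chain.
# NODE_URL and parse_bootstrapped are illustrative helpers, not AvalancheGo APIs.
NODE_URL="${NODE_URL:-127.0.0.1:9650}"

# Extract the boolean from an info.isBootstrapped JSON response;
# prints "true" or "false" (an empty or failed response counts as false).
parse_bootstrapped() {
  if echo "$1" | grep -Eq '"isBootstrapped"[[:space:]]*:[[:space:]]*true'; then
    echo true
  else
    echo false
  fi
}

check_chain() {
  local chain="$1" resp
  resp=$(curl -s -m 5 -X POST \
    -H 'content-type:application/json;' \
    --data "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"info.isBootstrapped\",\"params\":{\"chain\":\"$chain\"}}" \
    "$NODE_URL/ext/info")
  echo "$chain: $(parse_bootstrapped "$resp")"
}

# The chains bootstrap in this order: P, then X, then C.
for chain in P X C; do
  check_chain "$chain"
done
```

If the node is unreachable, each chain simply reports `false`, so the script is safe to run while the node is still starting up.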
# Subnet-EVM Configs (/docs/rpcs/subnet-evm/config) --- title: "Subnet-EVM Configs" description: "This page describes the configuration options available for the Subnet-EVM." edit_url: https://github.com/ava-labs/avalanchego/edit/master/graft/subnet-evm/plugin/evm/config/config.md --- # Subnet-EVM Configuration > **Note**: These are the configuration options available in the Subnet-EVM codebase. To set these values, you need to create a configuration file at `~/.avalanchego/configs/chains/<blockchainID>/config.json`. > > For the AvalancheGo node configuration options, see the AvalancheGo Configuration page. This document describes all configuration options available for Subnet-EVM. ## Example Configuration ```json { "eth-apis": ["eth", "eth-filter", "net", "web3"], "pruning-enabled": true, "commit-interval": 4096, "trie-clean-cache": 512, "trie-dirty-cache": 512, "snapshot-cache": 256, "rpc-gas-cap": 50000000, "log-level": "info", "metrics-expensive-enabled": true, "continuous-profiler-dir": "./profiles", "state-sync-enabled": false, "accepted-cache-size": 32 } ``` ## Configuration Format Configuration is provided as a JSON object. All fields are optional unless otherwise specified. 
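Per-chain configs live under `~/.avalanchego/configs/chains/<blockchainID>/config.json`, as noted above. The following sketch writes a minimal config there and sanity-checks it before the node starts; `myBlockchainID` and the `CONFIG_ROOT` override are placeholders for illustration, and the JSON keys are a subset of the example configuration above.

```shell
# Sketch: write a Subnet-EVM chain config where AvalancheGo looks for it,
# then sanity-check that it parses as JSON before starting the node.
# BLOCKCHAIN_ID is a placeholder: substitute your chain's real blockchain ID.
BLOCKCHAIN_ID="${BLOCKCHAIN_ID:-myBlockchainID}"
CONFIG_ROOT="${CONFIG_ROOT:-$HOME/.avalanchego/configs/chains}"
CFG_DIR="$CONFIG_ROOT/$BLOCKCHAIN_ID"

mkdir -p "$CFG_DIR"
cat > "$CFG_DIR/config.json" <<'EOF'
{
  "eth-apis": ["eth", "eth-filter", "net", "web3"],
  "pruning-enabled": true,
  "commit-interval": 4096,
  "rpc-gas-cap": 50000000,
  "log-level": "info"
}
EOF

# AvalancheGo rejects a malformed file, so validate the JSON up front.
python3 -m json.tool "$CFG_DIR/config.json" > /dev/null && echo "config OK"
```

Restart the node after editing the file; chain configs are read at startup.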
## API Configuration ### Ethereum APIs | Option | Type | Description | Default | |--------|------|-------------|---------| | `eth-apis` | array of strings | List of Ethereum services that should be enabled | `["eth", "eth-filter", "net", "web3", "internal-eth", "internal-blockchain", "internal-transaction"]` | ### Subnet-EVM Specific APIs | Option | Type | Description | Default | |--------|------|-------------|---------| | `validators-api-enabled` | bool | Enable the validators API | `true` | | `admin-api-enabled` | bool | Enable the admin API for administrative operations | `false` | | `admin-api-dir` | string | Directory for admin API operations | - | | `warp-api-enabled` | bool | Enable the Warp API for cross-chain messaging | `false` | ### API Limits and Security | Option | Type | Description | Default | |--------|------|-------------|---------| | `rpc-gas-cap` | uint64 | Maximum gas limit for RPC calls | `50,000,000` | | `rpc-tx-fee-cap` | float64 | Maximum transaction fee cap in AVAX | `100` | | `api-max-duration` | duration | Maximum duration for API calls (0 = no limit) | `0` | | `api-max-blocks-per-request` | int64 | Maximum number of blocks per getLogs request (0 = no limit) | `0` | | `http-body-limit` | uint64 | Maximum size of HTTP request bodies | - | | `batch-request-limit` | uint64 | Maximum number of requests that can be batched in an RPC call. For no limit, set either this or `batch-response-max-size` to 0 | `1000` | | `batch-response-max-size` | uint64 | Maximum size (in bytes) of response that can be returned from a batched RPC call. For no limit, set either this or `batch-request-limit` to 0. 
| `25 MB` | ### WebSocket Settings | Option | Type | Description | Default | |--------|------|-------------|---------| | `ws-cpu-refill-rate` | duration | Rate at which WebSocket CPU usage quota is refilled (0 = no limit) | `0` | | `ws-cpu-max-stored` | duration | Maximum stored WebSocket CPU usage quota (0 = no limit) | `0` | ## Cache Configuration ### Trie Caches | Option | Type | Description | Default | |--------|------|-------------|---------| | `trie-clean-cache` | int | Size of the trie clean cache in MB | `512` | | `trie-dirty-cache` | int | Size of the trie dirty cache in MB | `512` | | `trie-dirty-commit-target` | int | Memory limit to target in the dirty cache before performing a commit in MB | `20` | | `trie-prefetcher-parallelism` | int | Maximum concurrent disk reads the trie prefetcher should perform | `16` | ### Other Caches | Option | Type | Description | Default | |--------|------|-------------|---------| | `snapshot-cache` | int | Size of the snapshot disk layer clean cache in MB | `256` | | `accepted-cache-size` | int | Depth to keep in the accepted headers and logs cache (blocks) | `32` | | `state-sync-server-trie-cache` | int | Trie cache size for the state sync server in MB | `64` | ## Ethereum Settings ### Transaction Processing | Option | Type | Description | Default | |--------|------|-------------|---------| | `preimages-enabled` | bool | Enable preimage recording | `false` | | `allow-unfinalized-queries` | bool | Allow queries for unfinalized blocks | `false` | | `allow-unprotected-txs` | bool | Allow unprotected transactions (without EIP-155) | `false` | | `allow-unprotected-tx-hashes` | array | List of specific transaction hashes allowed to be unprotected | EIP-1820 registry tx | | `local-txs-enabled` | bool | Enable treatment of transactions from local accounts as local | `false` | ### Snapshots | Option | Type | Description | Default | |--------|------|-------------|---------| | `snapshot-wait` | bool | Wait for snapshot
generation on startup | `false` | | `snapshot-verification-enabled` | bool | Enable snapshot verification | `false` | ## Pruning and State Management ### Basic Pruning | Option | Type | Description | Default | |--------|------|-------------|---------| | `pruning-enabled` | bool | Enable state pruning to save disk space | `true` | | `commit-interval` | uint64 | Interval at which to persist EVM and atomic tries (blocks) | `4096` | | `accepted-queue-limit` | int | Maximum blocks to queue before blocking during acceptance | `64` | ### State Reconstruction | Option | Type | Description | Default | |--------|------|-------------|---------| | `allow-missing-tries` | bool | Suppress warnings about incomplete trie index | `false` | | `populate-missing-tries` | uint64 | Starting block for re-populating missing tries (null = disabled) | `null` | | `populate-missing-tries-parallelism` | int | Concurrent readers for re-populating missing tries | `1024` | ### Offline Pruning | Option | Type | Description | Default | |--------|------|-------------|---------| | `offline-pruning-enabled` | bool | Enable offline pruning | `false` | | `offline-pruning-bloom-filter-size` | uint64 | Bloom filter size for offline pruning in MB | `512` | | `offline-pruning-data-directory` | string | Directory for offline pruning data | - | ### Historical Data | Option | Type | Description | Default | |--------|------|-------------|---------| | `historical-proof-query-window` | uint64 | Number of blocks before last accepted for proof queries (archive mode only, ~24 hours) | `43200` | | `state-history` | uint64 | Number of most recent states that are accessible on disk (pruning mode only) | `32` | ## Transaction Pool Configuration | Option | Type | Description | Default | |--------|------|-------------|---------| | `tx-pool-price-limit` | uint64 | Minimum gas price for transaction acceptance | - | | `tx-pool-price-bump` | uint64 | Minimum price bump percentage for transaction replacement | - | |
`tx-pool-account-slots` | uint64 | Maximum number of executable transaction slots per account | - | | `tx-pool-global-slots` | uint64 | Maximum number of executable transaction slots for all accounts | - | | `tx-pool-account-queue` | uint64 | Maximum number of non-executable transaction slots per account | - | | `tx-pool-global-queue` | uint64 | Maximum number of non-executable transaction slots for all accounts | - | | `tx-pool-lifetime` | duration | Maximum time transactions can stay in the pool | - | ## Gossip Configuration ### Push Gossip Settings | Option | Type | Description | Default | |--------|------|-------------|---------| | `push-gossip-percent-stake` | float64 | Percentage of total stake to push gossip to (range: [0, 1]) | `0.9` | | `push-gossip-num-validators` | int | Number of validators to push gossip to | `100` | | `push-gossip-num-peers` | int | Number of non-validator peers to push gossip to | `0` | ### Regossip Settings | Option | Type | Description | Default | |--------|------|-------------|---------| | `push-regossip-num-validators` | int | Number of validators to regossip to | `10` | | `push-regossip-num-peers` | int | Number of non-validator peers to regossip to | `0` | | `priority-regossip-addresses` | array | Addresses to prioritize for regossip | - | ### Timing Configuration | Option | Type | Description | Default | |--------|------|-------------|---------| | `push-gossip-frequency` | duration | Frequency of push gossip | `100ms` | | `pull-gossip-frequency` | duration | Frequency of pull gossip | `1s` | | `regossip-frequency` | duration | Frequency of regossip | `30s` | ## Logging and Monitoring ### Logging | Option | Type | Description | Default | |--------|------|-------------|---------| | `log-level` | string | Logging level (trace, debug, info, warn, error, crit) | `"info"` | | `log-json-format` | bool | Use JSON format for logs | `false` | ### Profiling | Option | Type | Description | Default | 
|--------|------|-------------|---------| | `continuous-profiler-dir` | string | Directory for continuous profiler output (empty = disabled) | - | | `continuous-profiler-frequency` | duration | Frequency to run continuous profiler | `15m` | | `continuous-profiler-max-files` | int | Maximum number of profiler files to maintain | `5` | ### Metrics | Option | Type | Description | Default | |--------|------|-------------|---------| | `metrics-expensive-enabled` | bool | Enable expensive debug-level metrics; this includes Firewood metrics | `true` | ## Security and Access ### Keystore | Option | Type | Description | Default | |--------|------|-------------|---------| | `keystore-directory` | string | Directory for keystore files (absolute or relative path) | - | | `keystore-external-signer` | string | External signer configuration | - | | `keystore-insecure-unlock-allowed` | bool | Allow insecure account unlocking | `false` | ### Fee Configuration | Option | Type | Description | Default | |--------|------|-------------|---------| | `feeRecipient` | string | Address to send transaction fees to (leave empty if not supported) | - | ## Network and Sync ### Network | Option | Type | Description | Default | |--------|------|-------------|---------| | `max-outbound-active-requests` | int64 | Maximum number of outbound active requests for VM2VM network | `16` | ### State Sync | Option | Type | Description | Default | |--------|------|-------------|---------| | `state-sync-enabled` | bool | Enable state sync | `false` | | `state-sync-skip-resume` | bool | Force state sync to use highest available summary block | `false` | | `state-sync-ids` | string | Comma-separated list of state sync IDs | - | | `state-sync-commit-interval` | uint64 | Commit interval for state sync (blocks) | `16384` | | `state-sync-min-blocks` | uint64 | Minimum blocks ahead required for state sync | `300000` | | `state-sync-request-size` | uint16 | Number of key/values to request per state sync request | 
`1024` | ## Database Configuration > **WARNING**: `firewood` and `path` schemes are untested in production. Using `path` is strongly discouraged. To use `firewood`, you must also set the following config options: > > - `pruning-enabled: true` (enabled by default) > - `state-sync-enabled: false` > - `snapshot-cache: 0` Failing to set these options will result in errors on VM initialization. Additionally, not all APIs are available - see these portions of the config documentation for more details. | Option | Type | Description | Default | |--------|------|-------------|---------| | `database-type` | string | Type of database to use | `"pebbledb"` | | `database-path` | string | Path to database directory | - | | `database-read-only` | bool | Open database in read-only mode | `false` | | `database-config` | string | Inline database configuration | - | | `database-config-file` | string | Path to database configuration file | - | | `use-standalone-database` | bool | Use standalone database instead of shared one | - | | `inspect-database` | bool | Inspect database on startup | `false` | | `state-scheme` | string | EXPERIMENTAL: specifies the database scheme to store state data; can be one of `hash` or `firewood` | `hash` | ## Transaction Indexing | Option | Type | Description | Default | |--------|------|-------------|---------| | `transaction-history` | uint64 | Maximum number of blocks from head whose transaction indices are reserved (0 = no limit) | - | | `tx-lookup-limit` | uint64 | **Deprecated** - use `transaction-history` instead | - | | `skip-tx-indexing` | bool | Skip indexing transactions entirely | `false` | ## Warp Configuration | Option | Type | Description | Default | |--------|------|-------------|---------| | `warp-off-chain-messages` | array | Off-chain messages the node should be willing to sign | - | | `prune-warp-db-enabled` | bool | Clear warp database on startup | `false` | ## Miscellaneous | Option | Type | Description | Default | 
|--------|------|-------------|---------| | `airdrop` | string | Path to airdrop file | - | | `skip-upgrade-check` | bool | Skip checking that upgrades occur before last accepted block ⚠️ **Warning**: Only use when you understand the implications | `false` | | `min-delay-target` | integer | The minimum delay between blocks (in milliseconds) that this node will attempt to use when creating blocks | Parent block's target | ## Gossip Constants The following constants are defined for transaction gossip behavior and cannot be configured without a custom build of Subnet-EVM: | Constant | Type | Description | Value | |----------|------|-------------|-------| | Bloom Filter Min Target Elements | int | Minimum target elements for bloom filter | `8,192` | | Bloom Filter Target False Positive Rate | float | Target false positive rate | `1%` | | Bloom Filter Reset False Positive Rate | float | Reset false positive rate | `5%` | | Bloom Filter Churn Multiplier | int | Churn multiplier | `3` | | Push Gossip Discarded Elements | int | Number of discarded elements | `16,384` | | Tx Gossip Target Message Size | size | Target message size for transaction gossip | `20 KiB` | | Tx Gossip Throttling Period | duration | Throttling period | `10s` | | Tx Gossip Throttling Limit | int | Throttling limit | `2` | | Tx Gossip Poll Size | int | Poll size | `1` | ## Validation Notes - Cannot enable `populate-missing-tries` while pruning or offline pruning is enabled - Cannot run offline pruning while pruning is disabled - Commit interval must be non-zero when pruning is enabled - `push-gossip-percent-stake` must be in range `[0, 1]` - Some settings may require node restart to take effect # Subnet-EVM RPC (/docs/rpcs/subnet-evm) --- title: "Subnet-EVM RPC" description: "This page describes the RPC endpoints available for Subnet-EVM based blockchains." 
edit_url: https://github.com/ava-labs/avalanchego/edit/master/graft/subnet-evm/plugin/evm/service.md --- [Subnet-EVM](https://github.com/ava-labs/subnet-evm) APIs are identical to [Coreth](https://build.avax.network/docs/api-reference/c-chain/api) C-Chain APIs, except for the Avalanche-specific APIs that start with `avax`. Subnet-EVM also supports the standard Ethereum APIs. For more information about Coreth APIs see [GitHub](https://github.com/ava-labs/coreth). Subnet-EVM has some additional APIs that are not available in Coreth. ## `eth_feeConfig` Subnet-EVM provides an API for retrieving the fee config at a specific block. You can use this API to check your activated fee config. **Signature:** ```bash eth_feeConfig([blk BlkNrOrHash]) -> {feeConfig: json} ``` - `blk` is the block number or hash at which to retrieve the fee config. Defaults to the latest block if omitted. **Example Call:** ```bash curl -X POST --data '{ "jsonrpc": "2.0", "method": "eth_feeConfig", "params": [ "latest" ], "id": 1 }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/2ebCneCbwthjQ1rYT41nhd7M76Hc6YmosMAQrTFhBq8qeqh6tt/rpc ``` **Example Response:** ```json { "jsonrpc": "2.0", "id": 1, "result": { "feeConfig": { "gasLimit": 15000000, "targetBlockRate": 2, "minBaseFee": 33000000000, "targetGas": 15000000, "baseFeeChangeDenominator": 36, "minBlockGasCost": 0, "maxBlockGasCost": 1000000, "blockGasCostStep": 200000 }, "lastChangedAt": 0 } } ``` ## `eth_getChainConfig` `eth_getChainConfig` returns the Chain Config of the blockchain. This API is enabled by default with the `internal-blockchain` namespace. This API exists on the C-Chain as well, but in addition to the normal Chain Config returned by the C-Chain version, `eth_getChainConfig` on Subnet-EVM also returns the upgrade config, which specifies network upgrades activated after genesis.
**Signature:** ```bash eth_getChainConfig({}) -> {chainConfig: json} ``` **Example Call:** ```bash curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"eth_getChainConfig", "params" :[] }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/Nvqcm33CX2XABS62iZsAcVUkavfnzp1Sc5k413wn5Nrf7Qjt7/rpc ``` **Example Response:** ```json { "jsonrpc": "2.0", "id": 1, "result": { "chainId": 43214, "feeConfig": { "gasLimit": 8000000, "targetBlockRate": 2, "minBaseFee": 33000000000, "targetGas": 15000000, "baseFeeChangeDenominator": 36, "minBlockGasCost": 0, "maxBlockGasCost": 1000000, "blockGasCostStep": 200000 }, "allowFeeRecipients": true, "homesteadBlock": 0, "eip150Block": 0, "eip150Hash": "0x2086799aeebeae135c246c65021c82b4e15a2c451340993aacfd2751886514f0", "eip155Block": 0, "eip158Block": 0, "byzantiumBlock": 0, "constantinopleBlock": 0, "petersburgBlock": 0, "istanbulBlock": 0, "muirGlacierBlock": 0, "subnetEVMTimestamp": 0, "contractDeployerAllowListConfig": { "adminAddresses": ["0x8db97c7cece249c2b98bdc0226cc4c2a57bf52fc"], "blockTimestamp": 0 }, "contractNativeMinterConfig": { "adminAddresses": ["0x8db97c7cece249c2b98bdc0226cc4c2a57bf52fc"], "blockTimestamp": 0 }, "feeManagerConfig": { "adminAddresses": ["0x8db97c7cece249c2b98bdc0226cc4c2a57bf52fc"], "blockTimestamp": 0 }, "upgrades": { "precompileUpgrades": [ { "feeManagerConfig": { "adminAddresses": null, "blockTimestamp": 1661541259, "disable": true } }, { "feeManagerConfig": { "adminAddresses": null, "blockTimestamp": 1661541269 } } ] } } } ``` ## `eth_getActivePrecompilesAt` **DEPRECATED: use** [`eth_getActiveRulesAt`](#eth_getactiverulesat) **instead.** `eth_getActivePrecompilesAt` returns the activated precompiles at a specific timestamp. If no timestamp is provided, it uses the latest block timestamp. This API is enabled by default with the `internal-blockchain` namespace.
**Signature:** ```bash eth_getActivePrecompilesAt([timestamp uint]) -> {precompiles: []Precompile} ``` - `timestamp` specifies the timestamp to show the precompiles active at this time. If omitted it shows precompiles activated at the latest block timestamp. **Example Call:** ```bash curl -X POST --data '{ "jsonrpc": "2.0", "method": "eth_getActivePrecompilesAt", "params": [], "id": 1 }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/Nvqcm33CX2XABS62iZsAcVUkavfnzp1Sc5k413wn5Nrf7Qjt7/rpc ``` **Example Response:** ```json { "jsonrpc": "2.0", "id": 1, "result": { "contractDeployerAllowListConfig": { "adminAddresses": ["0x8db97c7cece249c2b98bdc0226cc4c2a57bf52fc"], "blockTimestamp": 0 }, "contractNativeMinterConfig": { "adminAddresses": ["0x8db97c7cece249c2b98bdc0226cc4c2a57bf52fc"], "blockTimestamp": 0 }, "feeManagerConfig": { "adminAddresses": ["0x8db97c7cece249c2b98bdc0226cc4c2a57bf52fc"], "blockTimestamp": 0 } } } ``` ## `eth_getActiveRulesAt` `eth_getActiveRulesAt` returns activated rules (precompiles, upgrades) at a specific timestamp. If no timestamp is provided it returns the latest block timestamp. This API is enabled by default with `internal-blockchain` namespace. **Signature:** ```bash eth_getActiveRulesAt([timestamp uint]) -> {rules: json} ``` - `timestamp` specifies the timestamp to show the rules active at this time. If omitted it shows rules activated at the latest block timestamp. 
**Example Call:** ```bash curl -X POST --data '{ "jsonrpc": "2.0", "method": "eth_getActiveRulesAt", "params": [], "id": 1 }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/Nvqcm33CX2XABS62iZsAcVUkavfnzp1Sc5k413wn5Nrf7Qjt7/rpc ``` **Example Response:** ```json { "jsonrpc": "2.0", "id": 1, "result": { "ethRules": { "IsHomestead": true, "IsEIP150": true, "IsEIP155": true, "IsEIP158": true, "IsByzantium": true, "IsConstantinople": true, "IsPetersburg": true, "IsIstanbul": true, "IsCancun": true }, "avalancheRules": { "IsSubnetEVM": true, "IsDurango": true, "IsEtna": true }, "precompiles": { "contractNativeMinterConfig": { "timestamp": 0 }, "rewardManagerConfig": { "timestamp": 1712918700 }, "warpConfig": { "timestamp": 1714158045 } } } } ``` ## `validators.getCurrentValidators` This API retrieves the list of current validators for the Subnet/L1. It provides detailed information about each validator, including their ID, status, weight, connection, and uptime. URL: `http:///ext/bc//validators` **Signature:** ```bash validators.getCurrentValidators({nodeIDs: []string}) -> {validators: []Validator} ``` - `nodeIDs` is an optional parameter that specifies the node IDs of the validators to retrieve. If omitted, all validators are returned. **Example Call:** ```bash curl -X POST --data '{ "jsonrpc": "2.0", "method": "validators.getCurrentValidators", "params": { "nodeIDs": [] }, "id": 1 }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/C49rHzk3vLr1w9Z8sY7scrZ69TU4WcD2pRS6ZyzaSn9xA2U9F/validators ``` **Example Response:** ```json { "jsonrpc": "2.0", "result": { "validators": [ { "validationID": "nESqWkcNXihfdZESS2idWbFETMzatmkoTCktjxG1qryaQXfS6", "nodeID": "NodeID-P7oB2McjBGgW2NXXWVYjV8JEDFoW9xDE5", "weight": 20, "startTimestamp": 1732025492, "isActive": true, "isL1Validator": false, "isConnected": true, "uptimeSeconds": 36, "uptimePercentage": 100 } ] }, "id": 1 } ``` **Response Fields:** - `validationID`: (string) Unique identifier for the validation. 
For L1 validators, this is the validation ID; for Subnet validators, it is the `AddSubnetValidator` transaction ID. - `nodeID`: (string) Node identifier for the validator. - `weight`: (integer) The weight of the validator, often representing stake. - `startTimestamp`: (integer) UNIX timestamp for when validation started. - `isActive`: (boolean) Indicates if the validator is active. This returns true if this is an L1 validator with enough continuous staking fee balance on the P-Chain. It always returns true for Subnet validators. - `isL1Validator`: (boolean) Indicates if the validator is an L1 validator or a Subnet validator. - `isConnected`: (boolean) Indicates if the validator node is currently connected to the callee node. - `uptimeSeconds`: (integer) The number of seconds the validator has been online. - `uptimePercentage`: (float) The percentage of time the validator has been online. # Health RPC (/docs/rpcs/other/health-rpc) --- title: "Health RPC" description: "This page is an overview of the Health RPC associated with AvalancheGo." edit_url: https://github.com/ava-labs/avalanchego/edit/master/api/health/service.md --- The Health API can be used for measuring node health. This API set is for a specific node; it is unavailable on the [public server](https://build.avax.network/docs/tooling/rpc-providers). ## Health Checks The node periodically runs all health checks, including health checks for each chain. The frequency at which health checks are run can be specified with the [\--health-check-frequency](https://build.avax.network/docs/nodes/configure/configs-flags) flag. ## Filterable Health Checks The health checks that are run by the node are filterable. You can specify which health checks you want to see by using `tags` filters. Returned results will only include health checks that match the specified tags, as well as global health checks such as `network` and `database`. When filtered, the returned results will not show the full node health, but only a subset of filtered health checks.
This means the node can still be unhealthy in unfiltered checks, even if the returned results show that the node is healthy. AvalancheGo supports using subnetIDs as tags. ## GET Request To get an HTTP status code response that indicates the node's health, make a `GET` request. If the node is healthy, it will return a `200` status code. If the node is unhealthy, it will return a `503` status code. In-depth information about the node's health is included in the response body. ### Filtering To filter GET health checks, add a `tag` query parameter to the request. The `tag` parameter is a string. For example, to filter health results by subnetID `29uVeLPJB1eQJkzRemU8g8wZDw5uJRqpab5U2mX9euieVwiEbL`, use the following query: ```sh curl 'http://localhost:9650/ext/health?tag=29uVeLPJB1eQJkzRemU8g8wZDw5uJRqpab5U2mX9euieVwiEbL' ``` In this example, the returned results will contain global health checks and health checks that are related to subnetID `29uVeLPJB1eQJkzRemU8g8wZDw5uJRqpab5U2mX9euieVwiEbL`. **Note**: This filtering can show healthy results even if the node is unhealthy in other Chains/Avalanche L1s. To filter results by multiple tags, use multiple `tag` query parameters. For example, to filter health results by subnetIDs `29uVeLPJB1eQJkzRemU8g8wZDw5uJRqpab5U2mX9euieVwiEbL` and `28nrH5T2BMvNrWecFcV3mfccjs6axM1TVyqe79MCv2Mhs8kxiY`, use the following query: ```sh curl 'http://localhost:9650/ext/health?tag=29uVeLPJB1eQJkzRemU8g8wZDw5uJRqpab5U2mX9euieVwiEbL&tag=28nrH5T2BMvNrWecFcV3mfccjs6axM1TVyqe79MCv2Mhs8kxiY' ``` The returned results will include health checks for both subnetIDs as well as global health checks. ### Endpoints The available endpoints for GET requests are: - `/ext/health` returns a holistic report of the status of the node. **Most operators should monitor this status.** - `/ext/health/health` is the same as `/ext/health`. - `/ext/health/readiness` returns healthy once the node has finished initializing.
- `/ext/health/liveness` returns healthy once the endpoint is available. ## JSON RPC Request ### Format This API uses the `json 2.0` RPC format. For more information on making JSON RPC calls, see [here](https://build.avax.network/docs/api-reference/guides/issuing-api-calls). ### Endpoint ### Methods #### `health.health` This method returns the last set of health check results. **Example Call**: ```sh curl -H 'Content-Type: application/json' --data '{ "jsonrpc":"2.0", "id" :1, "method" :"health.health", "params": { "tags": ["11111111111111111111111111111111LpoYY", "29uVeLPJB1eQJkzRemU8g8wZDw5uJRqpab5U2mX9euieVwiEbL"] } }' 'http://localhost:9650/ext/health' ``` **Example Response**: ```json { "jsonrpc": "2.0", "result": { "checks": { "C": { "message": { "engine": { "consensus": { "lastAcceptedHeight": 31273749, "lastAcceptedID": "2Y4gZGzQnu8UjnHod8j1BLewHFVEbzhULPNzqrSWEHkHNqDrYL", "longestProcessingBlock": "0s", "processingBlocks": 0 }, "vm": null }, "networking": { "percentConnected": 0.9999592612587486 } }, "timestamp": "2024-03-26T19:44:45.2931-04:00", "duration": 20375 }, "P": { "message": { "engine": { "consensus": { "lastAcceptedHeight": 142517, "lastAcceptedID": "2e1FEPCBEkG2Q7WgyZh1v4nt3DXj1HDbDthyhxdq2Ltg3shSYq", "longestProcessingBlock": "0s", "processingBlocks": 0 }, "vm": null }, "networking": { "percentConnected": 0.9999592612587486 } }, "timestamp": "2024-03-26T19:44:45.293115-04:00", "duration": 8750 }, "X": { "message": { "engine": { "consensus": { "lastAcceptedHeight": 24464, "lastAcceptedID": "XuFCsGaSw9cn7Vuz5e2fip4KvP46Xu53S8uDRxaC2QJmyYc3w", "longestProcessingBlock": "0s", "processingBlocks": 0 }, "vm": null }, "networking": { "percentConnected": 0.9999592612587486 } }, "timestamp": "2024-03-26T19:44:45.29312-04:00", "duration": 23291 }, "bootstrapped": { "message": [], "timestamp": "2024-03-26T19:44:45.293078-04:00", "duration": 3375 }, "database": { "timestamp": "2024-03-26T19:44:45.293102-04:00", "duration": 1959 }, "diskspace": { "message": 
{ "availableDiskBytes": 227332591616 }, "timestamp": "2024-03-26T19:44:45.293106-04:00", "duration": 3042 }, "network": { "message": { "connectedPeers": 284, "sendFailRate": 0, "timeSinceLastMsgReceived": "293.098ms", "timeSinceLastMsgSent": "293.098ms" }, "timestamp": "2024-03-26T19:44:45.2931-04:00", "duration": 2333 }, "router": { "message": { "longestRunningRequest": "66.90725ms", "outstandingRequests": 3 }, "timestamp": "2024-03-26T19:44:45.293097-04:00", "duration": 3542 } }, "healthy": true }, "id": 1 } ``` In this example response, every check has passed, so the node is healthy. **Response Explanation**: - `checks` is a list of health check responses. - A check response may include a `message` with additional context. - A check response may include an `error` describing why the check failed. - `timestamp` is the timestamp of the last health check. - `duration` is the execution duration of the last health check, in nanoseconds. - `contiguousFailures` is the number of times in a row this check failed. - `timeOfFirstFailure` is the time this check first failed. - `healthy` is true if all the health checks are passing. #### `health.readiness` This method returns the last evaluation of the startup health check results. **Example Call**: ```sh curl -H 'Content-Type: application/json' --data '{ "jsonrpc":"2.0", "id" :1, "method" :"health.readiness", "params": { "tags": ["11111111111111111111111111111111LpoYY", "29uVeLPJB1eQJkzRemU8g8wZDw5uJRqpab5U2mX9euieVwiEbL"] } }' 'http://localhost:9650/ext/health' ``` **Example Response**: ```json { "jsonrpc": "2.0", "result": { "checks": { "bootstrapped": { "message": [], "timestamp": "2024-03-26T20:02:45.299114-04:00", "duration": 2834 } }, "healthy": true }, "id": 1 } ``` In this example response, every check has passed, so the node has finished the startup process. **Response Explanation**: - `checks` is a list of health check responses. - A check response may include a `message` with additional context.
- A check response may include an `error` describing why the check failed. - `timestamp` is the timestamp of the last health check. - `duration` is the execution duration of the last health check, in nanoseconds. - `contiguousFailures` is the number of times in a row this check failed. - `timeOfFirstFailure` is the time this check first failed. - `healthy` is true if all the health checks are passing. #### `health.liveness` This method always returns healthy. **Example Call**: ```sh curl -H 'Content-Type: application/json' --data '{ "jsonrpc":"2.0", "id" :1, "method" :"health.liveness" }' 'http://localhost:9650/ext/health' ``` **Example Response**: ```json { "jsonrpc": "2.0", "result": { "checks": {}, "healthy": true }, "id": 1 } ``` In this example response, the node was able to handle the request and mark the service as healthy. **Response Explanation**: - `checks` is an empty list. - `healthy` is true. # Index RPC (/docs/rpcs/other/index-rpc) --- title: "Index RPC" description: "This page is an overview of the Index RPC associated with AvalancheGo." edit_url: https://github.com/ava-labs/avalanchego/edit/master/indexer/service.md --- AvalancheGo can be configured to run with an indexer. That is, it saves (indexes) every container (a block, vertex or transaction) it accepts on the X-Chain, P-Chain and C-Chain. To run AvalancheGo with indexing enabled, set the command-line flag [\--index-enabled](https://build.avax.network/docs/nodes/configure/configs-flags#--index-enabled-boolean) to `true`. **AvalancheGo will only index containers that are accepted when running with `--index-enabled` set to true.** To ensure your node has a complete index, run a node with a fresh database and `--index-enabled` set to true. The node will accept every block, vertex and transaction in the network history during bootstrapping, ensuring your index is complete. It is OK to turn off your node if it is running with indexing enabled.
If it restarts with indexing still enabled, it will accept all containers that were accepted while it was offline. The indexer should never fail to index an accepted block, vertex or transaction. Indexed containers (that is, accepted blocks, vertices and transactions) are timestamped with the time at which the node accepted that container. Note that if the container was indexed during bootstrapping, other nodes may have accepted the container much earlier. Every container indexed during bootstrapping will be timestamped with the time at which the node bootstrapped, not when it was first accepted by the network. If `--index-enabled` is changed from `true` to `false`, AvalancheGo won't start, as doing so would cause a previously complete index to become incomplete, unless the user explicitly allows it with `--index-allow-incomplete`. This protects you from accidentally running with indexing disabled after previously running with it enabled, which would result in an incomplete index. This document shows how to query data from AvalancheGo's Index API. The Index API is only available when running with `--index-enabled`. ## Go Client There is a Go implementation of an Index API client. See documentation [here](https://pkg.go.dev/github.com/ava-labs/avalanchego/indexer#Client). This client can be used inside a Go program to connect to an AvalancheGo node that is running with the Index API enabled and make calls to the Index API. ## Format This API uses the `json 2.0` RPC format. For more information on making JSON RPC calls, see [here](https://build.avax.network/docs/api-reference/guides/issuing-api-calls). ## Endpoints Each chain has one or more indexes. To see if a C-Chain block is accepted, for example, send an API call to the C-Chain block index. Similarly, to see if an X-Chain vertex is accepted, send an API call to the X-Chain vertex index.
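As a quick offline sketch, a client can assemble the endpoint path and JSON-RPC body for an index query before POSTing it to the node. The `index_request` helper below is hypothetical, but the path shape and method naming follow the endpoints and methods documented in this section.

```python
import json

def index_request(chain: str, kind: str, method: str, params: dict, rid: int = 1):
    """Build the endpoint path and JSON-RPC 2.0 body for an Index API call."""
    path = f"/ext/index/{chain}/{kind}"  # e.g. /ext/index/C/block
    body = {"jsonrpc": "2.0", "method": f"index.{method}", "params": params, "id": rid}
    return path, json.dumps(body)

# Same request as the curl example for index.getContainerByID below.
path, body = index_request(
    "X", "tx", "getContainerByID",
    {"id": "6fXf5hncR8LXvwtM8iezFQBpK5cubV6y1dWgpJCcNyzGB1EzY", "encoding": "hex"},
)
```

The returned `path` would be appended to the node's base URL (e.g. `localhost:9650`) and `body` sent as the POST payload with a `Content-Type: application/json` header.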
### C-Chain Blocks ``` /ext/index/C/block ``` ### P-Chain Blocks ``` /ext/index/P/block ``` ### X-Chain Transactions ``` /ext/index/X/tx ``` ### X-Chain Blocks ``` /ext/index/X/block ``` To ensure historical data can be accessed, the `/ext/index/X/vtx` endpoint is still accessible, even though it is no longer populated with chain data since the Cortina activation. If you are using `V1.10.0` or higher, you need to migrate to using the `/ext/index/X/block` endpoint. ## Methods ### `index.getContainerByID` Get container by ID. **Signature**: ``` index.getContainerByID({ id: string, encoding: string }) -> { id: string, bytes: string, timestamp: string, encoding: string, index: string } ``` **Request**: - `id` is the container's ID - `encoding` is `"hex"` only. **Response**: - `id` is the container's ID - `bytes` is the byte representation of the container - `timestamp` is the time at which this node accepted the container - `encoding` is `"hex"` only. - `index` is how many containers were accepted in this index before this one **Example Call**: ```sh curl --location --request POST 'localhost:9650/ext/index/X/tx' \ --header 'Content-Type: application/json' \ --data-raw '{ "jsonrpc": "2.0", "method": "index.getContainerByID", "params": { "id": "6fXf5hncR8LXvwtM8iezFQBpK5cubV6y1dWgpJCcNyzGB1EzY", "encoding":"hex" }, "id": 1 }' ``` **Example Response**: ```json { "jsonrpc": "2.0", "id": 1, "result": { "id": "6fXf5hncR8LXvwtM8iezFQBpK5cubV6y1dWgpJCcNyzGB1EzY", "bytes":
"0x00000000000400003039d891ad56056d9c01f18f43f58b5c784ad07a4a49cf3d1f11623804b5cba2c6bf00000001dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db000000070429ccc5c5eb3b80000000000000000000000001000000013cb7d3842e8cee6a0ebd09f1fe884f6861e1b29c00000001dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db00000001dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db000000050429d069189e0000000000010000000000000000c85fc1980a77c5da78fe5486233fc09a769bb812bcb2cc548cf9495d046b3f1b00000001dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db00000007000003a352a38240000000000000000000000001000000013cb7d3842e8cee6a0ebd09f1fe884f6861e1b29c0000000100000009000000011cdb75d4e0b0aeaba2ebc1ef208373fedc1ebbb498f8385ad6fb537211d1523a70d903b884da77d963d56f163191295589329b5710113234934d0fd59c01676b00b63d2108", "timestamp": "2021-04-02T15:34:00.262979-07:00", "encoding": "hex", "index": "0" } } ``` ### `index.getContainerByIndex` Get container by index. The first container accepted is at index 0, the second is at index 1, etc. **Signature**: ``` index.getContainerByIndex({ index: uint64, encoding: string }) -> { id: string, bytes: string, timestamp: string, encoding: string, index: string } ``` **Request**: - `index` is how many containers were accepted in this index before this one - `encoding` is `"hex"` only. **Response**: - `id` is the container's ID - `bytes` is the byte representation of the container - `timestamp` is the time at which this node accepted the container - `index` is how many containers were accepted in this index before this one - `encoding` is `"hex"` only. 
**Example Call**: ```sh curl --location --request POST 'localhost:9650/ext/index/X/tx' \ --header 'Content-Type: application/json' \ --data-raw '{ "jsonrpc": "2.0", "method": "index.getContainerByIndex", "params": { "index":0, "encoding": "hex" }, "id": 1 }' ``` **Example Response**: ```json { "jsonrpc": "2.0", "id": 1, "result": { "id": "6fXf5hncR8LXvwtM8iezFQBpK5cubV6y1dWgpJCcNyzGB1EzY", "bytes": "0x00000000000400003039d891ad56056d9c01f18f43f58b5c784ad07a4a49cf3d1f11623804b5cba2c6bf00000001dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db000000070429ccc5c5eb3b80000000000000000000000001000000013cb7d3842e8cee6a0ebd09f1fe884f6861e1b29c00000001dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db00000001dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db000000050429d069189e0000000000010000000000000000c85fc1980a77c5da78fe5486233fc09a769bb812bcb2cc548cf9495d046b3f1b00000001dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db00000007000003a352a38240000000000000000000000001000000013cb7d3842e8cee6a0ebd09f1fe884f6861e1b29c0000000100000009000000011cdb75d4e0b0aeaba2ebc1ef208373fedc1ebbb498f8385ad6fb537211d1523a70d903b884da77d963d56f163191295589329b5710113234934d0fd59c01676b00b63d2108", "timestamp": "2021-04-02T15:34:00.262979-07:00", "encoding": "hex", "index": "0" } } ``` ### `index.getContainerRange` Returns the transactions at index \[`startIndex`\], \[`startIndex+1`\], ... , \[`startIndex+n-1`\] - If \[`n`\] == 0, returns an empty response (for example: null). - If \[`startIndex`\] > the last accepted index, returns an error (unless the above apply.) - If \[`n`\] > \[`MaxFetchedByRange`\], returns an error. - If we run out of transactions, returns the ones fetched before running out. - `numToFetch` must be in `[0,1024]`. 
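The rules above imply a simple pagination loop: request full batches and stop once a short or empty batch comes back. A Python sketch with the fetch function injected, so the loop logic can be shown without a live node (all names here are illustrative, not part of the API):

```python
from typing import Callable, Iterator, List

MAX_FETCHED_BY_RANGE = 1024  # numToFetch must be in [0, 1024]

def iter_containers(fetch: Callable[[int, int], List[dict]],
                    batch_size: int = MAX_FETCHED_BY_RANGE) -> Iterator[dict]:
    """Yield every accepted container by paging through index.getContainerRange.

    `fetch(start_index, num_to_fetch)` should return the list an
    index.getContainerRange call would return for those parameters.
    """
    if not 0 <= batch_size <= MAX_FETCHED_BY_RANGE:
        raise ValueError("numToFetch must be in [0, 1024]")
    start = 0
    while True:
        batch = fetch(start, batch_size)
        yield from batch
        if len(batch) < batch_size:  # short batch: the index is exhausted
            return
        start += len(batch)
```

With a real node, `fetch` would POST an `index.getContainerRange` request to the chain's index endpoint using any HTTP client.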
**Signature**: ``` index.getContainerRange({ startIndex: uint64, numToFetch: uint64, encoding: string }) -> []{ id: string, bytes: string, timestamp: string, encoding: string, index: string } ``` **Request**: - `startIndex` is the beginning index - `numToFetch` is the number of containers to fetch - `encoding` is `"hex"` only. **Response**: - `id` is the container's ID - `bytes` is the byte representation of the container - `timestamp` is the time at which this node accepted the container - `encoding` is `"hex"` only. - `index` is how many containers were accepted in this index before this one **Example Call**: ```sh curl --location --request POST 'localhost:9650/ext/index/X/tx' \ --header 'Content-Type: application/json' \ --data-raw '{ "jsonrpc": "2.0", "method": "index.getContainerRange", "params": { "startIndex":0, "numToFetch":100, "encoding": "hex" }, "id": 1 }' ``` **Example Response**: ```json { "jsonrpc": "2.0", "id": 1, "result": [ { "id": "6fXf5hncR8LXvwtM8iezFQBpK5cubV6y1dWgpJCcNyzGB1EzY", "bytes": "0x00000000000400003039d891ad56056d9c01f18f43f58b5c784ad07a4a49cf3d1f11623804b5cba2c6bf00000001dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db000000070429ccc5c5eb3b80000000000000000000000001000000013cb7d3842e8cee6a0ebd09f1fe884f6861e1b29c00000001dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db00000001dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db000000050429d069189e0000000000010000000000000000c85fc1980a77c5da78fe5486233fc09a769bb812bcb2cc548cf9495d046b3f1b00000001dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db00000007000003a352a38240000000000000000000000001000000013cb7d3842e8cee6a0ebd09f1fe884f6861e1b29c0000000100000009000000011cdb75d4e0b0aeaba2ebc1ef208373fedc1ebbb498f8385ad6fb537211d1523a70d903b884da77d963d56f163191295589329b5710113234934d0fd59c01676b00b63d2108", "timestamp": "2021-04-02T15:34:00.262979-07:00", "encoding": "hex", "index": "0" } ] } ``` ### `index.getIndex` Get a container's 
index.

**Signature**:

```
index.getIndex({
  id: string,
  encoding: string
}) -> {
  index: string
}
```

**Request**:

- `id` is the ID of the container to fetch
- `encoding` is `"hex"` only.

**Response**:

- `index` is how many containers were accepted in this index before this one

**Example Call**:

```sh
curl --location --request POST 'localhost:9650/ext/index/X/tx' \
--header 'Content-Type: application/json' \
--data-raw '{
    "jsonrpc": "2.0",
    "method": "index.getIndex",
    "params": {
        "id":"6fXf5hncR8LXvwtM8iezFQBpK5cubV6y1dWgpJCcNyzGB1EzY",
        "encoding": "hex"
    },
    "id": 1
}'
```

**Example Response**:

```json
{
  "jsonrpc": "2.0",
  "result": {
    "index": "0"
  },
  "id": 1
}
```

### `index.getLastAccepted`

Get the most recently accepted container.

**Signature**:

```
index.getLastAccepted({
  encoding: string
}) -> {
  id: string,
  bytes: string,
  timestamp: string,
  encoding: string,
  index: string
}
```

**Request**:

- `encoding` is `"hex"` only.

**Response**:

- `id` is the container's ID
- `bytes` is the byte representation of the container
- `timestamp` is the time at which this node accepted the container
- `encoding` is `"hex"` only.
- `index` is how many containers were accepted in this index before this one
**Example Call**: ```sh curl --location --request POST 'localhost:9650/ext/index/X/tx' \ --header 'Content-Type: application/json' \ --data-raw '{ "jsonrpc": "2.0", "method": "index.getLastAccepted", "params": { "encoding": "hex" }, "id": 1 }' ``` **Example Response**: ```json { "jsonrpc": "2.0", "id": 1, "result": { "id": "6fXf5hncR8LXvwtM8iezFQBpK5cubV6y1dWgpJCcNyzGB1EzY", "bytes": "0x00000000000400003039d891ad56056d9c01f18f43f58b5c784ad07a4a49cf3d1f11623804b5cba2c6bf00000001dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db000000070429ccc5c5eb3b80000000000000000000000001000000013cb7d3842e8cee6a0ebd09f1fe884f6861e1b29c00000001dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db00000001dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db000000050429d069189e0000000000010000000000000000c85fc1980a77c5da78fe5486233fc09a769bb812bcb2cc548cf9495d046b3f1b00000001dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db00000007000003a352a38240000000000000000000000001000000013cb7d3842e8cee6a0ebd09f1fe884f6861e1b29c0000000100000009000000011cdb75d4e0b0aeaba2ebc1ef208373fedc1ebbb498f8385ad6fb537211d1523a70d903b884da77d963d56f163191295589329b5710113234934d0fd59c01676b00b63d2108", "timestamp": "2021-04-02T15:34:00.262979-07:00", "encoding": "hex", "index": "0" } } ``` ### `index.isAccepted` Returns true if the container is in this index. **Signature**: ``` index.isAccepted({ id: string, encoding: string }) -> { isAccepted: bool } ``` **Request**: - `id` is the ID of the container to fetch - `encoding` is `"hex"` only. 
**Response**:

- `isAccepted` indicates whether the container has been accepted

**Example Call**:

```sh
curl --location --request POST 'localhost:9650/ext/index/X/tx' \
--header 'Content-Type: application/json' \
--data-raw '{
    "jsonrpc": "2.0",
    "method": "index.isAccepted",
    "params": {
        "id":"6fXf5hncR8LXvwtM8iezFQBpK5cubV6y1dWgpJCcNyzGB1EzY",
        "encoding": "hex"
    },
    "id": 1
}'
```

**Example Response**:

```json
{
  "jsonrpc": "2.0",
  "result": {
    "isAccepted": true
  },
  "id": 1
}
```

## Example: Iterating Through X-Chain Transactions

Here is an example of how to iterate through all transactions on the X-Chain. You can use the Index API to get the ID of every transaction that has been accepted on the X-Chain, and use the X-Chain API method `avm.getTx` to get a human-readable representation of the transaction.

To get an X-Chain transaction by its index (the order in which it was accepted), use Index API method [`index.getContainerByIndex`](#indexgetcontainerbyindex). For example, to get the second transaction (note that `"index":1`) accepted on the X-Chain, do:

```sh
curl --location --request POST 'https://indexer-demo.avax.network/ext/index/X/tx' \
--header 'Content-Type: application/json' \
--data-raw '{
    "jsonrpc": "2.0",
    "method": "index.getContainerByIndex",
    "params": {
        "encoding":"hex",
        "index":1
    },
    "id": 1
}'
```

This returns the second transaction accepted in the X-Chain's history, including its ID. To get the third transaction on the X-Chain, use `"index":2`, and so on.
The above API call gives the response below: ```json { "jsonrpc": "2.0", "result": { "id": "ZGYTSU8w3zUP6VFseGC798vA2Vnxnfj6fz1QPfA9N93bhjJvo", "bytes": "0x00000000000000000001ed5f38341e436e5d46e2bb00b45d62ae97d1b050c64bc634ae10626739e35c4b0000000221e67317cbc4be2aeb00677ad6462778a8f52274b9d605df2591b23027a87dff000000070000000129f6afc0000000000000000000000001000000017416792e228a765c65e2d76d28ab5a16d18c342f21e67317cbc4be2aeb00677ad6462778a8f52274b9d605df2591b23027a87dff0000000700000222afa575c00000000000000000000000010000000187d6a6dd3cd7740c8b13a410bea39b01fa83bb3e000000016f375c785edb28d52edb59b54035c96c198e9d80f5f5f5eee070592fe9465b8d0000000021e67317cbc4be2aeb00677ad6462778a8f52274b9d605df2591b23027a87dff0000000500000223d9ab67c0000000010000000000000000000000010000000900000001beb83d3d29f1247efb4a3a1141ab5c966f46f946f9c943b9bc19f858bd416d10060c23d5d9c7db3a0da23446b97cd9cf9f8e61df98e1b1692d764c84a686f5f801a8da6e40", "timestamp": "2021-11-04T00:42:55.01643414Z", "encoding": "hex", "index": "1" }, "id": 1 } ``` The ID of this transaction is `ZGYTSU8w3zUP6VFseGC798vA2Vnxnfj6fz1QPfA9N93bhjJvo`. 
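The `id` field of the index result becomes the `txID` parameter of `avm.getTx`. A Python sketch of that chaining, with the two JSON-RPC calls injected as plain functions so the flow can be followed without a node (the function names are illustrative, not part of any API):

```python
from typing import Callable

def decoded_tx_at(index: int,
                  get_container: Callable[[dict], dict],
                  get_tx: Callable[[dict], dict]) -> dict:
    """Chain index.getContainerByIndex into avm.getTx.

    `get_container` and `get_tx` perform the actual JSON-RPC calls
    (against /ext/index/X/tx and /ext/bc/X respectively); they are
    injected here so the chaining can be exercised offline.
    """
    # Step 1: look the container up by its position in the accepted order.
    container = get_container({"index": index, "encoding": "hex"})
    # Step 2: decode it by ID via the X-Chain API.
    return get_tx({"txID": container["id"], "encoding": "json"})
```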
To get the transaction by its ID, use API method `avm.getTx`: ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"avm.getTx", "params" :{ "txID":"ZGYTSU8w3zUP6VFseGC798vA2Vnxnfj6fz1QPfA9N93bhjJvo", "encoding": "json" } }' -H 'content-type:application/json;' https://api.avax.network/ext/bc/X ``` **Response**: ```json { "jsonrpc": "2.0", "result": { "tx": { "unsignedTx": { "networkID": 1, "blockchainID": "2oYMBNV4eNHyqk2fjjV5nVQLDbtmNJzq5s3qs3Lo6ftnC6FByM", "outputs": [ { "assetID": "FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z", "fxID": "spdxUxVJQbX85MGxMHbKw1sHxMnSqJ3QBzDyDYEP3h6TLuxqQ", "output": { "addresses": ["X-avax1wst8jt3z3fm9ce0z6akj3266zmgccdp03hjlaj"], "amount": 4999000000, "locktime": 0, "threshold": 1 } }, { "assetID": "FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z", "fxID": "spdxUxVJQbX85MGxMHbKw1sHxMnSqJ3QBzDyDYEP3h6TLuxqQ", "output": { "addresses": ["X-avax1slt2dhfu6a6qezcn5sgtagumq8ag8we75f84sw"], "amount": 2347999000000, "locktime": 0, "threshold": 1 } } ], "inputs": [ { "txID": "qysTYUMCWdsR3MctzyfXiSvoSf6evbeFGRLLzA4j2BjNXTknh", "outputIndex": 0, "assetID": "FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z", "fxID": "spdxUxVJQbX85MGxMHbKw1sHxMnSqJ3QBzDyDYEP3h6TLuxqQ", "input": { "amount": 2352999000000, "signatureIndices": [0] } } ], "memo": "0x" }, "credentials": [ { "fxID": "spdxUxVJQbX85MGxMHbKw1sHxMnSqJ3QBzDyDYEP3h6TLuxqQ", "credential": { "signatures": [ "0xbeb83d3d29f1247efb4a3a1141ab5c966f46f946f9c943b9bc19f858bd416d10060c23d5d9c7db3a0da23446b97cd9cf9f8e61df98e1b1692d764c84a686f5f801" ] } } ] }, "encoding": "json" }, "id": 1 } ``` # Admin RPC (/docs/rpcs/other) --- title: "Admin RPC" description: "This page is an overview of the Admin RPC associated with AvalancheGo." edit_url: https://github.com/ava-labs/avalanchego/edit/master/api/admin/service.md --- The Admin API can be used for measuring node health and debugging. The Admin API is disabled by default for security reasons. 
To run a node with the Admin API enabled, use the [`--api-admin-enabled=true` config flag](https://build.avax.network/docs/nodes/configure/configs-flags#--api-admin-enabled-boolean).

This API set is for a specific node; it is unavailable on the [public server](https://build.avax.network/docs/tooling/rpc-providers).

## Format

This API uses the `json 2.0` RPC format. For details, see [here](https://build.avax.network/docs/api-reference/guides/issuing-api-calls).

## Endpoint

```
/ext/admin
```

## Methods

### `admin.alias`

Assign an API endpoint an alias, a different endpoint for the API. The original endpoint will still work. This change only affects this node; other nodes will not know about this alias.

**Signature**:

```
admin.alias({endpoint:string, alias:string}) -> {}
```

- `endpoint` is the original endpoint of the API. `endpoint` should only include the part of the endpoint after `/ext/`.
- The API being aliased can now be called at `ext/alias`.
- `alias` can be at most 512 characters.

**Example Call**:

```sh
curl -X POST --data '{
    "jsonrpc":"2.0",
    "id" :1,
    "method" :"admin.alias",
    "params": {
        "alias":"myAlias",
        "endpoint":"bc/X"
    }
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/admin
```

**Example Response**:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {}
}
```

Now, calls to the X-Chain can be made to either `/ext/bc/X` or, equivalently, to `/ext/myAlias`.

### `admin.aliasChain`

Give a blockchain an alias, a different name that can be used any place the blockchain's ID is used. Aliasing a chain can also be done via the [Node API](https://build.avax.network/docs/nodes/configure/configs-flags#--chain-aliases-file-string).

Note that the alias is set for each chain on each node individually. In a multi-node Avalanche L1, the same alias should be configured on each node to use the alias across the Avalanche L1 successfully. Setting an alias for a chain on one node does not register that alias with other nodes automatically.
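Because chain aliases are per-node, a multi-node Avalanche L1 needs the same `admin.aliasChain` call issued against every node. A Python sketch of that loop, with the JSON-RPC poster injected so it can be exercised without live nodes (the names are illustrative; each node must run with the Admin API enabled):

```python
from typing import Callable, Iterable

def alias_on_all(nodes: Iterable[str], chain_id: str, alias: str,
                 post: Callable[[str, str, dict], dict]) -> None:
    """Set the same chain alias on every node of an Avalanche L1.

    `post(url, method, params)` performs one JSON-RPC 2.0 call against the
    node's /ext/admin endpoint (requires --api-admin-enabled=true).
    """
    for node in nodes:
        post(node + "/ext/admin", "admin.aliasChain",
             {"chain": chain_id, "alias": alias})
```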
**Signature**: ``` admin.aliasChain( { chain:string, alias:string } ) -> {} ``` - `chain` is the blockchain's ID. - `alias` can now be used in place of the blockchain's ID (in API endpoints, for example.) **Example Call**: ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"admin.aliasChain", "params": { "chain":"sV6o671RtkGBcno1FiaDbVcFv2sG5aVXMZYzKdP4VQAWmJQnM", "alias":"myBlockchainAlias" } }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/admin ``` **Example Response**: ```json { "jsonrpc": "2.0", "id": 1, "result": {} } ``` Now, instead of interacting with the blockchain whose ID is `sV6o671RtkGBcno1FiaDbVcFv2sG5aVXMZYzKdP4VQAWmJQnM` by making API calls to `/ext/bc/sV6o671RtkGBcno1FiaDbVcFv2sG5aVXMZYzKdP4VQAWmJQnM`, one can also make calls to `ext/bc/myBlockchainAlias`. ### `admin.getChainAliases` Returns the aliases of the chain **Signature**: ``` admin.getChainAliases( { chain:string } ) -> {aliases:string[]} ``` - `chain` is the blockchain's ID. **Example Call**: ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"admin.getChainAliases", "params": { "chain":"sV6o671RtkGBcno1FiaDbVcFv2sG5aVXMZYzKdP4VQAWmJQnM" } }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/admin ``` **Example Response**: ```json { "jsonrpc": "2.0", "result": { "aliases": [ "X", "avm", "2eNy1mUFdmaxXNj1eQHUe7Np4gju9sJsEtWQ4MX3ToiNKuADed" ] }, "id": 1 } ``` ### `admin.getLoggerLevel` Returns log and display levels of loggers. **Signature**: ``` admin.getLoggerLevel( { loggerName:string // optional } ) -> { loggerLevels: { loggerName: { logLevel: string, displayLevel: string } } } ``` - `loggerName` is the name of the logger to be returned. This is an optional argument. If not specified, it returns all possible loggers. 
**Example Call**:

```sh
curl -X POST --data '{
    "jsonrpc":"2.0",
    "id" :1,
    "method" :"admin.getLoggerLevel",
    "params": {
        "loggerName": "C"
    }
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/admin
```

**Example Response**:

```json
{
  "jsonrpc": "2.0",
  "result": {
    "loggerLevels": {
      "C": {
        "logLevel": "DEBUG",
        "displayLevel": "INFO"
      }
    }
  },
  "id": 1
}
```

### `admin.loadVMs`

Dynamically loads any virtual machines installed on the node as plugins. See [here](https://build.avax.network/docs/virtual-machines#installing-a-vm) for more information on how to install a virtual machine on a node.

**Signature**:

```
admin.loadVMs() -> {
  newVMs: map[string][]string,
  failedVMs: map[string]string
}
```

- `failedVMs` is only included in the response if at least one virtual machine fails to be loaded.

**Example Call**:

```sh
curl -X POST --data '{
    "jsonrpc":"2.0",
    "id" :1,
    "method" :"admin.loadVMs",
    "params" :{}
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/admin
```

**Example Response**:

```json
{
  "jsonrpc": "2.0",
  "result": {
    "newVMs": {
      "tGas3T58KzdjLHhBDMnH2TvrddhqTji5iZAMZ3RXs2NLpSnhH": ["foovm"]
    },
    "failedVMs": {
      "rXJsCSEYXg2TehWxCEEGj6JU2PWKTkd6cBdNLjoe2SpsKD9cy": "error message"
    }
  },
  "id": 1
}
```

### `admin.lockProfile`

Writes a profile of mutex statistics to `lock.profile`.

**Signature**:

```
admin.lockProfile() -> {}
```

**Example Call**:

```sh
curl -X POST --data '{
    "jsonrpc":"2.0",
    "id" :1,
    "method" :"admin.lockProfile",
    "params" :{}
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/admin
```

**Example Response**:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {}
}
```

### `admin.memoryProfile`

Writes a memory profile of the node to `mem.profile`.
**Signature**: ``` admin.memoryProfile() -> {} ``` **Example Call**: ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"admin.memoryProfile", "params" :{} }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/admin ``` **Example Response**: ```json { "jsonrpc": "2.0", "id": 1, "result": {} } ``` ### `admin.setLoggerLevel` Sets log and display levels of loggers. **Signature**: ``` admin.setLoggerLevel( { loggerName: string, // optional logLevel: string, // optional displayLevel: string, // optional } ) -> {} ``` - `loggerName` is the logger's name to be changed. This is an optional parameter. If not specified, it changes all possible loggers. - `logLevel` is the log level of written logs, can be omitted. - `displayLevel` is the log level of displayed logs, can be omitted. `logLevel` and `displayLevel` cannot be omitted at the same time. **Example Call**: ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"admin.setLoggerLevel", "params": { "loggerName": "C", "logLevel": "DEBUG", "displayLevel": "INFO" } }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/admin ``` **Example Response**: ```json { "jsonrpc": "2.0", "id": 1, "result": {} } ``` ### `admin.startCPUProfiler` Start profiling the CPU utilization of the node. To stop, call `admin.stopCPUProfiler`. On stop, writes the profile to `cpu.profile`. **Signature**: ``` admin.startCPUProfiler() -> {} ``` **Example Call**: ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"admin.startCPUProfiler", "params" :{} }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/admin ``` **Example Response**: ```json { "jsonrpc": "2.0", "id": 1, "result": {} } ``` ### `admin.stopCPUProfiler` Stop the CPU profile that was previously started. 
**Signature**: ``` admin.stopCPUProfiler() -> {} ``` **Example Call**: ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"admin.stopCPUProfiler" }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/admin ``` **Example Response**: ```json { "jsonrpc": "2.0", "id": 1, "result": {} } ``` # Info RPC (/docs/rpcs/other/info-rpc) --- title: "Info RPC" description: "This page is an overview of the Info RPC associated with AvalancheGo." edit_url: https://github.com/ava-labs/avalanchego/edit/master/api/info/service.md --- The Info API can be used to access basic information about an Avalanche node. ## Format This API uses the `json 2.0` RPC format. For more information on making JSON RPC calls, see [here](https://build.avax.network/docs/api-reference/guides/issuing-api-calls). ## Endpoint ``` /ext/info ``` ## Methods ### `info.acps` Returns peer preferences for Avalanche Community Proposals (ACPs) **Signature**: ``` info.acps() -> { acps: map[uint32]{ supportWeight: uint64 supporters: set[string] objectWeight: uint64 objectors: set[string] abstainWeight: uint64 } } ``` **Example Call**: ```sh curl -sX POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"info.acps", "params" :{} }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info ``` **Example Response**: ```json { "jsonrpc": "2.0", "result": { "acps": { "23": { "supportWeight": "0", "supporters": [], "objectWeight": "0", "objectors": [], "abstainWeight": "161147778098286584" }, "24": { "supportWeight": "0", "supporters": [], "objectWeight": "0", "objectors": [], "abstainWeight": "161147778098286584" }, "25": { "supportWeight": "0", "supporters": [], "objectWeight": "0", "objectors": [], "abstainWeight": "161147778098286584" }, "30": { "supportWeight": "0", "supporters": [], "objectWeight": "0", "objectors": [], "abstainWeight": "161147778098286584" }, "31": { "supportWeight": "0", "supporters": [], "objectWeight": "0", "objectors": [], "abstainWeight": "161147778098286584" }, "41": { 
"supportWeight": "0", "supporters": [], "objectWeight": "0", "objectors": [], "abstainWeight": "161147778098286584" }, "62": { "supportWeight": "0", "supporters": [], "objectWeight": "0", "objectors": [], "abstainWeight": "161147778098286584" } } }, "id": 1 } ``` ### `info.isBootstrapped` Check whether a given chain is done bootstrapping **Signature**: ``` info.isBootstrapped({chain: string}) -> {isBootstrapped: bool} ``` `chain` is the ID or alias of a chain. **Example Call**: ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"info.isBootstrapped", "params": { "chain":"X" } }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info ``` **Example Response**: ```json { "jsonrpc": "2.0", "result": { "isBootstrapped": true }, "id": 1 } ``` ### `info.getBlockchainID` Given a blockchain's alias, get its ID. (See [`admin.aliasChain`](https://build.avax.network/docs/api-reference/admin-api#adminaliaschain).) **Signature**: ``` info.getBlockchainID({alias:string}) -> {blockchainID:string} ``` **Example Call**: ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"info.getBlockchainID", "params": { "alias":"X" } }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info ``` **Example Response**: ```json { "jsonrpc": "2.0", "id": 1, "result": { "blockchainID": "sV6o671RtkGBcno1FiaDbVcFv2sG5aVXMZYzKdP4VQAWmJQnM" } } ``` ### `info.getNetworkID` Get the ID of the network this node is participating in. **Signature**: ``` info.getNetworkID() -> { networkID: int } ``` **Example Call**: ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"info.getNetworkID" }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info ``` **Example Response**: ```json { "jsonrpc": "2.0", "id": 1, "result": { "networkID": "2" } } ``` Network ID of 1 = Mainnet Network ID of 5 = Fuji (testnet) ### `info.getNetworkName` Get the name of the network this node is participating in. 
**Signature**:

```
info.getNetworkName() -> { networkName: string }
```

**Example Call**:

```sh
curl -X POST --data '{
    "jsonrpc":"2.0",
    "id" :1,
    "method" :"info.getNetworkName"
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info
```

**Example Response**:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "networkName": "local"
  }
}
```

### `info.getNodeID`

Get the ID, the BLS key, and the proof of possession (BLS signature) of this node.

This endpoint set is for a specific node; it is unavailable on the [public server](https://build.avax.network/docs/tooling/rpc-providers).

**Signature**:

```
info.getNodeID() -> {
  nodeID: string,
  nodePOP: {
    publicKey: string,
    proofOfPossession: string
  }
}
```

- `nodeID` is the unique identifier of the node that you set to act as a validator on the Primary Network.
- `nodePOP` is this node's BLS key and proof of possession. Nodes must register a BLS key to act as a validator on the Primary Network. Your node's POP is logged on startup and is accessible over this endpoint.
  - `publicKey` is the 48-byte hex representation of the BLS key.
  - `proofOfPossession` is the 96-byte hex representation of the BLS signature.

**Example Call**:

```sh
curl -X POST --data '{
    "jsonrpc":"2.0",
    "id" :1,
    "method" :"info.getNodeID"
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info
```

**Example Response**:

```json
{
  "jsonrpc": "2.0",
  "result": {
    "nodeID": "NodeID-5mb46qkSBj81k9g9e4VFjGGSbaaSLFRzD",
    "nodePOP": {
      "publicKey": "0x8f95423f7142d00a48e1014a3de8d28907d420dc33b3052a6dee03a3f2941a393c2351e354704ca66a3fc29870282e15",
      "proofOfPossession": "0x86a3ab4c45cfe31cae34c1d06f212434ac71b1be6cfe046c80c162e057614a94a5bc9f1ded1a7029deb0ba4ca7c9b71411e293438691be79c2dbf19d1ca7c3eadb9c756246fc5de5b7b89511c7d7302ae051d9e03d7991138299b5ed6a570a98"
    }
  },
  "id": 1
}
```

### `info.getNodeIP`

Get the IP of this node.
This endpoint set is for a specific node; it is unavailable on the [public server](https://build.avax.network/docs/tooling/rpc-providers).

**Signature**:

```
info.getNodeIP() -> {ip: string}
```

**Example Call**:

```sh
curl -X POST --data '{
    "jsonrpc":"2.0",
    "id" :1,
    "method" :"info.getNodeIP"
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info
```

**Example Response**:

```json
{
  "jsonrpc": "2.0",
  "result": {
    "ip": "192.168.1.1:9651"
  },
  "id": 1
}
```

### `info.getNodeVersion`

Get the version of this node.

**Signature**:

```
info.getNodeVersion() -> {
  version: string,
  databaseVersion: string,
  gitCommit: string,
  vmVersions: map[string]string,
  rpcProtocolVersion: string,
}
```

where:

- `version` is this node's version
- `databaseVersion` is the version of the database this node is using
- `gitCommit` is the Git commit that this node was built from
- `vmVersions` is a map where each key/value pair is the name of a VM and the version of that VM this node runs
- `rpcProtocolVersion` is the RPCChainVM protocol version

**Example Call**:

```sh
curl -X POST --data '{
    "jsonrpc":"2.0",
    "id" :1,
    "method" :"info.getNodeVersion"
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info
```

**Example Response**:

```json
{
  "jsonrpc": "2.0",
  "result": {
    "version": "avalanche/1.9.1",
    "databaseVersion": "v1.4.5",
    "rpcProtocolVersion": "18",
    "gitCommit": "79cd09ba728e1cecef40acd60702f0a2d41ea404",
    "vmVersions": {
      "avm": "v1.9.1",
      "evm": "v0.11.1",
      "platform": "v1.9.1"
    }
  },
  "id": 1
}
```

### `info.getTxFee`

Deprecated as of [v1.12.2](https://github.com/ava-labs/avalanchego/releases/tag/v1.12.2).

Get the fees of the network.
**Signature**:

```
info.getTxFee() -> {
  txFee: uint64,
  createAssetTxFee: uint64,
  createSubnetTxFee: uint64,
  transformSubnetTxFee: uint64,
  createBlockchainTxFee: uint64,
  addPrimaryNetworkValidatorFee: uint64,
  addPrimaryNetworkDelegatorFee: uint64,
  addSubnetValidatorFee: uint64,
  addSubnetDelegatorFee: uint64
}
```

- `txFee` is the default fee for issuing X-Chain transactions.
- `createAssetTxFee` is the fee for issuing a `CreateAssetTx` on the X-Chain.
- `createSubnetTxFee` is no longer used.
- `transformSubnetTxFee` is no longer used.
- `createBlockchainTxFee` is no longer used.
- `addPrimaryNetworkValidatorFee` is no longer used.
- `addPrimaryNetworkDelegatorFee` is no longer used.
- `addSubnetValidatorFee` is no longer used.
- `addSubnetDelegatorFee` is no longer used.

All fees are denominated in nAVAX.

**Example Call**:

```sh
curl -X POST --data '{
    "jsonrpc":"2.0",
    "id" :1,
    "method" :"info.getTxFee"
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info
```

**Example Response**:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "txFee": "1000000",
    "createAssetTxFee": "10000000",
    "createSubnetTxFee": "1000000000",
    "transformSubnetTxFee": "10000000000",
    "createBlockchainTxFee": "1000000000",
    "addPrimaryNetworkValidatorFee": "0",
    "addPrimaryNetworkDelegatorFee": "0",
    "addSubnetValidatorFee": "1000000",
    "addSubnetDelegatorFee": "1000000"
  }
}
```

### `info.getVMs`

Get the virtual machines installed on this node.

This endpoint set is for a specific node; it is unavailable on the [public server](https://build.avax.network/docs/tooling/rpc-providers).
**Signature**:

```
info.getVMs() -> {
  vms: map[string][]string
}
```

**Example Call**:

```sh
curl -X POST --data '{
    "jsonrpc":"2.0",
    "id" :1,
    "method" :"info.getVMs",
    "params" :{}
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info
```

**Example Response**:

```json
{
  "jsonrpc": "2.0",
  "result": {
    "vms": {
      "jvYyfQTxGMJLuGWa55kdP2p2zSUYsQ5Raupu4TW34ZAUBAbtq": ["avm"],
      "mgj786NP7uDwBCcq6YwThhaN8FLyybkCa4zBWTQbNgmK6k9A6": ["evm"],
      "qd2U4HDWUvMrVUeTcCHp6xH3Qpnn1XbU5MDdnBoiifFqvgXwT": ["nftfx"],
      "rWhpuQPF1kb72esV2momhMuTYGkEb1oL29pt2EBXWmSy4kxnT": ["platform"],
      "rXJsCSEYXg2TehWxCEEGj6JU2PWKTkd6cBdNLjoe2SpsKD9cy": ["propertyfx"],
      "spdxUxVJQbX85MGxMHbKw1sHxMnSqJ3QBzDyDYEP3h6TLuxqQ": ["secp256k1fx"]
    }
  },
  "id": 1
}
```

### `info.peers`

Get a description of peer connections.

**Signature**:

```
info.peers({
  nodeIDs: string[] // optional
}) -> {
  numPeers: int,
  peers: []{
    ip: string,
    publicIP: string,
    nodeID: string,
    version: string,
    lastSent: string,
    lastReceived: string,
    benched: string[],
    observedUptime: int
  }
}
```

- `nodeIDs` is an optional parameter that specifies which nodes' descriptions should be returned. If this parameter is left empty, descriptions for all active connections are returned. If the node is not connected to a specified NodeID, it is omitted from the response.
- `ip` is the remote IP of the peer.
- `publicIP` is the public IP of the peer.
- `nodeID` is the prefixed Node ID of the peer.
- `version` shows which version the peer runs on.
- `lastSent` is the timestamp of the last message sent to the peer.
- `lastReceived` is the timestamp of the last message received from the peer.
- `benched` shows chain IDs that the peer is currently benched on.
- `observedUptime` is this node's Primary Network uptime, as observed by the peer.
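The `result` object from `info.peers` is easy to post-process; for example, tallying connected peers by the version they report. A small Python sketch (the function name is illustrative):

```python
from collections import Counter

def version_histogram(peers_result: dict) -> Counter:
    """Count connected peers by the version string each one reports."""
    return Counter(peer["version"] for peer in peers_result["peers"])
```

Feeding it the `result` object from an `info.peers` response yields a quick view of how much of your connectivity runs each release.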
**Example Call**:

```sh
curl -X POST --data '{
    "jsonrpc":"2.0",
    "id" :1,
    "method" :"info.peers",
    "params": {
        "nodeIDs": []
    }
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info
```

**Example Response**:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "numPeers": 3,
    "peers": [
      {
        "ip": "206.189.137.87:9651",
        "publicIP": "206.189.137.87:9651",
        "nodeID": "NodeID-8PYXX47kqLDe2wD4oPbvRRchcnSzMA4J4",
        "version": "avalanche/1.9.4",
        "lastSent": "2020-06-01T15:23:02Z",
        "lastReceived": "2020-06-01T15:22:57Z",
        "observedUptime": "99",
        "trackedSubnets": [],
        "benched": []
      },
      {
        "ip": "158.255.67.151:9651",
        "publicIP": "158.255.67.151:9651",
        "nodeID": "NodeID-C14fr1n8EYNKyDfYixJ3rxSAVqTY3a8BP",
        "version": "avalanche/1.9.4",
        "lastSent": "2020-06-01T15:23:02Z",
        "lastReceived": "2020-06-01T15:22:34Z",
        "observedUptime": "75",
        "trackedSubnets": [
          "29uVeLPJB1eQJkzRemU8g8wZDw5uJRqpab5U2mX9euieVwiEbL"
        ],
        "benched": []
      },
      {
        "ip": "83.42.13.44:9651",
        "publicIP": "83.42.13.44:9651",
        "nodeID": "NodeID-LPbcSMGJ4yocxYxvS2kBJ6umWeeFbctYZ",
        "version": "avalanche/1.9.3",
        "lastSent": "2020-06-01T15:23:02Z",
        "lastReceived": "2020-06-01T15:22:55Z",
        "observedUptime": "95",
        "trackedSubnets": [],
        "benched": []
      }
    ]
  }
}
```

### `info.uptime`

Returns the network's observed uptime of this node. This is the only reliable source of data for your node's uptime. Other sources may be using data gathered with incomplete (limited) information.

**Signature**:

```
info.uptime() -> {
  rewardingStakePercentage: float64,
  weightedAveragePercentage: float64
}
```

- `rewardingStakePercentage` is the percent of stake which thinks this node is above the uptime requirement.
- `weightedAveragePercentage` is the stake-weighted average of all observed uptimes for this node.
**Example Call**: ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"info.uptime" }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info ``` **Example Response**: ```json { "jsonrpc": "2.0", "id": 1, "result": { "rewardingStakePercentage": "100.0000", "weightedAveragePercentage": "99.0000" } } ``` #### Example Avalanche L1 Call ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"info.uptime", "params" :{ "subnetID":"29uVeLPJB1eQJkzRemU8g8wZDw5uJRqpab5U2mX9euieVwiEbL" } }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info ``` #### Example Avalanche L1 Response ```json { "jsonrpc": "2.0", "id": 1, "result": { "rewardingStakePercentage": "74.0741", "weightedAveragePercentage": "72.4074" } } ``` ### `info.upgrades` Returns the upgrade history and configuration of the network. **Example Call**: ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"info.upgrades" }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/info ``` **Example Response (Mainnet)**: ```json { "jsonrpc": "2.0", "result": { "apricotPhase1Time": "2021-03-31T14:00:00Z", "apricotPhase2Time": "2021-05-10T11:00:00Z", "apricotPhase3Time": "2021-08-24T14:00:00Z", "apricotPhase4Time": "2021-09-22T21:00:00Z", "apricotPhase4MinPChainHeight": 793005, "apricotPhase5Time": "2021-12-02T18:00:00Z", "apricotPhasePre6Time": "2022-09-05T01:30:00Z", "apricotPhase6Time": "2022-09-06T20:00:00Z", "apricotPhasePost6Time": "2022-09-07T03:00:00Z", "banffTime": "2022-10-18T16:00:00Z", "cortinaTime": "2023-04-25T15:00:00Z", "cortinaXChainStopVertexID": "jrGWDh5Po9FMj54depyunNixpia5PN4aAYxfmNzU8n752Rjga", "durangoTime": "2024-03-06T16:00:00Z", "etnaTime": "2024-12-16T17:00:00Z", "fortunaTime": "2025-04-08T15:00:00Z", "graniteTime": "2025-11-19T16:00:00Z", "heliconTime": "9999-12-01T00:00:00Z" }, "id": 1 } ``` # Metrics RPC (/docs/rpcs/other/metrics-rpc) --- title: "Metrics RPC" description: "This page is an overview of the Metrics RPC associated with AvalancheGo." 
edit_url: https://github.com/ava-labs/avalanchego/edit/master/api/metrics/service.md --- The Metrics API allows clients to get statistics about a node's health and performance. This API set is for a specific node; it is unavailable on the [public server](https://build.avax.network/docs/tooling/rpc-providers). ## Endpoint ``` /ext/metrics ``` ## Usage To get the node metrics: ```sh curl -X POST 127.0.0.1:9650/ext/metrics ``` ## Format This API produces Prometheus-compatible metrics. See [here](https://prometheus.io/docs/instrumenting/exposition_formats) for information on Prometheus' formatting. [Here](https://build.avax.network/docs/nodes/maintain/monitoring) is a tutorial that shows how to set up Prometheus and Grafana to monitor an AvalancheGo node using the Metrics API. # ProposerVM RPC (/docs/rpcs/other/proposervm-rpc) --- title: "ProposerVM RPC" description: "This page is an overview of the ProposerVM RPC associated with AvalancheGo." edit_url: https://github.com/ava-labs/avalanchego/edit/master/vms/proposervm/service.md --- # ProposerVM API The ProposerVM API allows clients to fetch a chain's Snowman++ wrapper information. ## Endpoint ```text /ext/bc/{blockchainID}/proposervm ``` ## Format This API uses the `JSON-RPC 2.0` RPC format. ## Methods ### `proposervm.getProposedHeight` Returns this node's current proposer VM height. **Signature:** ``` proposervm.getProposedHeight() -> { height: int, } ``` **Example Call:** ```sh curl -X POST --data '{ "jsonrpc": "2.0", "method": "proposervm.getProposedHeight", "params": {}, "id": 1 }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P/proposervm ``` **Example Response:** ```json { "jsonrpc": "2.0", "result": { "height": "56" }, "id": 1 } ``` ### `proposervm.getCurrentEpoch` Returns the current epoch information.
**Signature:** ``` proposervm.getCurrentEpoch() -> { number: int, startTime: int, pChainHeight: int } ``` **Example Call:** ```sh curl -X POST --data '{ "jsonrpc": "2.0", "method": "proposervm.getCurrentEpoch", "params": {}, "id": 1 }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P/proposervm ``` **Example Response:** ```json { "jsonrpc": "2.0", "result": { "number": "56", "startTime":"1755802182", "pChainHeight": "21857141" }, "id": 1 } ``` # AI & LLM Integration (/docs/tooling/ai-llm) --- title: AI & LLM Integration description: Access Avalanche documentation programmatically for AI applications icon: Bot --- The Builder Hub provides AI-friendly access to documentation through standardized formats. Whether you're building a chatbot, using Claude/ChatGPT, or integrating with AI development tools, we offer multiple ways to access our docs. ## Endpoints Overview | Endpoint | Purpose | Best For | | :------- | :------ | :------- | | [`/llms.txt`](/docs/tooling/ai-llm/llms-txt#llmstxt) | Structured index of all docs | Content discovery | | [`/llms-full.txt`](/docs/tooling/ai-llm/llms-txt#llms-fulltxt) | Complete docs in one file | Full context loading | | [`/{path}.md`](/docs/tooling/ai-llm/llms-txt#individual-pages) | Markdown for any page | Single page retrieval | | [`/api/mcp`](/docs/tooling/ai-llm/mcp-server) | MCP server for search & retrieval | Dynamic AI tool access | ## Quick Start The fastest way to get started depends on your use case: Static endpoints for sitemap, full docs, and individual pages Dynamic search and retrieval via Model Context Protocol Rate limits, CORS policy, and privacy information ## Standards - [llms.txt](https://llmstxt.org/) - AI sitemap standard - [Model Context Protocol](https://modelcontextprotocol.io/) - Anthropic's standard for AI tool access - [JSON-RPC 2.0](https://www.jsonrpc.org/specification) - MCP server protocol # llms.txt Endpoints (/docs/tooling/ai-llm/llms-txt) --- title: llms.txt Endpoints description: 
Static endpoints for AI content discovery and retrieval --- ## llms.txt A structured markdown index following the [llms.txt standard](https://llmstxt.org/). Use this for content discovery. ``` https://build.avax.network/llms.txt ``` Returns organized sections (Documentation, Academy, Integrations, Blog) with links and descriptions. ## llms-full.txt All documentation content in a single markdown file for one-time context loading. ``` https://build.avax.network/llms-full.txt ``` Contains 1300+ pages. For models with limited context, use the [MCP server](/docs/tooling/ai-llm/mcp-server) or individual page endpoint instead. ## Individual Pages Append `.md` to any page URL to get processed markdown: ``` https://build.avax.network/docs/primary-network/overview.md https://build.avax.network/academy/blockchain-fundamentals/blockchain-intro.md https://build.avax.network/blog/your-first-l1.md https://build.avax.network/integrations/chainlink.md ``` Works with `/docs/`, `/academy/`, `/integrations/`, and `/blog/` paths. Returns clean markdown with JSX components stripped for optimal AI consumption. # MCP Server (/docs/tooling/ai-llm/mcp-server) --- title: MCP Server description: Search and retrieve Avalanche documentation dynamically via Model Context Protocol --- The [Model Context Protocol](https://modelcontextprotocol.io/) server enables AI systems to search and retrieve documentation dynamically. 
**Endpoint:** `https://build.avax.network/api/mcp` ## Tools | Tool | Purpose | | :--- | :------ | | `avalanche_docs_search` | Search docs by query with optional source filter | | `avalanche_docs_fetch` | Get a specific page by URL path | | `avalanche_docs_list_sections` | List all sections with page counts | ## Claude Code Setup Add the MCP server to your project: ```bash claude mcp add avalanche-docs --transport http https://build.avax.network/api/mcp ``` Or add to your `.claude/settings.json`: ```json { "mcpServers": { "avalanche-docs": { "transport": { "type": "http", "url": "https://build.avax.network/api/mcp" } } } } ``` ## Claude Desktop Setup Add to `~/Library/Application Support/Claude/claude_desktop_config.json`: ```json { "mcpServers": { "avalanche-docs": { "transport": { "type": "http", "url": "https://build.avax.network/api/mcp" } } } } ``` ## JSON-RPC Protocol The MCP server uses JSON-RPC 2.0 for communication: ```bash curl -X POST https://build.avax.network/api/mcp \ -H "Content-Type: application/json" \ -d '{"jsonrpc":"2.0","id":1,"method":"tools/call","params":{"name":"avalanche_docs_search","arguments":{"query":"create L1","limit":5}}}' ``` **Response format:** ```json { "jsonrpc": "2.0", "id": 1, "result": { "content": [ { "type": "text", "text": "[{\"title\":\"Create an L1\",\"url\":\"/docs/avalanche-l1s/create\",\"description\":\"...\",\"score\":45}]" } ] } } ``` ## Search Examples **Search all documentation:** ```bash curl -X POST https://build.avax.network/api/mcp \ -H "Content-Type: application/json" \ -d '{ "jsonrpc": "2.0", "id": 1, "method": "tools/call", "params": { "name": "avalanche_docs_search", "arguments": { "query": "smart contracts", "limit": 10 } } }' ``` **Filter by source:** ```bash # Search only academy content curl -X POST https://build.avax.network/api/mcp \ -H "Content-Type: application/json" \ -d '{ "jsonrpc": "2.0", "id": 2, "method": "tools/call", "params": { "name": "avalanche_docs_search", "arguments": { "query": 
"blockchain basics", "source": "academy", "limit": 5 } } }' ``` **Fetch specific page:** ```bash curl -X POST https://build.avax.network/api/mcp \ -H "Content-Type: application/json" \ -d '{ "jsonrpc": "2.0", "id": 3, "method": "tools/call", "params": { "name": "avalanche_docs_fetch", "arguments": { "url": "/docs/primary-network/overview" } } }' ``` # Security & Limits (/docs/tooling/ai-llm/security) --- title: Security & Limits description: Rate limiting, CORS policy, and privacy information for AI endpoints --- ## Rate Limiting - **60 requests per minute** per client (identified by origin or IP address) - 429 status code with Retry-After header when exceeded - RateLimit headers included in responses (Limit, Remaining, Reset) ## CORS Policy Browser requests must originate from: - `https://claude.ai` - `https://build.avax.network` - `http://localhost:3000` (development only) Non-browser MCP clients (no Origin header) are always allowed. ## Privacy We collect anonymized usage metrics including: - Tool names and invocation counts - Search result counts (not full query text) - Latency measurements - Client names (e.g., "claude-desktop") We do NOT log: - Full query text (truncated to 100 characters) - Document content - Raw IP addresses (hashed for rate limiting) ## Abuse Reporting Report security issues or abuse to: **security@avalabs.org** # AvalancheGo P-Chain RPC (/docs/rpcs/p-chain) --- title: "AvalancheGo P-Chain RPC" description: "This page is an overview of the P-Chain RPC associated with AvalancheGo." edit_url: https://github.com/ava-labs/avalanchego/edit/master/vms/platformvm/service.md --- The P-Chain API allows clients to interact with the [P-Chain](https://build.avax.network/docs/quick-start/primary-network#p-chain), which maintains Avalanche’s validator set and handles blockchain creation. ## Endpoint ``` /ext/bc/P ``` ## Format This API uses the `json 2.0` RPC format. 
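Every P-Chain method below follows the same pattern: a POST to `/ext/bc/P` carrying a JSON-RPC 2.0 envelope with `method` and `params`. A minimal client sketch (the helper names and the default local endpoint are assumptions, not part of the API):

```python
import json
import urllib.request

# Default local node endpoint (an assumption; adjust host/port as needed).
P_CHAIN_ENDPOINT = "http://127.0.0.1:9650/ext/bc/P"

def jsonrpc_payload(method, params=None, request_id=1):
    """Build the JSON-RPC 2.0 envelope every P-Chain method expects."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": method,
        "params": params or {},
    })

def call_p_chain(method, params=None):
    """POST a request to the node and return the JSON-RPC `result` field."""
    req = urllib.request.Request(
        P_CHAIN_ENDPOINT,
        data=jsonrpc_payload(method, params).encode(),
        headers={"content-type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    if "error" in body:
        raise RuntimeError(body["error"])
    return body["result"]
```

With a node running locally, `call_p_chain("platform.getHeight")` would return the same `result` object as the curl examples below.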
## Methods ### `platform.getBalance` Deprecated as of [**v1.9.12**](https://github.com/ava-labs/avalanchego/releases/tag/v1.9.12). Get the balance of AVAX controlled by a given address. **Signature:** ``` platform.getBalance({ addresses: []string }) -> { balances: string -> int, unlockeds: string -> int, lockedStakeables: string -> int, lockedNotStakeables: string -> int, utxoIDs: []{ txID: string, outputIndex: int } } ``` - `addresses` are the addresses to get the balance of. - `balances` is a map from assetID to the total balance. - `unlockeds` is a map from assetID to the unlocked balance. - `lockedStakeables` is a map from assetID to the locked stakeable balance. - `lockedNotStakeables` is a map from assetID to the locked and not stakeable balance. - `utxoIDs` are the IDs of the UTXOs that reference `addresses`. **Example Call:** ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" : 1, "method" :"platform.getBalance", "params" :{ "addresses":["P-custom18jma8ppw3nhx5r4ap8clazz0dps7rv5u9xde7p"] } }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P ``` **Example Response:** ```json { "jsonrpc": "2.0", "result": { "balance": "30000000000000000", "unlocked": "20000000000000000", "lockedStakeable": "10000000000000000", "lockedNotStakeable": "0", "balances": { "BUuypiq2wyuLMvyhzFXcPyxPMCgSp7eeDohhQRqTChoBjKziC": "30000000000000000" }, "unlockeds": { "BUuypiq2wyuLMvyhzFXcPyxPMCgSp7eeDohhQRqTChoBjKziC": "20000000000000000" }, "lockedStakeables": { "BUuypiq2wyuLMvyhzFXcPyxPMCgSp7eeDohhQRqTChoBjKziC": "10000000000000000" }, "lockedNotStakeables": {}, "utxoIDs": [ { "txID": "11111111111111111111111111111111LpoYY", "outputIndex": 1 }, { "txID": "11111111111111111111111111111111LpoYY", "outputIndex": 0 } ] }, "id": 1 } ``` ### `platform.getBlock` Get a block by its ID. **Signature:** ``` platform.getBlock({ blockID: string encoding: string // optional }) -> { block: string, encoding: string } ``` **Request:** - `blockID` is the block ID. It should be in cb58 format.
- `encoding` is the encoding format to use. Can be either `hex` or `json`. Defaults to `hex`. **Response:** - `block` is the block encoded to `encoding`. - `encoding` is the `encoding`. #### Hex Example **Example Call:** ```sh curl -X POST --data '{ "jsonrpc": "2.0", "method": "platform.getBlock", "params": { "blockID": "d7WYmb8VeZNHsny3EJCwMm6QA37s1EHwMxw1Y71V3FqPZ5EFG", "encoding": "hex" }, "id": 1 }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P ``` **Example Response:** ```json { "jsonrpc": "2.0", "result": { "block": "0x00000000000309473dc99a0851a29174d84e522da8ccb1a56ac23f7b0ba79f80acce34cf576900000000000f4241000000010000001200000001000000000000000000000000000000000000000000000000000000000000000000000000000000011c4c57e1bcb3c567f9f03caa75563502d1a21393173c06d9d79ea247b20e24800000000021e67317cbc4be2aeb00677ad6462778a8f52274b9d605df2591b23027a87dff000000050000000338e0465f0000000100000000000000000427d4b22a2a78bcddd456742caf91b56badbff985ee19aef14573e7343fd6520000000121e67317cbc4be2aeb00677ad6462778a8f52274b9d605df2591b23027a87dff000000070000000338d1041f0000000000000000000000010000000195a4467dd8f939554ea4e6501c08294386938cbf000000010000000900000001c79711c4b48dcde205b63603efef7c61773a0eb47efb503fcebe40d21962b7c25ebd734057400a12cce9cf99aceec8462923d5d91fffe1cb908372281ed738580119286dde", "encoding": "hex" }, "id": 1 } ``` #### JSON Example **Example Call:** ```sh curl -X POST --data '{ "jsonrpc": "2.0", "method": "platform.getBlock", "params": { "blockID": "d7WYmb8VeZNHsny3EJCwMm6QA37s1EHwMxw1Y71V3FqPZ5EFG", "encoding": "json" }, "id": 1 }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P ``` **Example Response:** ```json { "jsonrpc": "2.0", "result": { "block": { "parentID": "5615di9ytxujackzaXNrVuWQy5y8Yrt8chPCscMr5Ku9YxJ1S", "height": 1000001, "txs": [ { "unsignedTx": { "inputs": { "networkID": 1, "blockchainID": "11111111111111111111111111111111LpoYY", "outputs": [], "inputs": [ { "txID": 
"DTqiagiMFdqbNQ62V2Gt1GddTVLkKUk2caGr4pyza9hTtsfta", "outputIndex": 0, "assetID": "FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z", "fxID": "spdxUxVJQbX85MGxMHbKw1sHxMnSqJ3QBzDyDYEP3h6TLuxqQ", "input": { "amount": 13839124063, "signatureIndices": [0] } } ], "memo": "0x" }, "destinationChain": "2q9e4r6Mu3U68nU1fYjgbR6JvwrRx36CohpAX5UQxse55x1Q5", "exportedOutputs": [ { "assetID": "FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z", "fxID": "spdxUxVJQbX85MGxMHbKw1sHxMnSqJ3QBzDyDYEP3h6TLuxqQ", "output": { "addresses": [ "P-avax1jkjyvlwclyu42n4yuegpczpfgwrf8r9lyj0d3c" ], "amount": 13838124063, "locktime": 0, "threshold": 1 } } ] }, "credentials": [ { "signatures": [ "0xc79711c4b48dcde205b63603efef7c61773a0eb47efb503fcebe40d21962b7c25ebd734057400a12cce9cf99aceec8462923d5d91fffe1cb908372281ed7385801" ] } ] } ] }, "encoding": "json" }, "id": 1 } ``` ### `platform.getBlockByHeight` Get a block by its height. **Signature:** ``` platform.getBlockByHeight({ height: int encoding: string // optional }) -> { block: string, encoding: string } ``` **Request:** - `height` is the block height. - `encoding` is the encoding format to use. Can be either `hex` or `json`. Defaults to `hex`. **Response:** - `block` is the block encoded to `encoding`. - `encoding` is the `encoding`. 
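Because every `platform.getBlockByHeight` request has the same shape, payloads for a range of heights can be generated programmatically, e.g. for a simple block walker. A sketch (the helper name is hypothetical):

```python
import json

def block_by_height_payloads(start, end, encoding="json"):
    """One platform.getBlockByHeight request body per height in [start, end)."""
    return [
        json.dumps({
            "jsonrpc": "2.0",
            "id": i,
            "method": "platform.getBlockByHeight",
            "params": {"height": h, "encoding": encoding},
        })
        for i, h in enumerate(range(start, end), start=1)
    ]

# Three consecutive blocks starting at the example height.
payloads = block_by_height_payloads(1000001, 1000004)
```

Each payload can then be POSTed to `/ext/bc/P` exactly like the curl examples.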
#### Hex Example **Example Call:** ```sh curl -X POST --data '{ "jsonrpc": "2.0", "method": "platform.getBlockByHeight", "params": { "height": 1000001, "encoding": "hex" }, "id": 1 }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P ``` **Example Response:** ```json { "jsonrpc": "2.0", "result": { "block": "0x00000000000309473dc99a0851a29174d84e522da8ccb1a56ac23f7b0ba79f80acce34cf576900000000000f4241000000010000001200000001000000000000000000000000000000000000000000000000000000000000000000000000000000011c4c57e1bcb3c567f9f03caa75563502d1a21393173c06d9d79ea247b20e24800000000021e67317cbc4be2aeb00677ad6462778a8f52274b9d605df2591b23027a87dff000000050000000338e0465f0000000100000000000000000427d4b22a2a78bcddd456742caf91b56badbff985ee19aef14573e7343fd6520000000121e67317cbc4be2aeb00677ad6462778a8f52274b9d605df2591b23027a87dff000000070000000338d1041f0000000000000000000000010000000195a4467dd8f939554ea4e6501c08294386938cbf000000010000000900000001c79711c4b48dcde205b63603efef7c61773a0eb47efb503fcebe40d21962b7c25ebd734057400a12cce9cf99aceec8462923d5d91fffe1cb908372281ed738580119286dde", "encoding": "hex" }, "id": 1 } ``` #### JSON Example **Example Call:** ```sh curl -X POST --data '{ "jsonrpc": "2.0", "method": "platform.getBlockByHeight", "params": { "height": 1000001, "encoding": "json" }, "id": 1 }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P ``` **Example Response:** ```json { "jsonrpc": "2.0", "result": { "block": { "parentID": "5615di9ytxujackzaXNrVuWQy5y8Yrt8chPCscMr5Ku9YxJ1S", "height": 1000001, "txs": [ { "unsignedTx": { "inputs": { "networkID": 1, "blockchainID": "11111111111111111111111111111111LpoYY", "outputs": [], "inputs": [ { "txID": "DTqiagiMFdqbNQ62V2Gt1GddTVLkKUk2caGr4pyza9hTtsfta", "outputIndex": 0, "assetID": "FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z", "fxID": "spdxUxVJQbX85MGxMHbKw1sHxMnSqJ3QBzDyDYEP3h6TLuxqQ", "input": { "amount": 13839124063, "signatureIndices": [0] } } ], "memo": "0x" }, "destinationChain": 
"2q9e4r6Mu3U68nU1fYjgbR6JvwrRx36CohpAX5UQxse55x1Q5", "exportedOutputs": [ { "assetID": "FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z", "fxID": "spdxUxVJQbX85MGxMHbKw1sHxMnSqJ3QBzDyDYEP3h6TLuxqQ", "output": { "addresses": [ "P-avax1jkjyvlwclyu42n4yuegpczpfgwrf8r9lyj0d3c" ], "amount": 13838124063, "locktime": 0, "threshold": 1 } } ] }, "credentials": [ { "signatures": [ "0xc79711c4b48dcde205b63603efef7c61773a0eb47efb503fcebe40d21962b7c25ebd734057400a12cce9cf99aceec8462923d5d91fffe1cb908372281ed7385801" ] } ] } ] }, "encoding": "json" }, "id": 1 } ``` ### `platform.getBlockchains` Deprecated as of [**v1.9.12**](https://github.com/ava-labs/avalanchego/releases/tag/v1.9.12). Get all the blockchains that exist (excluding the P-Chain). **Signature:** ``` platform.getBlockchains() -> { blockchains: []{ id: string, name: string, subnetID: string, vmID: string } } ``` - `blockchains` is all of the blockchains that exist on the Avalanche network. - `name` is the human-readable name of this blockchain. - `id` is the blockchain’s ID. - `subnetID` is the ID of the Subnet that validates this blockchain. - `vmID` is the ID of the Virtual Machine the blockchain runs.
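Since each entry carries a `subnetID`, the returned list can be grouped client-side to see which Subnet validates which chains. A sketch using abridged entries shaped like this method's example response:

```python
from collections import defaultdict

# Abridged entries shaped like the platform.getBlockchains example response.
blockchains = [
    {"name": "X-Chain", "subnetID": "11111111111111111111111111111111LpoYY"},
    {"name": "C-Chain", "subnetID": "11111111111111111111111111111111LpoYY"},
    {"name": "My new timestamp",
     "subnetID": "2bRCr6B4MiEfSjidDwxDpdCyviwnfUVqB2HGwhm947w9YYqb7r"},
]

# Map each subnetID to the names of the blockchains it validates.
chains_by_subnet = defaultdict(list)
for chain in blockchains:
    chains_by_subnet[chain["subnetID"]].append(chain["name"])
```

On a live response, feed `result["blockchains"]` through the same loop.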
**Example Call:** ```sh curl -X POST --data '{ "jsonrpc": "2.0", "method": "platform.getBlockchains", "params": {}, "id": 1 }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P ``` **Example Response:** ```json { "jsonrpc": "2.0", "result": { "blockchains": [ { "id": "2oYMBNV4eNHyqk2fjjV5nVQLDbtmNJzq5s3qs3Lo6ftnC6FByM", "name": "X-Chain", "subnetID": "11111111111111111111111111111111LpoYY", "vmID": "jvYyfQTxGMJLuGWa55kdP2p2zSUYsQ5Raupu4TW34ZAUBAbtq" }, { "id": "2q9e4r6Mu3U68nU1fYjgbR6JvwrRx36CohpAX5UQxse55x1Q5", "name": "C-Chain", "subnetID": "11111111111111111111111111111111LpoYY", "vmID": "mgj786NP7uDwBCcq6YwThhaN8FLyybkCa4zBWTQbNgmK6k9A6" }, { "id": "CqhF97NNugqYLiGaQJ2xckfmkEr8uNeGG5TQbyGcgnZ5ahQwa", "name": "Simple DAG Payments", "subnetID": "11111111111111111111111111111111LpoYY", "vmID": "sqjdyTKUSrQs1YmKDTUbdUhdstSdtRTGRbUn8sqK8B6pkZkz1" }, { "id": "VcqKNBJsYanhVFxGyQE5CyNVYxL3ZFD7cnKptKWeVikJKQkjv", "name": "Simple Chain Payments", "subnetID": "11111111111111111111111111111111LpoYY", "vmID": "sqjchUjzDqDfBPGjfQq2tXW1UCwZTyvzAWHsNzF2cb1eVHt6w" }, { "id": "2SMYrx4Dj6QqCEA3WjnUTYEFSnpqVTwyV3GPNgQqQZbBbFgoJX", "name": "Simple Timestamp Server", "subnetID": "11111111111111111111111111111111LpoYY", "vmID": "tGas3T58KzdjLHhBDMnH2TvrddhqTji5iZAMZ3RXs2NLpSnhH" }, { "id": "KDYHHKjM4yTJTT8H8qPs5KXzE6gQH5TZrmP1qVr1P6qECj3XN", "name": "My new timestamp", "subnetID": "2bRCr6B4MiEfSjidDwxDpdCyviwnfUVqB2HGwhm947w9YYqb7r", "vmID": "tGas3T58KzdjLHhBDMnH2TvrddhqTji5iZAMZ3RXs2NLpSnhH" }, { "id": "2TtHFqEAAJ6b33dromYMqfgavGPF3iCpdG3hwNMiart2aB5QHi", "name": "My new AVM", "subnetID": "2bRCr6B4MiEfSjidDwxDpdCyviwnfUVqB2HGwhm947w9YYqb7r", "vmID": "jvYyfQTxGMJLuGWa55kdP2p2zSUYsQ5Raupu4TW34ZAUBAbtq" } ] }, "id": 1 } ``` ### `platform.getBlockchainStatus` Get the status of a blockchain. **Signature:** ``` platform.getBlockchainStatus( { blockchainID: string } ) -> {status: string} ``` `status` is one of: - `Validating`: The blockchain is being validated by this node. 
- `Created`: The blockchain exists but isn’t being validated by this node. - `Preferred`: The blockchain was proposed to be created and is likely to be created but the transaction isn’t yet accepted. - `Syncing`: This node is participating in this blockchain as a non-validating node. - `Unknown`: The blockchain either wasn’t proposed or the proposal to create it isn’t preferred. The proposal may be resubmitted. **Example Call:** ```sh curl -X POST --data '{ "jsonrpc": "2.0", "method": "platform.getBlockchainStatus", "params":{ "blockchainID":"2NbS4dwGaf2p1MaXb65PrkZdXRwmSX4ZzGnUu7jm3aykgThuZE" }, "id": 1 }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P ``` **Example Response:** ```json { "jsonrpc": "2.0", "result": { "status": "Created" }, "id": 1 } ``` ### `platform.getCurrentSupply` Returns an upper bound on the number of tokens that exist and can be staked on the requested Subnet. This is an upper bound because it does not account for burnt tokens, including transaction fees. **Signature:** ``` platform.getCurrentSupply ({ subnetID: string // optional }) -> { supply: int } ``` - `supply` is an upper bound on the number of tokens that exist. **Example Call:** ```sh curl -X POST --data '{ "jsonrpc": "2.0", "method": "platform.getCurrentSupply", "params": { "subnetID": "11111111111111111111111111111111LpoYY" }, "id": 1 }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P ``` **Example Response:** ```json { "jsonrpc": "2.0", "result": { "supply": "365865167637779183" }, "id": 1 } ``` The response in this example indicates that AVAX’s supply is at most 365.865 million. ### `platform.getCurrentValidators` List the current validators of the given Subnet.
**Signature:** ``` platform.getCurrentValidators({ subnetID: string, // optional nodeIDs: string[], // optional }) -> { validators: []{ txID: string, startTime: string, endTime: string, nodeID: string, weight: string, validationID: string, publicKey: string, remainingBalanceOwner: { locktime: string, threshold: string, addresses: string[] }, deactivationOwner: { locktime: string, threshold: string, addresses: string[] }, minNonce: string, balance: string, validationRewardOwner: { locktime: string, threshold: string, addresses: string[] }, delegationRewardOwner: { locktime: string, threshold: string, addresses: string[] }, potentialReward: string, delegationFee: string, uptime: string, connected: bool, signer: { publicKey: string, proofOfPossession: string }, delegatorCount: string, delegatorWeight: string, delegators: []{ txID: string, startTime: string, endTime: string, weight: string, nodeID: string, rewardOwner: { locktime: string, threshold: string, addresses: string[] }, potentialReward: string, } } } ``` - `subnetID` is the Subnet whose current validators are returned. If omitted, returns the current validators of the Primary Network. - `nodeIDs` is a list of the NodeIDs of current validators to request. If omitted, all current validators are returned. If a specified NodeID is not in the set of current validators, it will not be included in the response. - `validators` can include different fields based on the subnet type (L1, PoA Subnets, the Primary Network): - `txID` is the validator transaction. - `startTime` is the Unix time when the validator starts validating the Subnet. - `endTime` is the Unix time when the validator stops validating the Subnet. Omitted if `subnetID` is an L1 Subnet. - `nodeID` is the validator’s node ID. - `weight` is the validator’s weight (stake) when sampling validators. - `validationID` is the ID of the L1 validator registration transaction. Omitted if `subnetID` is not an L1 Subnet.
- `publicKey` is the compressed BLS public key of the validator. Omitted if `subnetID` is not an L1 Subnet. - `remainingBalanceOwner` is an `OutputOwners` which includes a `locktime`, `threshold`, and an array of `addresses`. It specifies the owner that will receive any withdrawn balance. Omitted if `subnetID` is not an L1 Subnet. - `deactivationOwner` is an `OutputOwners` which includes a `locktime`, `threshold`, and an array of `addresses`. It specifies the owner that can withdraw the balance. Omitted if `subnetID` is not an L1 Subnet. - `minNonce` is the minimum nonce that must be included in a `SetL1ValidatorWeightTx` for the transaction to be valid. Omitted if `subnetID` is not an L1 Subnet. - `balance` is the current remaining balance that can be used to pay the validator's continuous fee. Omitted if `subnetID` is not an L1 Subnet. - `validationRewardOwner` is an `OutputOwners` output which includes a `locktime`, `threshold`, and an array of `addresses`. It specifies the owner of the potential reward earned from staking. Omitted if `subnetID` is not the Primary Network. - `delegationRewardOwner` is an `OutputOwners` output which includes a `locktime`, `threshold`, and an array of `addresses`. It specifies the owner of the potential reward earned from delegations. Omitted if `subnetID` is not the Primary Network. - `potentialReward` is the potential reward earned from staking. Omitted if `subnetID` is not the Primary Network. - `delegationFee` is the percent fee this validator charges when others delegate stake to them. Omitted if `subnetID` is not the Primary Network. - `uptime` is the % of time the queried node has reported the peer as online and validating the Subnet. Omitted if `subnetID` is not the Primary Network. - `connected` indicates whether the node is connected and tracks the Subnet. Omitted if `subnetID` is not the Primary Network. - `signer` is the node's BLS public key and proof of possession. Omitted if the validator doesn't have a BLS public key.
Omitted if `subnetID` is not the Primary Network. - `delegatorCount` is the number of delegators on this validator. Omitted if `subnetID` is not the Primary Network. - `delegatorWeight` is the total weight of delegators on this validator. Omitted if `subnetID` is not the Primary Network. - `delegators` is the list of delegators to this validator. Omitted if `subnetID` is not the Primary Network. Omitted unless `nodeIDs` specifies a single NodeID. - `txID` is the delegator transaction. - `startTime` is the Unix time when the delegator started. - `endTime` is the Unix time when the delegator stops. - `weight` is the amount of nAVAX this delegator staked. - `nodeID` is the validating node’s node ID. - `rewardOwner` is an `OutputOwners` output which includes a `locktime`, `threshold`, and an array of `addresses`. - `potentialReward` is the potential reward earned from staking. Note: An L1 Subnet can include both initial legacy PoA validators (from before the L1 conversion) and L1 validators. The response will include both types of validators.
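Since a response for an L1 Subnet can mix legacy and L1 validators, a client can separate the two kinds by checking for the `validationID` field, which per the field notes above is only present on L1 validator entries. A client-side sketch (the sample entries are illustrative, not real IDs):

```python
def split_validators(validators):
    """Separate L1 validators from legacy (pre-conversion) validators.

    Only L1 validator entries carry a `validationID`; this is a
    client-side heuristic based on the documented fields, not an API flag.
    """
    l1, legacy = [], []
    for v in validators:
        (l1 if "validationID" in v else legacy).append(v)
    return l1, legacy

# Illustrative entries, abridged to the fields checked here.
sample = [
    {"nodeID": "NodeID-L1", "validationID": "2wTscvX...", "weight": "20"},
    {"nodeID": "NodeID-legacy", "startTime": "1600368632", "weight": "100"},
]
l1_validators, legacy_validators = split_validators(sample)
```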
**Example Call:** ```sh curl -X POST --data '{ "jsonrpc": "2.0", "method": "platform.getCurrentValidators", "params": { "nodeIDs": ["NodeID-5mb46qkSBj81k9g9e4VFjGGSbaaSLFRzD"] }, "id": 1 }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P ``` **Example Response (Primary Network):** ```json { "jsonrpc": "2.0", "result": { "validators": [ { "txID": "2NNkpYTGfTFLSGXJcHtVv6drwVU2cczhmjK2uhvwDyxwsjzZMm", "startTime": "1600368632", "endTime": "1602960455", "weight": "2000000000000", "nodeID": "NodeID-5mb46qkSBj81k9g9e4VFjGGSbaaSLFRzD", "validationRewardOwner": { "locktime": "0", "threshold": "1", "addresses": ["P-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5"] }, "delegationRewardOwner": { "locktime": "0", "threshold": "1", "addresses": ["P-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5"] }, "potentialReward": "117431493426", "delegationFee": "10.0000", "uptime": "0.0000", "connected": false, "delegatorCount": "1", "delegatorWeight": "25000000000", "delegators": [ { "txID": "Bbai8nzGVcyn2VmeYcbS74zfjJLjDacGNVuzuvAQkHn1uWfoV", "startTime": "1600368523", "endTime": "1602960342", "weight": "25000000000", "nodeID": "NodeID-5mb46qkSBj81k9g9e4VFjGGSbaaSLFRzD", "rewardOwner": { "locktime": "0", "threshold": "1", "addresses": ["P-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5"] }, "potentialReward": "11743144774" } ] } ] }, "id": 1 } ``` **Example Response (L1):** ```json { "jsonrpc": "2.0", "result": { "validators": [ { "validationID": "2wTscvX3JUsMbZHFRd9t8Ywz2q9j2BmETg8cTvgUHgawjbSvZX", "nodeID": "NodeID-5mb46qkSBj81k9g9e4VFjGGSbaaSLFRzD", "publicKey": "0x91951771ff32b1a985a4936592bce8512a986353c4c2eb5a0f12dbb76bda3a0a0c975e26413ff44c0ee9d8d689eff8ed", "remainingBalanceOwner": { "locktime": "0", "threshold": "1", "addresses": [ "P-fuji1ywzvrftfqexh5g6qa9zyrytj6pqdfetza2hqln" ] }, "deactivationOwner": { "locktime": "0", "threshold": "1", "addresses": [ "P-fuji1ywzvrftfqexh5g6qa9zyrytj6pqdfetza2hqln" ] }, "startTime": "1734034648", "weight": "20", "minNonce": "0", "balance": 
"8780477952" } ] }, "id": 1 } ``` ### `platform.getFeeConfig` Returns the dynamic fee configuration of the P-chain. **Signature:** ``` platform.getFeeConfig() -> { weights: []uint64, maxCapacity: uint64, maxPerSecond: uint64, targetPerSecond: uint64, minPrice: uint64, excessConversionConstant: uint64 } ``` - `weights` are the weights used to merge fee dimensions into a single gas value - `maxCapacity` is the amount of gas the chain is allowed to store for future use - `maxPerSecond` is the amount of gas the chain is allowed to consume per second - `targetPerSecond` is the target amount of gas the chain should consume per second to keep fees stable - `minPrice` is the minimum price per unit of gas - `excessConversionConstant` is used to convert excess gas to a gas price **Example Call:** ```sh curl -X POST --data '{ "jsonrpc": "2.0", "method": "platform.getFeeConfig", "params": {}, "id": 1 }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P ``` **Example Response:** ```json { "jsonrpc": "2.0", "result": { "weights": [1, 1000, 1000, 4], "maxCapacity": 1000000, "maxPerSecond": 100000, "targetPerSecond": 50000, "minPrice": 1, "excessConversionConstant": 2164043 }, "id": 1 } ``` ### `platform.getFeeState` Returns the current fee state of the P-chain. **Signature:** ``` platform.getFeeState() -> { capacity: uint64, excess: uint64, price: uint64, timestamp: string } ``` **Example Call:** ```sh curl -X POST --data '{ "jsonrpc": "2.0", "method": "platform.getFeeState", "params": {}, "id": 1 }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P ``` **Example Response:** ```json { "jsonrpc": "2.0", "result": { "capacity": 973044, "excess": 26956, "price": 1, "timestamp": "2024-12-16T17:19:07Z" }, "id": 1 } ``` ### `platform.getHeight` Returns the height of the last accepted block.
**Signature:** ``` platform.getHeight() -> { height: int, } ``` **Example Call:** ```sh curl -X POST --data '{ "jsonrpc": "2.0", "method": "platform.getHeight", "params": {}, "id": 1 }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P ``` **Example Response:** ```json { "jsonrpc": "2.0", "result": { "height": "56" }, "id": 1 } ``` ### `platform.getL1Validator` Returns a current L1 validator. **Signature:** ``` platform.getL1Validator({ validationID: string, }) -> { validationID: string, subnetID: string, nodeID: string, publicKey: string, remainingBalanceOwner: { locktime: string, threshold: string, addresses: string[] }, deactivationOwner: { locktime: string, threshold: string, addresses: string[] }, startTime: string, weight: string, minNonce: string, balance: string, height: string } ``` - `validationID` is the ID of the L1 validator registration transaction. - `subnetID` is the L1 this validator is validating. - `nodeID` is the node ID of the validator. - `publicKey` is the compressed BLS public key of the validator. - `remainingBalanceOwner` is an `OutputOwners` which includes a `locktime`, `threshold`, and an array of `addresses`. It specifies the owner that will receive any withdrawn balance. - `deactivationOwner` is an `OutputOwners` which includes a `locktime`, `threshold`, and an array of `addresses`. It specifies the owner that can withdraw the balance. - `startTime` is the Unix timestamp, in seconds, of when this validator was added to the validator set. - `weight` is the weight of this validator used for consensus voting and ICM. - `minNonce` is the minimum nonce that must be included in a `SetL1ValidatorWeightTx` for the transaction to be valid. - `balance` is the current remaining balance that can be used to pay the validator's continuous fee. - `height` is the height of the last accepted block.
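The `balance` field is returned as a string, denominated in nAVAX like other P-Chain amounts (1 AVAX = 10^9 nAVAX). A small conversion sketch, using the balance from the example response that follows:

```python
NANO_AVAX_PER_AVAX = 1_000_000_000  # 1 AVAX = 10^9 nAVAX

def navax_to_avax(navax):
    """Convert an nAVAX amount (the API returns strings) to AVAX."""
    return int(navax) / NANO_AVAX_PER_AVAX

# "balance": "1000000000" from the example response is 1 AVAX.
remaining_avax = navax_to_avax("1000000000")
```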
**Example Call:** ```sh curl -X POST --data '{ "jsonrpc": "2.0", "method": "platform.getL1Validator", "params": { "validationID": "9FAftNgNBrzHUMMApsSyV6RcFiL9UmCbvsCu28xdLV2mQ7CMo" }, "id": 1 }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P ``` **Example Response:** ```json { "jsonrpc": "2.0", "result": { "subnetID": "2DeHa7Qb6sufPkmQcFWG2uCd4pBPv9WB6dkzroiMQhd1NSRtof", "nodeID": "NodeID-7Xhw2mDxuDS44j42TCB6U5579esbSt3Lg", "validationID": "9FAftNgNBrzHUMMApsSyV6RcFiL9UmCbvsCu28xdLV2mQ7CMo", "publicKey": "0x900c9b119b5c82d781d4b49be78c3fc7ae65f2b435b7ed9e3a8b9a03e475edff86d8a64827fec8db23a6f236afbf127d", "remainingBalanceOwner": { "locktime": "0", "threshold": "0", "addresses": [] }, "deactivationOwner": { "locktime": "0", "threshold": "0", "addresses": [] }, "startTime": "1731445206", "weight": "49463", "minNonce": "0", "balance": "1000000000", "height": "3" }, "id": 1 } ``` ### `platform.getProposedHeight` Returns this node's current proposer VM height. **Signature:** ``` platform.getProposedHeight() -> { height: int, } ``` **Example Call:** ```sh curl -X POST --data '{ "jsonrpc": "2.0", "method": "platform.getProposedHeight", "params": {}, "id": 1 }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P ``` **Example Response:** ```json { "jsonrpc": "2.0", "result": { "height": "56" }, "id": 1 } ``` ### `platform.getMinStake` Get the minimum amount of tokens required to validate the requested Subnet and the minimum amount of tokens that can be delegated.
**Signature:** ``` platform.getMinStake({ subnetID: string // optional }) -> { minValidatorStake : uint64, minDelegatorStake : uint64 } ``` **Example Call:** ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"platform.getMinStake", "params": { "subnetID":"11111111111111111111111111111111LpoYY" } }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P ``` **Example Response:** ```json { "jsonrpc": "2.0", "result": { "minValidatorStake": "2000000000000", "minDelegatorStake": "25000000000" }, "id": 1 } ``` ### `platform.getRewardUTXOs` Deprecated as of [**v1.9.12**](https://github.com/ava-labs/avalanchego/releases/tag/v1.9.12). Returns the UTXOs that were rewarded after the provided transaction's staking or delegation period ended. **Signature:** ``` platform.getRewardUTXOs({ txID: string, encoding: string // optional }) -> { numFetched: integer, utxos: []string, encoding: string } ``` - `txID` is the ID of the staking or delegating transaction - `numFetched` is the number of returned UTXOs - `utxos` is an array of encoded reward UTXOs - `encoding` specifies the format for the returned UTXOs. Can only be `hex` when a value is provided.
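As a small consistency check on the fields above, `numFetched` (a string) counts the entries in `utxos`; a sketch with a hypothetical, truncated response:

```python
# Hypothetical platform.getRewardUTXOs `result`; the UTXO hex strings are
# truncated placeholders, not real serialized UTXOs.
result = {
    "numFetched": "2",
    "utxos": ["0x0000a195...", "0x0000ae8b..."],
    "encoding": "hex",
}

# numFetched is returned as a string and should match the number of UTXOs.
assert int(result["numFetched"]) == len(result["utxos"])
```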
**Example Call:** ```sh curl -X POST --data '{ "jsonrpc": "2.0", "method": "platform.getRewardUTXOs", "params": { "txID": "2nmH8LithVbdjaXsxVQCQfXtzN9hBbmebrsaEYnLM9T32Uy2Y5" }, "id": 1 }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P ``` **Example Response:** ```json { "jsonrpc": "2.0", "result": { "numFetched": "2", "utxos": [ "0x0000a195046108a85e60f7a864bb567745a37f50c6af282103e47cc62f036cee404700000000345aa98e8a990f4101e2268fab4c4e1f731c8dfbcffa3a77978686e6390d624f000000070000000000000001000000000000000000000001000000018ba98dabaebcd83056799841cfbc567d8b10f216c1f01765", "0x0000ae8b1b94444eed8de9a81b1222f00f1b4133330add23d8ac288bffa98b85271100000000345aa98e8a990f4101e2268fab4c4e1f731c8dfbcffa3a77978686e6390d624f000000070000000000000001000000000000000000000001000000018ba98dabaebcd83056799841cfbc567d8b10f216473d042a" ], "encoding": "hex" }, "id": 1 } ``` ### `platform.getStake` Deprecated as of [**v1.9.12**](https://github.com/ava-labs/avalanchego/releases/tag/v1.9.12). Get the amount of nAVAX staked by a set of addresses. The amount returned does not include staking rewards. **Signature:** ``` platform.getStake({ addresses: []string, validatorsOnly: true or false }) -> { stakeds: string -> int, stakedOutputs: []string, encoding: string } ``` - `addresses` are the addresses to get information about. - `validatorsOnly` can be either `true` or `false`. If `true`, will skip checking delegators for stake. - `stakeds` is a map from assetID to the amount staked by addresses provided. - `stakedOutputs` are the string representation of staked outputs. - `encoding` specifies the format for the returned outputs. 
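Because `stakeds` maps assetIDs to string amounts, totaling the stake for a set of addresses takes a small conversion step; a sketch using the single entry from the example response that follows:

```python
# Hypothetical parsed `result` of a platform.getStake call
# (entry taken from the example response; amounts are nAVAX strings).
result = {
    "stakeds": {"FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z": "6500000000000"}
}

# Sum the staked amounts (in nAVAX) across all assetIDs.
total_navax = sum(int(amount) for amount in result["stakeds"].values())
total_avax = total_navax / 1_000_000_000  # 1 AVAX = 10^9 nAVAX
```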
**Example Call:** ```sh curl -X POST --data '{ "jsonrpc": "2.0", "method": "platform.getStake", "params": { "addresses": [ "P-avax1pmgmagjcljjzuz2ve339dx82khm7q8getlegte" ], "validatorsOnly": true }, "id": 1 }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P ``` **Example Response:** ```json { "jsonrpc": "2.0", "result": { "staked": "6500000000000", "stakeds": { "FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z": "6500000000000" }, "stakedOutputs": [ "0x000021e67317cbc4be2aeb00677ad6462778a8f52274b9d605df2591b23027a87dff00000007000005e96630e800000000000000000000000001000000011f1c933f38da6ba0ba46f8c1b0a7040a9a991a80dd338ed1" ], "encoding": "hex" }, "id": 1 } ``` ### `platform.getStakingAssetID` Retrieve an assetID for a Subnet’s staking asset. **Signature:** ``` platform.getStakingAssetID({ subnetID: string // optional }) -> { assetID: string } ``` - `subnetID` is the Subnet whose assetID is requested. - `assetID` is the assetID for a Subnet’s staking asset. **Example Call:** ```sh curl -X POST --data '{ "jsonrpc": "2.0", "method": "platform.getStakingAssetID", "params": { "subnetID": "11111111111111111111111111111111LpoYY" }, "id": 1 }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P ``` **Example Response:** ```json { "jsonrpc": "2.0", "result": { "assetID": "2fombhL7aGPwj3KH4bfrmJwW6PVnMobf9Y2fn9GwxiAAJyFDbe" }, "id": 1 } ``` The AssetID for AVAX differs depending on the network you are on. Mainnet: FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z Testnet: U8iRqJoiJm8xZHAacmvYyZVwqQx6uDNtQeP3CQ6fcgQk3JqnK ### `platform.getSubnet` Get owners and info about the Subnet or L1. **Signature:** ``` platform.getSubnet({ subnetID: string }) -> { isPermissioned: bool, controlKeys: []string, threshold: string, locktime: string, subnetTransformationTxID: string, conversionID: string, managerChainID: string, managerAddress: string } ``` - `subnetID` is the ID of the Subnet to get information about. If omitted, the call fails.
- `threshold` signatures from addresses in `controlKeys` are needed to make changes to a permissioned subnet. If the Subnet is not a PoA Subnet, then `threshold` will be `0` and `controlKeys` will be empty. - Changes cannot be made to the Subnet until `locktime` is in the past. - `subnetTransformationTxID` is the ID of the transaction that changed the subnet into an elastic one, if it exists. - `conversionID` is the ID of the conversion from a permissioned Subnet into an L1, if it exists. - `managerChainID` is the ChainID that has the ability to modify this L1's validator set, if it exists. - `managerAddress` is the address that has the ability to modify this L1's validator set, if it exists. **Example Call:** ```sh curl -X POST --data '{ "jsonrpc": "2.0", "method": "platform.getSubnet", "params": {"subnetID":"Vz2ArUpigHt7fyE79uF3gAXvTPLJi2LGgZoMpgNPHowUZJxBb"}, "id": 1 }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P ``` **Example Response:** ```json { "jsonrpc": "2.0", "result": { "isPermissioned": true, "controlKeys": [ "P-fuji1ztvstx6naeg6aarfd047fzppdt8v4gsah88e0c", "P-fuji193kvt4grqewv6ce2x59wnhydr88xwdgfcedyr3" ], "threshold": "1", "locktime": "0", "subnetTransformationTxID": "11111111111111111111111111111111LpoYY", "conversionID": "11111111111111111111111111111111LpoYY", "managerChainID": "11111111111111111111111111111111LpoYY", "managerAddress": null }, "id": 1 } ``` ### `platform.getSubnets` Deprecated as of [**v1.9.12**](https://github.com/ava-labs/avalanchego/releases/tag/v1.9.12). Get info about the Subnets. **Signature:** ``` platform.getSubnets({ ids: []string }) -> { subnets: []{ id: string, controlKeys: []string, threshold: string } } ``` - `ids` are the IDs of the Subnets to get information about. If omitted, gets information about all Subnets. - `id` is the Subnet’s ID. - `threshold` signatures from addresses in `controlKeys` are needed to add a validator to the Subnet.
If the Subnet is not a PoA Subnet, then `threshold` will be `0` and `controlKeys` will be empty. **Example Call:** ```sh curl -X POST --data '{ "jsonrpc": "2.0", "method": "platform.getSubnets", "params": {"ids":["hW8Ma7dLMA7o4xmJf3AXBbo17bXzE7xnThUd3ypM4VAWo1sNJ"]}, "id": 1 }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P ``` **Example Response:** ```json { "jsonrpc": "2.0", "result": { "subnets": [ { "id": "hW8Ma7dLMA7o4xmJf3AXBbo17bXzE7xnThUd3ypM4VAWo1sNJ", "controlKeys": [ "KNjXsaA1sZsaKCD1cd85YXauDuxshTes2", "Aiz4eEt5xv9t4NCnAWaQJFNz5ABqLtJkR" ], "threshold": "2" } ] }, "id": 1 } ``` ### `platform.getTimestamp` Get the current P-Chain timestamp. **Signature:** ``` platform.getTimestamp() -> {time: string} ``` **Example Call:** ```sh curl -X POST --data '{ "jsonrpc": "2.0", "method": "platform.getTimestamp", "params": {}, "id": 1 }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P ``` **Example Response:** ```json { "jsonrpc": "2.0", "result": { "timestamp": "2021-09-07T00:00:00-04:00" }, "id": 1 } ``` ### `platform.getTotalStake` Get the total amount of tokens staked on the requested Subnet.
**Signature:** ``` platform.getTotalStake({ subnetID: string }) -> { stake: int weight: int } ``` #### Primary Network Example **Example Call:** ```sh curl -X POST --data '{ "jsonrpc": "2.0", "method": "platform.getTotalStake", "params": { "subnetID": "11111111111111111111111111111111LpoYY" }, "id": 1 }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P ``` **Example Response:** ```json { "jsonrpc": "2.0", "result": { "stake": "279825917679866811", "weight": "279825917679866811" }, "id": 1 } ``` #### Subnet Example **Example Call:** ```sh curl -X POST --data '{ "jsonrpc": "2.0", "method": "platform.getTotalStake", "params": { "subnetID": "2bRCr6B4MiEfSjidDwxDpdCyviwnfUVqB2HGwhm947w9YYqb7r" }, "id": 1 }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P ``` **Example Response:** ```json { "jsonrpc": "2.0", "result": { "weight": "100000" }, "id": 1 } ``` ### `platform.getTx` Gets a transaction by its ID. Optional `encoding` parameter to specify the format for the returned transaction. Can be either `hex` or `json`. Defaults to `hex`.
**Signature:** ``` platform.getTx({ txID: string, encoding: string // optional }) -> { tx: string, encoding: string, } ``` **Example Call:** ```sh curl -X POST --data '{ "jsonrpc": "2.0", "method": "platform.getTx", "params": { "txID":"28KVjSw5h3XKGuNpJXWY74EdnGq4TUWvCgEtJPymgQTvudiugb", "encoding": "json" }, "id": 1 }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P ``` **Example Response:** ```json { "jsonrpc": "2.0", "result": { "tx": { "unsignedTx": { "networkID": 1, "blockchainID": "11111111111111111111111111111111LpoYY", "outputs": [], "inputs": [ { "txID": "NXNJHKeaJyjjWVSq341t6LGQP5UNz796o1crpHPByv1TKp9ZP", "outputIndex": 0, "assetID": "FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z", "fxID": "spdxUxVJQbX85MGxMHbKw1sHxMnSqJ3QBzDyDYEP3h6TLuxqQ", "input": { "amount": 20824279595, "signatureIndices": [0] } }, { "txID": "2ahK5SzD8iqi5KBqpKfxrnWtrEoVwQCqJsMoB9kvChCaHgAQC9", "outputIndex": 1, "assetID": "FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z", "fxID": "spdxUxVJQbX85MGxMHbKw1sHxMnSqJ3QBzDyDYEP3h6TLuxqQ", "input": { "amount": 28119890783, "signatureIndices": [0] } } ], "memo": "0x", "validator": { "nodeID": "NodeID-VT3YhgFaWEzy4Ap937qMeNEDscCammzG", "start": 1682945406, "end": 1684155006, "weight": 48944170378 }, "stake": [ { "assetID": "FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z", "fxID": "spdxUxVJQbX85MGxMHbKw1sHxMnSqJ3QBzDyDYEP3h6TLuxqQ", "output": { "addresses": ["P-avax1tnuesf6cqwnjw7fxjyk7lhch0vhf0v95wj5jvy"], "amount": 48944170378, "locktime": 0, "threshold": 1 } } ], "rewardsOwner": { "addresses": ["P-avax19zfygxaf59stehzedhxjesads0p5jdvfeedal0"], "locktime": 0, "threshold": 1 } }, "credentials": [ { "signatures": [ "0x6954e90b98437646fde0c1d54c12190fc23ae5e319c4d95dda56b53b4a23e43825251289cdc3728f1f1e0d48eac20e5c8f097baa9b49ea8a3cb6a41bb272d16601" ] }, { "signatures": [ "0x6954e90b98437646fde0c1d54c12190fc23ae5e319c4d95dda56b53b4a23e43825251289cdc3728f1f1e0d48eac20e5c8f097baa9b49ea8a3cb6a41bb272d16601" ] } ], "id": 
"28KVjSw5h3XKGuNpJXWY74EdnGq4TUWvCgEtJPymgQTvudiugb" }, "encoding": "json" }, "id": 1 } ``` ### `platform.getTxStatus` Gets a transaction’s status by its ID. If the transaction was dropped, the response will include a `reason` field explaining why the transaction was dropped. **Signature:** ``` platform.getTxStatus({ txID: string }) -> { status: string } ``` `status` is one of: - `Committed`: The transaction is (or will be) accepted by every node - `Processing`: The transaction is being voted on by this node - `Dropped`: The transaction will never be accepted by any node in the network, check `reason` field for more information - `Unknown`: The transaction hasn’t been seen by this node **Example Call:** ```sh curl -X POST --data '{ "jsonrpc": "2.0", "method": "platform.getTxStatus", "params": { "txID":"TAG9Ns1sa723mZy1GSoGqWipK6Mvpaj7CAswVJGM6MkVJDF9Q" }, "id": 1 }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P ``` **Example Response:** ```json { "jsonrpc": "2.0", "result": { "status": "Committed" }, "id": 1 } ``` ### `platform.getUTXOs` Gets the UTXOs that reference a given set of addresses. **Signature:** ``` platform.getUTXOs( { addresses: []string, limit: int, // optional startIndex: { // optional address: string, utxo: string }, sourceChain: string, // optional encoding: string, // optional }, ) -> { numFetched: string, utxos: []string, endIndex: { address: string, utxo: string }, encoding: string, } ``` - `utxos` is a list of UTXOs such that each UTXO references at least one address in `addresses`. - At most `limit` UTXOs are returned. If `limit` is omitted or greater than 1024, it is set to 1024. - This method supports pagination. `endIndex` denotes the last UTXO returned. To get the next set of UTXOs, use the value of `endIndex` as `startIndex` in the next call. - If `startIndex` is omitted, will fetch all UTXOs up to `limit`.
- When using pagination (that is when `startIndex` is provided), UTXOs are not guaranteed to be unique across multiple calls. That is, a UTXO may appear in the result of the first call, and then again in the second call. - When using pagination, consistency is not guaranteed across multiple calls. That is, the UTXO set of the addresses may have changed between calls. - `encoding` specifies the format for the returned UTXOs. Can only be `hex` when a value is provided. #### **Example** Suppose we want all UTXOs that reference at least one of `P-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5` and `P-avax1d09qn852zcy03sfc9hay2llmn9hsgnw4tp3dv6`. ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"platform.getUTXOs", "params" :{ "addresses":["P-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5", "P-avax1d09qn852zcy03sfc9hay2llmn9hsgnw4tp3dv6"], "limit":5, "encoding": "hex" } }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P ``` This gives response: ```json { "jsonrpc": "2.0", "result": { "numFetched": "5", "utxos": [ "0x0000a195046108a85e60f7a864bb567745a37f50c6af282103e47cc62f036cee404700000000345aa98e8a990f4101e2268fab4c4e1f731c8dfbcffa3a77978686e6390d624f000000070000000000000001000000000000000000000001000000018ba98dabaebcd83056799841cfbc567d8b10f216c1f01765", "0x0000ae8b1b94444eed8de9a81b1222f00f1b4133330add23d8ac288bffa98b85271100000000345aa98e8a990f4101e2268fab4c4e1f731c8dfbcffa3a77978686e6390d624f000000070000000000000001000000000000000000000001000000018ba98dabaebcd83056799841cfbc567d8b10f216473d042a", "0x0000731ce04b1feefa9f4291d869adc30a33463f315491e164d89be7d6d2d7890cfc00000000345aa98e8a990f4101e2268fab4c4e1f731c8dfbcffa3a77978686e6390d624f000000070000000000000001000000000000000000000001000000018ba98dabaebcd83056799841cfbc567d8b10f21600dd3047", 
"0x0000b462030cc4734f24c0bc224cf0d16ee452ea6b67615517caffead123ab4fbf1500000000345aa98e8a990f4101e2268fab4c4e1f731c8dfbcffa3a77978686e6390d624f000000070000000000000001000000000000000000000001000000018ba98dabaebcd83056799841cfbc567d8b10f216c71b387e", "0x000054f6826c39bc957c0c6d44b70f961a994898999179cc32d21eb09c1908d7167b00000000345aa98e8a990f4101e2268fab4c4e1f731c8dfbcffa3a77978686e6390d624f000000070000000000000001000000000000000000000001000000018ba98dabaebcd83056799841cfbc567d8b10f2166290e79d" ], "endIndex": { "address": "P-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5", "utxo": "kbUThAUfmBXUmRgTpgD6r3nLj7rJUGho6xyht5nouNNypH45j" }, "encoding": "hex" }, "id": 1 } ``` Since `numFetched` is the same as `limit`, we can tell that there may be more UTXOs that were not fetched. We call the method again, this time with `startIndex`: ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"platform.getUTXOs", "params" :{ "addresses":["P-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5"], "limit":5, "startIndex": { "address": "P-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5", "utxo": "0x62fc816bb209857923770c286192ab1f9e3f11e4a7d4ba0943111c3bbfeb9e4a5ea72fae" }, "encoding": "hex" } }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P ``` This gives response: ```json { "jsonrpc": "2.0", "result": { "numFetched": "4", "utxos": [ "0x000020e182dd51ee4dcd31909fddd75bb3438d9431f8e4efce86a88a684f5c7fa09300000000345aa98e8a990f4101e2268fab4c4e1f731c8dfbcffa3a77978686e6390d624f000000070000000000000001000000000000000000000001000000018ba98dabaebcd83056799841cfbc567d8b10f21662861d59", "0x0000a71ba36c475c18eb65dc90f6e85c4fd4a462d51c5de3ac2cbddf47db4d99284e00000000345aa98e8a990f4101e2268fab4c4e1f731c8dfbcffa3a77978686e6390d624f000000070000000000000001000000000000000000000001000000018ba98dabaebcd83056799841cfbc567d8b10f21665f6f83f", 
"0x0000925424f61cb13e0fbdecc66e1270de68de9667b85baa3fdc84741d048daa69fa00000000345aa98e8a990f4101e2268fab4c4e1f731c8dfbcffa3a77978686e6390d624f000000070000000000000001000000000000000000000001000000018ba98dabaebcd83056799841cfbc567d8b10f216afecf76a", "0x000082f30327514f819da6009fad92b5dba24d27db01e29ad7541aa8e6b6b554615c00000000345aa98e8a990f4101e2268fab4c4e1f731c8dfbcffa3a77978686e6390d624f000000070000000000000001000000000000000000000001000000018ba98dabaebcd83056799841cfbc567d8b10f216779c2d59" ], "endIndex": { "address": "P-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5", "utxo": "21jG2RfqyHUUgkTLe2tUp6ETGLriSDTW3th8JXFbPRNiSZ11jK" }, "encoding": "hex" }, "id": 1 } ``` Since `numFetched` is less than `limit`, we know that we are done fetching UTXOs and don’t need to call this method again. Suppose we want to fetch the UTXOs exported from the X Chain to the P Chain in order to build an ImportTx. Then we need to call GetUTXOs with the `sourceChain` argument in order to retrieve the atomic UTXOs: ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"platform.getUTXOs", "params" :{ "addresses":["P-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5"], "sourceChain": "X", "encoding": "hex" } }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P ``` This gives response: ```json { "jsonrpc": "2.0", "result": { "numFetched": "1", "utxos": [ "0x00001f989ffaf18a18a59bdfbf209342aa61c6a62a67e8639d02bb3c8ddab315c6fa0000000139c33a499ce4c33a3b09cdd2cfa01ae70dbf2d18b2d7d168524440e55d55008800000007000000746a528800000000000000000000000001000000013cb7d3842e8cee6a0ebd09f1fe884f6861e1b29cd704fe76" ], "endIndex": { "address": "P-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5", "utxo": "S5UKgWoVpoGFyxfisebmmRf8WqC7ZwcmYwS7XaDVZqoaFcCwK" }, "encoding": "hex" }, "id": 1 } ``` ### `platform.getValidatorsAt` Get the validators and their weights of a Subnet or the Primary Network at a given P-Chain height. 
**Signature:** ``` platform.getValidatorsAt( { height: [int|string], subnetID: string, // optional } ) ``` - `height` is the P-Chain height to get the validator set at, or the string literal "proposed" to return the validator set at this node's ProposerVM height. - `subnetID` is the Subnet ID to get the validator set of. If not given, gets validator set of the Primary Network. **Example Call:** ```bash curl -X POST --data '{ "jsonrpc": "2.0", "method": "platform.getValidatorsAt", "params": { "height":1 }, "id": 1 }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P ``` **Example Response:** ```json { "jsonrpc": "2.0", "result": { "validators": { "NodeID-7Xhw2mDxuDS44j42TCB6U5579esbSt3Lg": 2000000000000000, "NodeID-GWPcbFJZFfZreETSoWjPimr846mXEKCtu": 2000000000000000, "NodeID-MFrZFVCXPv5iCn6M9K6XduxGTYp891xXZ": 2000000000000000, "NodeID-NFBbbJ4qCmNaCzeW7sxErhvWqvEQMnYcN": 2000000000000000, "NodeID-P7oB2McjBGgW2NXXWVYjV8JEDFoW9xDE5": 2000000000000000 } }, "id": 1 } ``` ### `platform.getAllValidatorsAt` Get the validators and their weights of all Subnets and the Primary Network at a given P-Chain height. 
**Signature:** ``` platform.getAllValidatorsAt( { height: [int|string], } ) ``` - `height` is the P-Chain height to get the validator set at, or the string literal "proposed" to return the validator set at this node's ProposerVM height. **Example Call:** ```bash curl -X POST --data '{ "jsonrpc": "2.0", "method": "platform.getAllValidatorsAt", "params": { "height":1 }, "id": 1 }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P ``` **Example Response:** ```json { "jsonrpc": "2.0", "result": { "validatorSets": { "11111111111111111111111111111111LpoYY": { "validators": [ { "publicKey": "0x8048109c3da13de0700f9f3590c3270bfc42277417f6d0cc84282947e1a1f8b4980fd3e3fe223acf0f56a5838890814a", "weight": "2000000000000000", "nodeIDs": [ "NodeID-P7oB2McjBGgW2NXXWVYjV8JEDFoW9xDE5" ] }, { "publicKey": "0xa058ff27a4c570664bfa28e34939368539a1340867951943d0f56fa8aac13bc09ff64f341acf8cc0cef74202c2d6f9c0", "weight": "2000000000000000", "nodeIDs": [ "NodeID-MFrZFVCXPv5iCn6M9K6XduxGTYp891xXZ" ] }, { "publicKey": "0xa10b6955a85684a0f5c94b8381f04506f1bee60625927d372323f78b3d30196cc56c8618c77eaf429298e74673d832c3", "weight": "2000000000000000", "nodeIDs": [ "NodeID-NFBbbJ4qCmNaCzeW7sxErhvWqvEQMnYcN" ] }, { "publicKey": "0xaccd61ceb90c61628aa0fa34acab27ecb08f6897e9ccad283578c278c52109f9e10e4f8bc31aa6d7905c4e1623de367e", "weight": "2000000000000000", "nodeIDs": [ "NodeID-GWPcbFJZFfZreETSoWjPimr846mXEKCtu" ] }, { "publicKey": "0x900c9b119b5c82d781d4b49be78c3fc7ae65f2b435b7ed9e3a8b9a03e475edff86d8a64827fec8db23a6f236afbf127d", "weight": "2000000000000000", "nodeIDs": [ "NodeID-7Xhw2mDxuDS44j42TCB6U5579esbSt3Lg" ] } ], "totalWeight": "10000000000000000" } } }, "id": 1 } ``` ### `platform.getValidatorFeeConfig` Returns the validator fee configuration of the P-Chain.
**Signature:** ``` platform.getValidatorFeeConfig() -> { capacity: uint64, target: uint64, minPrice: uint64, excessConversionConstant: uint64 } ``` - `capacity` is the maximum number of L1 validators the chain is allowed to have at any given time - `target` is the target number of L1 validators the chain should have to keep fees stable - `minPrice` is the minimum price per L1 validator - `excessConversionConstant` is used to convert excess L1 validators to a gas price **Example Call:** ```sh curl -X POST --data '{ "jsonrpc": "2.0", "method": "platform.getValidatorFeeConfig", "params": {}, "id": 1 }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P ``` **Example Response:** ```json { "jsonrpc": "2.0", "result": { "capacity": 20000, "target": 10000, "minPrice": 512, "excessConversionConstant": 1246488515 }, "id": 1 } ``` ### `platform.getValidatorFeeState` Returns the current validator fee state of the P-Chain. **Signature:** ``` platform.getValidatorFeeState() -> { excess: uint64, price: uint64, timestamp: string } ``` **Example Call:** ```sh curl -X POST --data '{ "jsonrpc": "2.0", "method": "platform.getValidatorFeeState", "params": {}, "id": 1 }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P ``` **Example Response:** ```json { "jsonrpc": "2.0", "result": { "excess": 26956, "price": 512, "timestamp": "2024-12-16T17:19:07Z" }, "id": 1 } ``` ### `platform.issueTx` Issue a transaction to the Platform Chain. **Signature:** ``` platform.issueTx({ tx: string, encoding: string, // optional }) -> { txID: string } ``` - `tx` is the byte representation of a transaction. - `encoding` specifies the encoding format for the transaction bytes. Can only be `hex` when a value is provided. - `txID` is the transaction’s ID.
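The `tx` parameter carries the serialized transaction bytes hex-encoded with a `0x` prefix; a minimal sketch of building the params (the byte string here is a placeholder, not a valid transaction):

```python
# Placeholder bytes standing in for a signed, serialized transaction.
raw_tx = bytes.fromhex("00000009deadbeef")

# platform.issueTx expects the bytes hex-encoded with a 0x prefix.
params = {"tx": "0x" + raw_tx.hex(), "encoding": "hex"}
```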
**Example Call:** ```sh curl -X POST --data '{ "jsonrpc": "2.0", "method": "platform.issueTx", "params": { "tx":"0x00000009de31b4d8b22991d51aa6aa1fc733f23a851a8c9400000000000186a0000000005f041280000000005f9ca900000030390000000000000001fceda8f90fcb5d30614b99d79fc4baa29307762668f16eb0259a57c2d3b78c875c86ec2045792d4df2d926c40f829196e0bb97ee697af71f5b0a966dabff749634c8b729855e937715b0e44303fd1014daedc752006011b730", "encoding": "hex" }, "id": 1 }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P ``` **Example Response:** ```json { "jsonrpc": "2.0", "result": { "txID": "G3BuH6ytQ2averrLxJJugjWZHTRubzCrUZEXoheG5JMqL5ccY" }, "id": 1 } ``` ### `platform.sampleValidators` Sample validators from the specified Subnet. **Signature:** ``` platform.sampleValidators( { size: int, subnetID: string, // optional } ) -> { validators: []string } ``` - `size` is the number of validators to sample. - `subnetID` is the Subnet to sample from. If omitted, defaults to the Primary Network. - Each element of `validators` is the ID of a validator. **Example Call:** ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"platform.sampleValidators", "params" :{ "size":2 } }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P ``` **Example Response:** ```json { "jsonrpc": "2.0", "id": 1, "result": { "validators": [ "NodeID-MFrZFVCXPv5iCn6M9K6XduxGTYp891xXZ", "NodeID-NFBbbJ4qCmNaCzeW7sxErhvWqvEQMnYcN" ] } } ``` ### `platform.validatedBy` Get the Subnet that validates a given blockchain. **Signature:** ``` platform.validatedBy( { blockchainID: string } ) -> { subnetID: string } ``` - `blockchainID` is the blockchain’s ID. - `subnetID` is the ID of the Subnet that validates the blockchain.
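The same request can be assembled programmatically; a sketch of building the JSON-RPC 2.0 body (equivalent in shape to the curl call that follows, using the same blockchainID):

```python
import json

# Build the JSON-RPC 2.0 body for platform.validatedBy.
request = {
    "jsonrpc": "2.0",
    "method": "platform.validatedBy",
    "params": {"blockchainID": "KDYHHKjM4yTJTT8H8qPs5KXzE6gQH5TZrmP1qVr1P6qECj3XN"},
    "id": 1,
}
body = json.dumps(request)
```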
**Example Call:** ```sh curl -X POST --data '{ "jsonrpc": "2.0", "method": "platform.validatedBy", "params": { "blockchainID": "KDYHHKjM4yTJTT8H8qPs5KXzE6gQH5TZrmP1qVr1P6qECj3XN" }, "id": 1 }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P ``` **Example Response:** ```json { "jsonrpc": "2.0", "result": { "subnetID": "2bRCr6B4MiEfSjidDwxDpdCyviwnfUVqB2HGwhm947w9YYqb7r" }, "id": 1 } ``` ### `platform.validates` Get the IDs of the blockchains a Subnet validates. **Signature:** ``` platform.validates( { subnetID: string } ) -> { blockchainIDs: []string } ``` - `subnetID` is the Subnet’s ID. - Each element of `blockchainIDs` is the ID of a blockchain the Subnet validates. **Example Call:** ```sh curl -X POST --data '{ "jsonrpc": "2.0", "method": "platform.validates", "params": { "subnetID":"2bRCr6B4MiEfSjidDwxDpdCyviwnfUVqB2HGwhm947w9YYqb7r" }, "id": 1 }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/P ``` **Example Response:** ```json { "jsonrpc": "2.0", "result": { "blockchainIDs": [ "KDYHHKjM4yTJTT8H8qPs5KXzE6gQH5TZrmP1qVr1P6qECj3XN", "2TtHFqEAAJ6b33dromYMqfgavGPF3iCpdG3hwNMiart2aB5QHi" ] }, "id": 1 } ``` # Transaction Format (/docs/rpcs/p-chain/txn-format) --- title: Transaction Format --- This file is meant to be the single source of truth for how we serialize transactions in Avalanche's Platform Virtual Machine, aka the `Platform Chain` or `P-Chain`. This document uses the [primitive serialization](/docs/rpcs/other/standards/serialization-primitives) format for packing and [secp256k1](/docs/rpcs/other/standards/cryptographic-primitives#secp256k1-addresses) for cryptographic user identification. ## Codec ID Some data is prepended with a codec ID (uint16) that denotes how the data should be deserialized. Right now, the only valid codec ID is 0 (`0x00 0x00`). ## Proof of Possession A BLS public key and a proof of possession of the key. ### What Proof of Possession Contains - **PublicKey** is the 48 byte representation of the public key.
- **Signature** is the 96 byte signature by the private key over its public key. ### Gantt Proof of Possession Specification ```text +------------+----------+-------------------------+ | public_key : [48]byte | 48 bytes | +------------+----------+-------------------------+ | signature : [96]byte | 96 bytes | +------------+----------+-------------------------+ | 144 bytes | +-------------------------+ ``` ### Proto Proof of Possession Specification ```text message ProofOfPossession { bytes public_key = 1; // 48 bytes bytes signature = 2; // 96 bytes } ``` ### Proof of Possession Example ```text // Public Key: 0x85, 0x02, 0x5b, 0xca, 0x6a, 0x30, 0x2d, 0xc6, 0x13, 0x38, 0xff, 0x49, 0xc8, 0xba, 0xa5, 0x72, 0xde, 0xd3, 0xe8, 0x6f, 0x37, 0x59, 0x30, 0x4c, 0x7f, 0x61, 0x8a, 0x2a, 0x25, 0x93, 0xc1, 0x87, 0xe0, 0x80, 0xa3, 0xcf, 0xde, 0xc9, 0x50, 0x40, 0x30, 0x9a, 0xd1, 0xf1, 0x58, 0x95, 0x30, 0x67, // Signature: 0x8b, 0x1d, 0x61, 0x33, 0xd1, 0x7e, 0x34, 0x83, 0x22, 0x0a, 0xd9, 0x60, 0xb6, 0xfd, 0xe1, 0x1e, 0x4e, 0x12, 0x14, 0xa8, 0xce, 0x21, 0xef, 0x61, 0x62, 0x27, 0xe5, 0xd5, 0xee, 0xf0, 0x70, 0xd7, 0x50, 0x0e, 0x6f, 0x7d, 0x44, 0x52, 0xc5, 0xa7, 0x60, 0x62, 0x0c, 0xc0, 0x67, 0x95, 0xcb, 0xe2, 0x18, 0xe0, 0x72, 0xeb, 0xa7, 0x6d, 0x94, 0x78, 0x8d, 0x9d, 0x01, 0x17, 0x6c, 0xe4, 0xec, 0xad, 0xfb, 0x96, 0xb4, 0x7f, 0x94, 0x22, 0x81, 0x89, 0x4d, 0xdf, 0xad, 0xd1, 0xc1, 0x74, 0x3f, 0x7f, 0x54, 0x9f, 0x1d, 0x07, 0xd5, 0x9d, 0x55, 0x65, 0x59, 0x27, 0xf7, 0x2b, 0xc6, 0xbf, 0x7c, 0x12 ``` ## Transferable Output Transferable outputs wrap an output with an asset ID. ### What Transferable Output Contains A transferable output contains an `AssetID` and an `Output`. - **`AssetID`** is a 32-byte array that defines which asset this output references. The only valid `AssetID` is the AVAX `AssetID`. - **`Output`** is an output, as defined below. For example, this can be a SECP256K1 transfer output.
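The two fields serialize by simple concatenation: the 32-byte asset ID followed by the serialized output bytes. A minimal packing sketch in Python, using the byte values from the transferable output example that follows:

```python
# 32-byte asset ID and a serialized output (hex values from the example).
asset_id = bytes.fromhex(
    "6870b7d66ac32540311379e5b5dbad28ec7eb8ddbfc8f4d67299ebb48475907a"
)
output = bytes.fromhex(
    "0000000700000000ee5be5c000000000000000000000000100000001da2bee01be82ecc00c34f361eda8eb30fb5a715c"
)

# A transferable output is just the asset ID followed by the output bytes.
transferable_output = asset_id + output
```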
### Gantt Transferable Output Specification ```text +----------+----------+-------------------------+ | asset_id : [32]byte | 32 bytes | +----------+----------+-------------------------+ | output : Output | size(output) bytes | +----------+----------+-------------------------+ | 32 + size(output) bytes | +-------------------------+ ``` ### Proto Transferable Output Specification ```text message TransferableOutput { bytes asset_id = 1; // 32 bytes Output output = 2; // size(output) } ``` ### Transferable Output Example Let's make a transferable output: - `AssetID: 0x6870b7d66ac32540311379e5b5dbad28ec7eb8ddbfc8f4d67299ebb48475907a` - `Output: "Example SECP256K1 Transfer Output from below"` ```text [ AssetID <- 0x6870b7d66ac32540311379e5b5dbad28ec7eb8ddbfc8f4d67299ebb48475907a, Output <- 0x0000000700000000ee5be5c000000000000000000000000100000001da2bee01be82ecc00c34f361eda8eb30fb5a715c, ] = [ // assetID: 0x68, 0x70, 0xb7, 0xd6, 0x6a, 0xc3, 0x25, 0x40, 0x31, 0x13, 0x79, 0xe5, 0xb5, 0xdb, 0xad, 0x28, 0xec, 0x7e, 0xb8, 0xdd, 0xbf, 0xc8, 0xf4, 0xd6, 0x72, 0x99, 0xeb, 0xb4, 0x84, 0x75, 0x90, 0x7a, // output: 0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x00, 0xee, 0x5b, 0xe5, 0xc0, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x01, 0xda, 0x2b, 0xee, 0x01, 0xbe, 0x82, 0xec, 0xc0, 0x0c, 0x34, 0xf3, 0x61, 0xed, 0xa8, 0xeb, 0x30, 0xfb, 0x5a, 0x71, 0x5c, ] ``` ## Transferable Input Transferable inputs describe a specific UTXO with a provided transfer input. ### What Transferable Input Contains A transferable input contains a `TxID`, `UTXOIndex`, `AssetID`, and an `Input`. - **`TxID`** is a 32-byte array that defines which transaction this input is consuming an output from. - **`UTXOIndex`** is an int that defines which UTXO this input is consuming from the specified transaction. - **`AssetID`** is a 32-byte array that defines which asset this input references. The only valid `AssetID` is the AVAX `AssetID`.
- **`Input`** is an input, as defined below. For example, this can be a SECP256K1 transfer input. ### Gantt Transferable Input Specification ```text +------------+----------+------------------------+ | tx_id : [32]byte | 32 bytes | +------------+----------+------------------------+ | utxo_index : int | 04 bytes | +------------+----------+------------------------+ | asset_id : [32]byte | 32 bytes | +------------+----------+------------------------+ | input : Input | size(input) bytes | +------------+----------+------------------------+ | 68 + size(input) bytes | +------------------------+ ``` ### Proto Transferable Input Specification ```text message TransferableInput { bytes tx_id = 1; // 32 bytes uint32 utxo_index = 2; // 04 bytes bytes asset_id = 3; // 32 bytes Input input = 4; // size(input) } ``` ### Transferable Input Example Let's make a transferable input: - **`TxID`**: `0xdfafbdf5c81f635c9257824ff21c8e3e6f7b632ac306e11446ee540d34711a15` - **`UTXOIndex`**: `1` - **`AssetID`**: `0x6870b7d66ac32540311379e5b5dbad28ec7eb8ddbfc8f4d67299ebb48475907a` - **`Input`**: `"Example SECP256K1 Transfer Input from below"` ```text [ TxID <- 0xdfafbdf5c81f635c9257824ff21c8e3e6f7b632ac306e11446ee540d34711a15 UTXOIndex <- 0x00000001 AssetID <- 0x6870b7d66ac32540311379e5b5dbad28ec7eb8ddbfc8f4d67299ebb48475907a Input <- 0x0000000500000000ee6b28000000000100000000 ] = [ // txID: 0xdf, 0xaf, 0xbd, 0xf5, 0xc8, 0x1f, 0x63, 0x5c, 0x92, 0x57, 0x82, 0x4f, 0xf2, 0x1c, 0x8e, 0x3e, 0x6f, 0x7b, 0x63, 0x2a, 0xc3, 0x06, 0xe1, 0x14, 0x46, 0xee, 0x54, 0x0d, 0x34, 0x71, 0x1a, 0x15, // utxoIndex: 0x00, 0x00, 0x00, 0x01, // assetID: 0x68, 0x70, 0xb7, 0xd6, 0x6a, 0xc3, 0x25, 0x40, 0x31, 0x13, 0x79, 0xe5, 0xb5, 0xdb, 0xad, 0x28, 0xec, 0x7e, 0xb8, 0xdd, 0xbf, 0xc8, 0xf4, 0xd6, 0x72, 0x99, 0xeb, 0xb4, 0x84, 0x75, 0x90, 0x7a, // input: 0x00, 0x00, 0x00, 0x05, 0x00, 0x00, 0x00, 0x00, 0xee, 0x6b, 0x28, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00 ] ``` ## Outputs Outputs have two possible types: `SECP256K1TransferOutput`, `SECP256K1OutputOwners`.
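Because every serialized output begins with its 4-byte big-endian type ID, a decoder can tell the two kinds apart before parsing the rest. A minimal sketch (the helper name is hypothetical, not avalanchego code):

```python
import struct

# Output type IDs quoted in this document.
SECP256K1_TRANSFER_OUTPUT = 0x00000007
SECP256K1_OUTPUT_OWNERS = 0x0000000B

def output_type(raw: bytes) -> str:
    """Read the leading big-endian 4-byte type ID and name the output type."""
    (type_id,) = struct.unpack_from(">I", raw, 0)
    if type_id == SECP256K1_TRANSFER_OUTPUT:
        return "SECP256K1TransferOutput"
    if type_id == SECP256K1_OUTPUT_OWNERS:
        return "SECP256K1OutputOwners"
    raise ValueError(f"unknown output type ID: {type_id:#010x}")
```

For example, the transfer output bytes above (starting `0x00000007`) would be dispatched to the `SECP256K1TransferOutput` parser.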
## SECP256K1 Transfer Output A [secp256k1](/docs/rpcs/other/standards/cryptographic-primitives#secp256k1-addresses) transfer output allows for sending a quantity of an asset to a collection of addresses after a specified Unix time. The only valid asset is AVAX. ### What SECP256K1 Transfer Output Contains A secp256k1 transfer output contains a `TypeID`, `Amount`, `Locktime`, `Threshold`, and `Addresses`. - **`TypeID`** is the ID for this output type. It is `0x00000007`. - **`Amount`** is a long that specifies the quantity of the asset that this output owns. Must be positive. - **`Locktime`** is a long that contains the Unix timestamp that this output can be spent after. The Unix timestamp is specific to the second. - **`Threshold`** is an int that names the number of unique signatures required to spend the output. Must be less than or equal to the length of **`Addresses`**. If **`Addresses`** is empty, must be 0. - **`Addresses`** is a list of unique addresses that correspond to the private keys that can be used to spend this output. Addresses must be sorted lexicographically. 
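The constraints above (positive amount, unique lexicographically sorted addresses, threshold bounded by the address count) can be checked mechanically. A minimal sketch with a hypothetical helper name, not the avalanchego implementation:

```python
def check_transfer_output(amount: int, threshold: int, addresses: list[bytes]) -> None:
    """Raise ValueError if the SECP256K1 transfer output fields break the rules."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    if len(set(addresses)) != len(addresses):
        raise ValueError("addresses must be unique")
    if addresses != sorted(addresses):
        raise ValueError("addresses must be sorted lexicographically")
    if threshold > len(addresses):
        # This also forces threshold == 0 when addresses is empty.
        raise ValueError("threshold must be <= number of addresses")

# The worked example in this section passes these checks.
check_transfer_output(
    amount=3999000000,
    threshold=1,
    addresses=[bytes.fromhex("da2bee01be82ecc00c34f361eda8eb30fb5a715c")],
)
```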
### Gantt SECP256K1 Transfer Output Specification ```text +-----------+------------+--------------------------------+ | type_id : int | 4 bytes | +-----------+------------+--------------------------------+ | amount : long | 8 bytes | +-----------+------------+--------------------------------+ | locktime : long | 8 bytes | +-----------+------------+--------------------------------+ | threshold : int | 4 bytes | +-----------+------------+--------------------------------+ | addresses : [][20]byte | 4 + 20 * len(addresses) bytes | +-----------+------------+--------------------------------+ | 28 + 20 * len(addresses) bytes | +--------------------------------+ ``` ### Proto SECP256K1 Transfer Output Specification ```text message SECP256K1TransferOutput { uint32 type_id = 1; // 04 bytes uint64 amount = 2; // 08 bytes uint64 locktime = 3; // 08 bytes uint32 threshold = 4; // 04 bytes repeated bytes addresses = 5; // 04 bytes + 20 bytes * len(addresses) } ``` ### SECP256K1 Transfer Output Example Let's make a secp256k1 transfer output with: - **`TypeID`**: 7 - **`Amount`**: 3999000000 - **`Locktime`**: 0 - **`Threshold`**: 1 - **`Addresses`**: - 0xda2bee01be82ecc00c34f361eda8eb30fb5a715c ```text [ TypeID <- 0x00000007 Amount <- 0x00000000ee5be5c0 Locktime <- 0x0000000000000000 Threshold <- 0x00000001 Addresses <- [ 0xda2bee01be82ecc00c34f361eda8eb30fb5a715c, ] ] = [ // type_id: 0x00, 0x00, 0x00, 0x07, // amount: 0x00, 0x00, 0x00, 0x00, 0xee, 0x5b, 0xe5, 0xc0, // locktime: 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, // threshold: 0x00, 0x00, 0x00, 0x01, // number of addresses: 0x00, 0x00, 0x00, 0x01, // addrs[0]: 0xda, 0x2b, 0xee, 0x01, 0xbe, 0x82, 0xec, 0xc0, 0x0c, 0x34, 0xf3, 0x61, 0xed, 0xa8, 0xeb, 0x30, 0xfb, 0x5a, 0x71, 0x5c, ] ``` ## SECP256K1 Output Owners Output A [secp256k1](/docs/rpcs/other/standards/cryptographic-primitives#secp256k1-addresses) output owners output will receive the staking rewards when the lock up period ends. 
### What SECP256K1 Output Owners Output Contains A secp256k1 output owners output contains a `TypeID`, `Locktime`, `Threshold`, and `Addresses`. - **`TypeID`** is the ID for this output type. It is `0x0000000b`. - **`Locktime`** is a long that contains the Unix timestamp that this output can be spent after. The Unix timestamp is specific to the second. - **`Threshold`** is an int that names the number of unique signatures required to spend the output. Must be less than or equal to the length of **`Addresses`**. If **`Addresses`** is empty, must be 0. - **`Addresses`** is a list of unique addresses that correspond to the private keys that can be used to spend this output. Addresses must be sorted lexicographically. ### Gantt SECP256K1 Output Owners Output Specification ```text +-----------+------------+--------------------------------+ | type_id : int | 4 bytes | +-----------+------------+--------------------------------+ | locktime : long | 8 bytes | +-----------+------------+--------------------------------+ | threshold : int | 4 bytes | +-----------+------------+--------------------------------+ | addresses : [][20]byte | 4 + 20 * len(addresses) bytes | +-----------+------------+--------------------------------+ | 20 + 20 * len(addresses) bytes | +--------------------------------+ ``` ### Proto SECP256K1 Output Owners Output Specification ```text message SECP256K1OutputOwnersOutput { uint32 type_id = 1; // 04 bytes uint64 locktime = 2; // 08 bytes uint32 threshold = 3; // 04 bytes repeated bytes addresses = 4; // 04 bytes + 20 bytes * len(addresses) } ``` ### SECP256K1 Output Owners Output Example Let's make a secp256k1 output owners output with: - **`TypeID`**: 11 - **`Locktime`**: 0 - **`Threshold`**: 1 - **`Addresses`**: - 0xda2bee01be82ecc00c34f361eda8eb30fb5a715c ```text [ TypeID <- 0x0000000b Locktime <- 0x0000000000000000 Threshold <- 0x00000001 Addresses <- [ 0xda2bee01be82ecc00c34f361eda8eb30fb5a715c, ] ] = [ // type_id: 0x00, 0x00, 0x00, 0x0b, // 
locktime: 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, // threshold: 0x00, 0x00, 0x00, 0x01, // number of addresses: 0x00, 0x00, 0x00, 0x01, // addrs[0]: 0xda, 0x2b, 0xee, 0x01, 0xbe, 0x82, 0xec, 0xc0, 0x0c, 0x34, 0xf3, 0x61, 0xed, 0xa8, 0xeb, 0x30, 0xfb, 0x5a, 0x71, 0x5c, ] ``` ## Inputs Inputs have one possible type: `SECP256K1TransferInput`. ## SECP256K1 Transfer Input A [secp256k1](/docs/rpcs/other/standards/cryptographic-primitives#secp256k1-addresses) transfer input allows for spending an unspent secp256k1 transfer output. ### What SECP256K1 Transfer Input Contains A secp256k1 transfer input contains a `TypeID`, an `Amount`, and `AddressIndices`. - **`TypeID`** is the ID for this input type. It is `0x00000005`. - **`Amount`** is a long that specifies the quantity that this input should be consuming from the UTXO. Must be positive. Must be equal to the amount specified in the UTXO. - **`AddressIndices`** is a list of unique ints that define which private keys are being used to spend the UTXO. Each UTXO has an array of addresses that can spend the UTXO. Each int represents the index in this address array that will sign this transaction. The array must be sorted low to high.
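These rules translate into a short serializer: a 4-byte type ID, an 8-byte amount, then the length-prefixed index array, all big-endian. A sketch with a hypothetical helper name (not the avalanchego codec), checked against the worked example in this section:

```python
import struct

SECP256K1_TRANSFER_INPUT = 0x00000005

def serialize_transfer_input(amount: int, address_indices: list[int]) -> bytes:
    """Sketch of SECP256K1TransferInput serialization per this spec's layout."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    if address_indices != sorted(set(address_indices)):
        raise ValueError("address_indices must be unique and sorted low to high")
    out = struct.pack(">IQ", SECP256K1_TRANSFER_INPUT, amount)  # type_id, amount
    out += struct.pack(">I", len(address_indices))              # array length
    for idx in address_indices:
        out += struct.pack(">I", idx)                           # each index
    return out

# Reproduces the worked example: TypeID 5, Amount 4000000000, indices [0].
blob = serialize_transfer_input(4000000000, [0])
assert blob.hex() == "0000000500000000ee6b28000000000100000000"
```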
### Gantt SECP256K1 Transfer Input Specification ```text +-------------------------+-------------------------------------+ | type_id : int | 4 bytes | +-----------------+-------+-------------------------------------+ | amount : long | 8 bytes | +-----------------+-------+-------------------------------------+ | address_indices : []int | 4 + 4 * len(address_indices) bytes | +-----------------+-------+-------------------------------------+ | 16 + 4 * len(address_indices) bytes | +-------------------------------------+ ``` ### Proto SECP256K1 Transfer Input Specification ```text message SECP256K1TransferInput { uint32 type_id = 1; // 04 bytes uint64 amount = 2; // 08 bytes repeated uint32 address_indices = 3; // 04 bytes + 4 bytes * len(address_indices) } ``` ### SECP256K1 Transfer Input Example Let's make a payment input with: - **`TypeID`**: 5 - **`Amount`**: 4000000000 - **`AddressIndices`**: \[0\] ```text [ TypeID <- 0x00000005 Amount <- 0x00000000ee6b2800, AddressIndices <- [0x00000000] ] = [ // type_id: 0x00, 0x00, 0x00, 0x05, // amount: 0x00, 0x00, 0x00, 0x00, 0xee, 0x6b, 0x28, 0x00, // length: 0x00, 0x00, 0x00, 0x01, // address_indices[0] 0x00, 0x00, 0x00, 0x00 ] ``` ## Unsigned Transactions Unsigned transactions contain the full content of a transaction with only the signatures missing. Unsigned transactions have several possible types, including `AddValidatorTx`, `AddSubnetValidatorTx`, `AddDelegatorTx`, `CreateSubnetTx`, `ImportTx`, and `ExportTx`. They embed `BaseTx`, which contains common fields and operations. ## Unsigned BaseTx ### What Base TX Contains A base TX contains a `TypeID`, `NetworkID`, `BlockchainID`, `Outputs`, `Inputs`, and `Memo`. - **`TypeID`** is the ID for this type. It is `0x00000000`. - **`NetworkID`** is an int that defines which network this transaction is meant to be issued to. This value is meant to support transaction routing and is not designed for replay attack prevention.
- **`BlockchainID`** is a 32-byte array that defines which blockchain this transaction was issued to. This is used for replay attack prevention for transactions that could potentially be valid across networks or blockchains. - **`Outputs`** is an array of transferable output objects. Outputs must be sorted lexicographically by their serialized representation. The total quantity of the assets created in these outputs must be less than or equal to the total quantity of each asset consumed in the inputs minus the transaction fee. - **`Inputs`** is an array of transferable input objects. Inputs must be sorted and unique. Inputs are sorted first lexicographically by their **`TxID`** and then by the **`UTXOIndex`** from low to high. If there are inputs that have the same **`TxID`** and **`UTXOIndex`**, then the transaction is invalid as this would result in a double spend. - **`Memo`** is a field that contains arbitrary bytes, up to 256 bytes. ### Gantt Base TX Specification ```text +---------------+----------------------+-----------------------------------------+ | type_id : int | 4 bytes | +---------------+----------------------+-----------------------------------------+ | network_id : int | 4 bytes | +---------------+----------------------+-----------------------------------------+ | blockchain_id : [32]byte | 32 bytes | +---------------+----------------------+-----------------------------------------+ | outputs : []TransferableOutput | 4 + size(outputs) bytes | +---------------+----------------------+-----------------------------------------+ | inputs : []TransferableInput | 4 + size(inputs) bytes | +---------------+----------------------+-----------------------------------------+ | memo : [256]byte | 4 + size(memo) bytes | +---------------+----------------------+-----------------------------------------+ | 52 + size(outputs) + size(inputs) + size(memo) bytes | +------------------------------------------------------+ ``` ### Proto Base TX Specification ```text message
BaseTx { uint32 type_id = 1; // 04 bytes uint32 network_id = 2; // 04 bytes bytes blockchain_id = 3; // 32 bytes repeated Output outputs = 4; // 04 bytes + size(outs) repeated Input inputs = 5; // 04 bytes + size(ins) bytes memo = 6; // 04 bytes + size(memo) } ``` ### Base TX Example Let's make a base TX that uses the inputs and outputs from the previous examples: - **`TypeID`**: `0` - **`NetworkID`**: `12345` - **`BlockchainID`**: `0x0000000000000000000000000000000000000000000000000000000000000000` - **`Outputs`**: - `"Example Transferable Output as defined above"` - **`Inputs`**: - `"Example Transferable Input as defined above"` ```text [ TypeID <- 0x00000000 NetworkID <- 0x00003039 BlockchainID <- 0x0000000000000000000000000000000000000000000000000000000000000000 Outputs <- [ 0x6870b7d66ac32540311379e5b5dbad28ec7eb8ddbfc8f4d67299ebb48475907a0000000700000000ee5be5c000000000000000000000000100000001da2bee01be82ecc00c34f361eda8eb30fb5a715c ] Inputs <- [ 0xdfafbdf5c81f635c9257824ff21c8e3e6f7b632ac306e11446ee540d34711a15000000016870b7d66ac32540311379e5b5dbad28ec7eb8ddbfc8f4d67299ebb48475907a0000000500000000ee6b28000000000100000000 ] ] = [ // type_id: 0x00, 0x00, 0x00, 0x00, // networkID: 0x00, 0x00, 0x30, 0x39, // blockchainID: 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, // number of outputs: 0x00, 0x00, 0x00, 0x01, // transferable output: 0x68, 0x70, 0xb7, 0xd6, 0x6a, 0xc3, 0x25, 0x40, 0x31, 0x13, 0x79, 0xe5, 0xb5, 0xdb, 0xad, 0x28, 0xec, 0x7e, 0xb8, 0xdd, 0xbf, 0xc8, 0xf4, 0xd6, 0x72, 0x99, 0xeb, 0xb4, 0x84, 0x75, 0x90, 0x7a, 0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x00, 0xee, 0x5b, 0xe5, 0xc0, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x01, 0xda, 0x2b, 0xee, 0x01, 0xbe, 0x82, 0xec, 0xc0, 0x0c, 0x34, 0xf3, 0x61, 0xed, 0xa8, 0xeb, 0x30, 0xfb, 0x5a, 0x71, 0x5c, // number of
inputs: 0x00, 0x00, 0x00, 0x01, // transferable input: 0xdf, 0xaf, 0xbd, 0xf5, 0xc8, 0x1f, 0x63, 0x5c, 0x92, 0x57, 0x82, 0x4f, 0xf2, 0x1c, 0x8e, 0x3e, 0x6f, 0x7b, 0x63, 0x2a, 0xc3, 0x06, 0xe1, 0x14, 0x46, 0xee, 0x54, 0x0d, 0x34, 0x71, 0x1a, 0x15, 0x00, 0x00, 0x00, 0x01, 0x68, 0x70, 0xb7, 0xd6, 0x6a, 0xc3, 0x25, 0x40, 0x31, 0x13, 0x79, 0xe5, 0xb5, 0xdb, 0xad, 0x28, 0xec, 0x7e, 0xb8, 0xdd, 0xbf, 0xc8, 0xf4, 0xd6, 0x72, 0x99, 0xeb, 0xb4, 0x84, 0x75, 0x90, 0x7a, 0x00, 0x00, 0x00, 0x05, 0x00, 0x00, 0x00, 0x00, 0xee, 0x6b, 0x28, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, // Memo length: 0x00, 0x00, 0x00, 0x00, ] ``` ## Unsigned Add Validator TX ### What Unsigned Add Validator TX Contains An unsigned add validator TX contains a `BaseTx`, `Validator`, `Stake`, `RewardsOwner`, and `Shares`. The `TypeID` for this type is `0x0000000c`. - **`BaseTx`** - **`Validator`** Validator has a `NodeID`, `StartTime`, `EndTime`, and `Weight` - **`NodeID`** is 20 bytes which is the node ID of the validator. - **`StartTime`** is a long which is the Unix time when the validator starts validating. - **`EndTime`** is a long which is the Unix time when the validator stops validating. - **`Weight`** is a long which is the amount the validator stakes - **`Stake`** Stake has `LockedOuts` - **`LockedOuts`** An array of Transferable Outputs that are locked for the duration of the staking period. At the end of the staking period, these outputs are refunded to their respective addresses. 
- **`RewardsOwner`** is a `SECP256K1OutputOwners`. - **`Shares`** is a uint32 that is 10,000 times the percentage of the reward taken from delegators. ### Gantt Unsigned Add Validator TX Specification ```text +---------------+-----------------------+-----------------------------------------+ | base_tx : BaseTx | size(base_tx) bytes | +---------------+-----------------------+-----------------------------------------+ | validator : Validator | 44 bytes | +---------------+-----------------------+-----------------------------------------+ | stake : Stake | size(LockedOuts) bytes | +---------------+-----------------------+-----------------------------------------+ | rewards_owner : SECP256K1OutputOwners | size(rewards_owner) bytes | +---------------+-----------------------+-----------------------------------------+ | shares : Shares | 4 bytes | +---------------+-----------------------+-----------------------------------------+ | 48 + size(stake) + size(rewards_owner) + size(base_tx) bytes | +--------------------------------------------------------------+ ``` ### Proto Unsigned Add Validator TX Specification ```text message AddValidatorTx { BaseTx base_tx = 1; // size(base_tx) Validator validator = 2; // 44 bytes Stake stake = 3; // size(LockedOuts) SECP256K1OutputOwners rewards_owner = 4; // size(rewards_owner) uint32 shares = 5; // 04 bytes } ``` ### Unsigned Add Validator TX Example Let's make an unsigned add validator TX that uses the inputs and outputs from the previous examples: - **`BaseTx`**: `"Example BaseTx as defined above with ID set to 0c"` - **`Validator`** Validator has a `NodeID`, `StartTime`, `EndTime`, and `Weight` - **`NodeID`** is 20 bytes which is the node ID of the validator. - **`StartTime`** is a long which is the Unix time when the validator starts validating. - **`EndTime`** is a long which is the Unix time when the validator stops validating.
- **`Weight`** is a long which is the amount the validator stakes. - **`Stake`**: `0x0000000139c33a499ce4c33a3b09cdd2cfa01ae70dbf2d18b2d7d168524440e55d55008800000007000001d1a94a2000000000000000000000000001000000013cb7d3842e8cee6a0ebd09f1fe884f6861e1b29c` - **`RewardsOwner`**: `0x0000000b00000000000000000000000100000001da2bee01be82ecc00c34f361eda8eb30fb5a715c` - **`Shares`**: `0x00000064` ```text [ BaseTx <- 0x0000000c00003039000000000000000000000000000000000000000000000000000000000000000000000000016870b7d66ac32540311379e5b5dbad28ec7eb8ddbfc8f4d67299ebb48475907a0000000700000000ee5be5c000000000000000000000000100000001da2bee01be82ecc00c34f361eda8eb30fb5a715c00000001dfafbdf5c81f635c9257824ff21c8e3e6f7b632ac306e11446ee540d34711a15000000016870b7d66ac32540311379e5b5dbad28ec7eb8ddbfc8f4d67299ebb48475907a0000000500000000ee6b2800000000010000000000000000 NodeID <- 0xe9094f73698002fd52c90819b457b9fbc866ab80 StartTime <- 0x000000005f21f31d EndTime <- 0x000000005f497dc6 Weight <- 0x000000000000d431 Stake <- 0x0000000139c33a499ce4c33a3b09cdd2cfa01ae70dbf2d18b2d7d168524440e55d55008800000007000001d1a94a2000000000000000000000000001000000013cb7d3842e8cee6a0ebd09f1fe884f6861e1b29c RewardsOwner <- 0x0000000b00000000000000000000000100000001da2bee01be82ecc00c34f361eda8eb30fb5a715c Shares <- 0x00000064 ] = [ // base tx: 0x00, 0x00, 0x00, 0x0c, 0x00, 0x00, 0x30, 0x39, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x68, 0x70, 0xb7, 0xd6, 0x6a, 0xc3, 0x25, 0x40, 0x31, 0x13, 0x79, 0xe5, 0xb5, 0xdb, 0xad, 0x28, 0xec, 0x7e, 0xb8, 0xdd, 0xbf, 0xc8, 0xf4, 0xd6, 0x72, 0x99, 0xeb, 0xb4, 0x84, 0x75, 0x90, 0x7a, 0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x00, 0xee, 0x5b, 0xe5, 0xc0, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x01, 0xda, 0x2b, 0xee, 0x01, 0xbe, 0x82, 0xec, 0xc0, 0x0c, 0x34, 0xf3, 0x61, 0xed, 0xa8,
0xeb, 0x30, 0xfb, 0x5a, 0x71, 0x5c, 0x00, 0x00, 0x00, 0x01, 0xdf, 0xaf, 0xbd, 0xf5, 0xc8, 0x1f, 0x63, 0x5c, 0x92, 0x57, 0x82, 0x4f, 0xf2, 0x1c, 0x8e, 0x3e, 0x6f, 0x7b, 0x63, 0x2a, 0xc3, 0x06, 0xe1, 0x14, 0x46, 0xee, 0x54, 0x0d, 0x34, 0x71, 0x1a, 0x15, 0x00, 0x00, 0x00, 0x01, 0x68, 0x70, 0xb7, 0xd6, 0x6a, 0xc3, 0x25, 0x40, 0x31, 0x13, 0x79, 0xe5, 0xb5, 0xdb, 0xad, 0x28, 0xec, 0x7e, 0xb8, 0xdd, 0xbf, 0xc8, 0xf4, 0xd6, 0x72, 0x99, 0xeb, 0xb4, 0x84, 0x75, 0x90, 0x7a, 0x00, 0x00, 0x00, 0x05, 0x00, 0x00, 0x00, 0x00, 0xee, 0x6b, 0x28, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, // Node ID 0xe9, 0x09, 0x4f, 0x73, 0x69, 0x80, 0x02, 0xfd, 0x52, 0xc9, 0x08, 0x19, 0xb4, 0x57, 0xb9, 0xfb, 0xc8, 0x66, 0xab, 0x80, // StartTime 0x00, 0x00, 0x00, 0x00, 0x5f, 0x21, 0xf3, 0x1d, // EndTime 0x00, 0x00, 0x00, 0x00, 0x5f, 0x49, 0x7d, 0xc6, // Weight 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xd4, 0x31, // Stake 0x00, 0x00, 0x00, 0x01, 0x39, 0xc3, 0x3a, 0x49, 0x9c, 0xe4, 0xc3, 0x3a, 0x3b, 0x09, 0xcd, 0xd2, 0xcf, 0xa0, 0x1a, 0xe7, 0x0d, 0xbf, 0x2d, 0x18, 0xb2, 0xd7, 0xd1, 0x68, 0x52, 0x44, 0x40, 0xe5, 0x5d, 0x55, 0x00, 0x88, 0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x01, 0xd1, 0xa9, 0x4a, 0x20, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x01, 0x3c, 0xb7, 0xd3, 0x84, 0x2e, 0x8c, 0xee, 0x6a, 0x0e, 0xbd, 0x09, 0xf1, 0xfe, 0x88, 0x4f, 0x68, 0x61, 0xe1, 0xb2, 0x9c, // RewardsOwner 0x00, 0x00, 0x00, 0x0b, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x01, 0xda, 0x2b, 0xee, 0x01, 0xbe, 0x82, 0xec, 0xc0, 0x0c, 0x34, 0xf3, 0x61, 0xed, 0xa8, 0xeb, 0x30, 0xfb, 0x5a, 0x71, 0x5c, // Shares 0x00, 0x00, 0x00, 0x64, ] ``` ## Unsigned Remove Avalanche L1 Validator TX ### What Unsigned Remove Avalanche L1 Validator TX Contains An unsigned remove Avalanche L1 validator TX contains a `BaseTx`, `NodeID`, `SubnetID`, and `SubnetAuth`. The `TypeID` for this type is 23 or `0x00000017`. 
- **`BaseTx`** - **`NodeID`** is the 20 byte node ID of the validator. - **`SubnetID`** is the 32 byte Avalanche L1 ID (SubnetID) that the validator is being removed from. - **`SubnetAuth`** contains `SigIndices` and has a type id of `0x0000000a`. `SigIndices` is a list of unique ints that define the addresses signing the control signature which proves that the issuer has the right to remove the node from the Avalanche L1. The array must be sorted low to high. ### Gantt Unsigned Remove Avalanche L1 Validator TX Specification ```text +---------------+----------------------+------------------------------------------------+ | base_tx : BaseTx | size(base_tx) bytes | +---------------+----------------------+------------------------------------------------+ | node_id : [20]byte | 20 bytes | +---------------+----------------------+------------------------------------------------+ | subnet_id : [32]byte | 32 bytes | +---------------+----------------------+------------------------------------------------+ | sig_indices : SubnetAuth | 4 bytes + len(sig_indices) bytes | +---------------+----------------------+------------------------------------------------+ | 56 + len(sig_indices) + size(base_tx) bytes | +---------------------------------------------------------------------------------------+ ``` ### Proto Unsigned Remove Avalanche L1 Validator TX Specification ```text message RemoveSubnetValidatorTx { BaseTx base_tx = 1; // size(base_tx) string node_id = 2; // 20 bytes SubnetID subnet_id = 3; // 32 bytes SubnetAuth subnet_auth = 4; // 04 bytes + len(sig_indices) } ``` ### Unsigned Remove Avalanche L1 Validator TX Example Let's make an unsigned remove Avalanche L1 validator TX that uses the inputs and outputs from the previous examples: - **`BaseTx`**: `"Example BaseTx as defined above with ID set to 17"` - **`NodeID`**: `0xe902a9a86640bfdb1cd0e36c0cc982b83e5765fa` - **`SubnetID`**: `0x4a177205df5c29929d06db9d941f83d5ea985de302015e99252d16469a6610db` - **`SubnetAuth`**: 
`0x0000000a0000000100000000` ```text [ BaseTx <- 0x00000017000030393d0ad12b8ee8928edf248ca91ca55600fb383f07c32bff1d6dec472b25cf59a7000000000000000000000000 NodeID <- 0xe902a9a86640bfdb1cd0e36c0cc982b83e5765fa SubnetID <- 0x4a177205df5c29929d06db9d941f83d5ea985de302015e99252d16469a6610db SubnetAuth <- 0x0000000a0000000100000000 ] = [ // BaseTx 0x00, 0x00, 0x00, 0x17, 0x00, 0x00, 0x30, 0x39, 0x3d, 0x0a, 0xd1, 0x2b, 0x8e, 0xe8, 0x92, 0x8e, 0xdf, 0x24, 0x8c, 0xa9, 0x1c, 0xa5, 0x56, 0x00, 0xfb, 0x38, 0x3f, 0x07, 0xc3, 0x2b, 0xff, 0x1d, 0x6d, 0xec, 0x47, 0x2b, 0x25, 0xcf, 0x59, 0xa7, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, // NodeID 0xe9, 0x02, 0xa9, 0xa8, 0x66, 0x40, 0xbf, 0xdb, 0x1c, 0xd0, 0xe3, 0x6c, 0x0c, 0xc9, 0x82, 0xb8, 0x3e, 0x57, 0x65, 0xfa, // SubnetID 0x4a, 0x17, 0x72, 0x05, 0xdf, 0x5c, 0x29, 0x92, 0x9d, 0x06, 0xdb, 0x9d, 0x94, 0x1f, 0x83, 0xd5, 0xea, 0x98, 0x5d, 0xe3, 0x02, 0x01, 0x5e, 0x99, 0x25, 0x2d, 0x16, 0x46, 0x9a, 0x66, 0x10, 0xdb, // SubnetAuth // SubnetAuth TypeID 0x00, 0x00, 0x00, 0x0a, // SigIndices length 0x00, 0x00, 0x00, 0x01, // SigIndices 0x00, 0x00, 0x00, 0x00, ] ``` ## Unsigned Add Permissionless Validator TX ### What Unsigned Add Permissionless Validator TX Contains An unsigned add permissionless validator TX contains a `BaseTx`, `Validator`, `SubnetID`, `Signer`, `StakeOuts`, `ValidatorRewardsOwner`, `DelegatorRewardsOwner`, and `DelegationShares`. The `TypeID` for this type is 25 or `0x00000019`. - **`BaseTx`** - **`Validator`** Validator has a `NodeID`, `StartTime`, `EndTime`, and `Weight` - **`NodeID`** is the 20 byte node ID of the validator. - **`StartTime`** is a long which is the Unix time when the validator starts validating. - **`EndTime`** is a long which is the Unix time when the validator stops validating. - **`Weight`** is a long which is the amount the validator stakes. - **`SubnetID`** is the 32 byte Avalanche L1 ID (SubnetID) of the Avalanche L1 this validator will validate.
- **`Signer`** If the [SubnetID] is the primary network, [Signer] is the type ID 28 (`0x1C`) followed by a [Proof of Possession](#proof-of-possession). If the [SubnetID] is not the primary network, this value is the empty signer, whose byte representation is only the type ID 27 (`0x1B`). - **`StakeOuts`** An array of Transferable Outputs. Where to send staked tokens when done validating. - **`ValidatorRewardsOwner`** Where to send validation rewards when done validating. - **`DelegatorRewardsOwner`** Where to send delegation rewards when done validating. - **`DelegationShares`** is a uint32 which is the fee this validator charges delegators as a percentage, times 10,000. For example, if this validator has DelegationShares=300,000 then they take 30% of rewards from delegators. ### Gantt Unsigned Add Permissionless Validator TX Specification ```text +---------------+----------------------+------------------------------------------------+ | base_tx : BaseTx | size(base_tx) bytes | +---------------+----------------------+------------------------------------------------+ | validator : Validator | 44 bytes | +---------------+----------------------+------------------------------------------------+ | subnet_id : [32]byte | 32 bytes | +---------------+----------------------+------------------------------------------------+ | signer : Signer | 148 bytes | +---------------+----------------------+------------------------------------------------+ | stake_outs : []TransferOut | 4 + size(stake_outs) bytes | +---------------+----------------------+------------------------------------------------+ | validator_rewards_owner : SECP256K1OutputOwners | size(validator_rewards_owner) bytes | +---------------+----------------------+------------------------------------------------+ | delegator_rewards_owner : SECP256K1OutputOwners | size(delegator_rewards_owner) bytes | +---------------+----------------------+------------------------------------------------+ | delegation_shares : uint32 | 4
bytes | +---------------+----------------------+------------------------------------------------+ | 232 + size(base_tx) + size(stake_outs) + | | size(validator_rewards_owner) + size(delegator_rewards_owner) bytes | +---------------------------------------------------------------------------------------+ ``` ### Proto Unsigned Add Permissionless Validator TX Specification ```text message AddPermissionlessValidatorTx { BaseTx base_tx = 1; // size(base_tx) Validator validator = 2; // 44 bytes SubnetID subnet_id = 3; // 32 bytes Signer signer = 4; // 148 bytes repeated TransferOut stake_outs = 5; // 4 bytes + size(stake_outs) SECP256K1OutputOwners validator_rewards_owner = 6; // size(validator_rewards_owner) bytes SECP256K1OutputOwners delegator_rewards_owner = 7; // size(delegator_rewards_owner) bytes uint32 delegation_shares = 8; // 4 bytes } ``` ### Unsigned Add Permissionless Validator TX Example Let's make an unsigned add permissionless validator TX that uses the inputs and outputs from the previous examples: - **`BaseTx`**: `"Example BaseTx as defined above with ID set to 1a"` - **`Validator`**: `0x5fa29ed4356903dac2364713c60f57d8472c7dda000000006397616e0000000063beee6e000001d1a94a2000` - **`SubnetID`**: `0xf3086d7bfc35be1c68db664ba9ce61a2060126b0d6b4bfb09fd7a5fb7678cada` - **`Signer`**: `0x0000001ca5af179e4188583893c2b99e1a8be27d90a9213cfbff1d75b74fe2bc9f3b072c2ded0863a9d9acd9033f223295810e429238e28d3c9b7f7212b63d746b2ae73a54fe08a3de61b132f2f89e9eeff97d4d7ca3a3c88986aa855cd36296fcfe8f02162d0258be494d267d4c5798bc081ab602ded90b0fc16d8a035e68ff5294794cb63ff1ee068fbfc2b4c8cd2d08ebf297` - **`StakeOuts`**: `0x000000013d0ad12b8ee8928edf248ca91ca55600fb383f07c32bff1d6dec472b25cf59a700000007000001d1a94a20000000000000000000000000010000000133eeffc64785cf9d80e7731d9f31f67bd03c5cf0` - **`ValidatorRewardsOwner`**: `0x0000000b0000000000000000000000010000000172f3eb9aeaf8283011ce6e437fdecd65eace8f52` - **`DelegatorRewardsOwner`**: 
`0x0000000b00000000000000000000000100000001b2b91313ac487c222445254e26cd026d21f6f440` - **`DelegationShares`**: `0x00004e20` ```text [ BaseTx <- 0x0000001a00003039e902a9a86640bfdb1cd0e36c0cc982b83e5765fad5f6bbe6abdcce7b5ae7d7c700000000000000014a177205df5c29929d06db9d941f83d5ea985de302015e99252d16469a6610db000000003d0ad12b8ee8928edf248ca91ca55600fb383f07c32bff1d6dec472b25cf59a700000005000001d1a94a2000000000010000000000000000 Validator <- 0x5fa29ed4356903dac2364713c60f57d8472c7dda000000006397616e0000000063beee6e000001d1a94a2000 SubnetID <- 0xf3086d7bfc35be1c68db664ba9ce61a2060126b0d6b4bfb09fd7a5fb7678cada Signer <- 0x0000001ca5af179e4188583893c2b99e1a8be27d90a9213cfbff1d75b74fe2bc9f3b072c2ded0863a9d9acd9033f223295810e429238e28d3c9b7f7212b63d746b2ae73a54fe08a3de61b132f2f89e9eeff97d4d7ca3a3c88986aa855cd36296fcfe8f02162d0258be494d267d4c5798bc081ab602ded90b0fc16d8a035e68ff5294794cb63ff1ee068fbfc2b4c8cd2d08ebf297 StakeOuts <- 0x000000013d0ad12b8ee8928edf248ca91ca55600fb383f07c32bff1d6dec472b25cf59a700000007000001d1a94a20000000000000000000000000010000000133eeffc64785cf9d80e7731d9f31f67bd03c5cf0 ValidatorRewardsOwner <- 0x0000000b0000000000000000000000010000000172f3eb9aeaf8283011ce6e437fdecd65eace8f52 DelegatorRewardsOwner <- 0x0000000b00000000000000000000000100000001b2b91313ac487c222445254e26cd026d21f6f440 DelegationShares <- 0x00004e20 ] = [ // BaseTx 0x00, 0x00, 0x00, 0x1a, 0x00, 0x00, 0x30, 0x39, 0xe9, 0x02, 0xa9, 0xa8, 0x66, 0x40, 0xbf, 0xdb, 0x1c, 0xd0, 0xe3, 0x6c, 0x0c, 0xc9, 0x82, 0xb8, 0x3e, 0x57, 0x65, 0xfa, 0xd5, 0xf6, 0xbb, 0xe6, 0xab, 0xdc, 0xce, 0x7b, 0x5a, 0xe7, 0xd7, 0xc7, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x4a, 0x17, 0x72, 0x05, 0xdf, 0x5c, 0x29, 0x92, 0x9d, 0x06, 0xdb, 0x9d, 0x94, 0x1f, 0x83, 0xd5, 0xea, 0x98, 0x5d, 0xe3, 0x02, 0x01, 0x5e, 0x99, 0x25, 0x2d, 0x16, 0x46, 0x9a, 0x66, 0x10, 0xdb, 0x00, 0x00, 0x00, 0x00, 0x3d, 0x0a, 0xd1, 0x2b, 0x8e, 0xe8, 0x92, 0x8e, 0xdf, 0x24, 0x8c, 0xa9, 0x1c, 0xa5, 0x56, 0x00, 0xfb, 0x38, 0x3f, 0x07, 0xc3,
0x2b, 0xff, 0x1d, 0x6d, 0xec, 0x47, 0x2b, 0x25, 0xcf, 0x59, 0xa7, 0x00, 0x00, 0x00, 0x05, 0x00, 0x00, 0x01, 0xd1, 0xa9, 0x4a, 0x20, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, // Validator // NodeID 0x5f, 0xa2, 0x9e, 0xd4, 0x35, 0x69, 0x03, 0xda, 0xc2, 0x36, 0x47, 0x13, 0xc6, 0x0f, 0x57, 0xd8, 0x47, 0x2c, 0x7d, 0xda, // Start time 0x00, 0x00, 0x00, 0x00, 0x63, 0x97, 0x61, 0x6e, // End time 0x00, 0x00, 0x00, 0x00, 0x63, 0xbe, 0xee, 0x6e, // Weight 0x00, 0x00, 0x01, 0xd1, 0xa9, 0x4a, 0x20, 0x00, // SubnetID 0xf3, 0x08, 0x6d, 0x7b, 0xfc, 0x35, 0xbe, 0x1c, 0x68, 0xdb, 0x66, 0x4b, 0xa9, 0xce, 0x61, 0xa2, 0x06, 0x01, 0x26, 0xb0, 0xd6, 0xb4, 0xbf, 0xb0, 0x9f, 0xd7, 0xa5, 0xfb, 0x76, 0x78, 0xca, 0xda, // Signer // TypeID 0x00, 0x00, 0x00, 0x1c, // Pub key 0xa5, 0xaf, 0x17, 0x9e, 0x41, 0x88, 0x58, 0x38, 0x93, 0xc2, 0xb9, 0x9e, 0x1a, 0x8b, 0xe2, 0x7d, 0x90, 0xa9, 0x21, 0x3c, 0xfb, 0xff, 0x1d, 0x75, 0xb7, 0x4f, 0xe2, 0xbc, 0x9f, 0x3b, 0x07, 0x2c, 0x2d, 0xed, 0x08, 0x63, 0xa9, 0xd9, 0xac, 0xd9, 0x03, 0x3f, 0x22, 0x32, 0x95, 0x81, 0x0e, 0x42, // Sig 0x92, 0x38, 0xe2, 0x8d, 0x3c, 0x9b, 0x7f, 0x72, 0x12, 0xb6, 0x3d, 0x74, 0x6b, 0x2a, 0xe7, 0x3a, 0x54, 0xfe, 0x08, 0xa3, 0xde, 0x61, 0xb1, 0x32, 0xf2, 0xf8, 0x9e, 0x9e, 0xef, 0xf9, 0x7d, 0x4d, 0x7c, 0xa3, 0xa3, 0xc8, 0x89, 0x86, 0xaa, 0x85, 0x5c, 0xd3, 0x62, 0x96, 0xfc, 0xfe, 0x8f, 0x02, 0x16, 0x2d, 0x02, 0x58, 0xbe, 0x49, 0x4d, 0x26, 0x7d, 0x4c, 0x57, 0x98, 0xbc, 0x08, 0x1a, 0xb6, 0x02, 0xde, 0xd9, 0x0b, 0x0f, 0xc1, 0x6d, 0x8a, 0x03, 0x5e, 0x68, 0xff, 0x52, 0x94, 0x79, 0x4c, 0xb6, 0x3f, 0xf1, 0xee, 0x06, 0x8f, 0xbf, 0xc2, 0xb4, 0xc8, 0xcd, 0x2d, 0x08, 0xeb, 0xf2, 0x97, // Stake outs // Num stake outs 0x00, 0x00, 0x00, 0x01, // AssetID 0x3d, 0x0a, 0xd1, 0x2b, 0x8e, 0xe8, 0x92, 0x8e, 0xdf, 0x24, 0x8c, 0xa9, 0x1c, 0xa5, 0x56, 0x00, 0xfb, 0x38, 0x3f, 0x07, 0xc3, 0x2b, 0xff, 0x1d, 0x6d, 0xec, 0x47, 0x2b, 0x25, 0xcf, 0x59, 0xa7, // Output // typeID 0x00, 0x00, 0x00, 0x07, // Amount 0x00, 0x00, 0x01,
0xd1, 0xa9, 0x4a, 0x20, 0x00, // Locktime 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, // Threshold 0x00, 0x00, 0x00, 0x01, // Num addrs 0x00, 0x00, 0x00, 0x01, // Addr 0 0x33, 0xee, 0xff, 0xc6, 0x47, 0x85, 0xcf, 0x9d, 0x80, 0xe7, 0x73, 0x1d, 0x9f, 0x31, 0xf6, 0x7b, 0xd0, 0x3c, 0x5c, 0xf0, // Validator rewards owner // TypeID 0x00, 0x00, 0x00, 0x0b, // Locktime 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, // Threshold 0x00, 0x00, 0x00, 0x01, // Num addrs 0x00, 0x00, 0x00, 0x01, // Addr 0 0x72, 0xf3, 0xeb, 0x9a, 0xea, 0xf8, 0x28, 0x30, 0x11, 0xce, 0x6e, 0x43, 0x7f, 0xde, 0xcd, 0x65, 0xea, 0xce, 0x8f, 0x52, // Delegator rewards owner // TypeID 0x00, 0x00, 0x00, 0x0b, // Locktime 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, // Threshold 0x00, 0x00, 0x00, 0x01, // Num addrs 0x00, 0x00, 0x00, 0x01, // Addr 0 0xb2, 0xb9, 0x13, 0x13, 0xac, 0x48, 0x7c, 0x22, 0x24, 0x45, 0x25, 0x4e, 0x26, 0xcd, 0x02, 0x6d, 0x21, 0xf6, 0xf4, 0x40, // Delegation shares 0x00, 0x00, 0x4e, 0x20, ] ``` ## Unsigned Add Permissionless Delegator TX ### What Unsigned Add Permissionless Delegator TX Contains An unsigned add permissionless delegator TX contains a `BaseTx`, `Validator`, `SubnetID`, `StakeOuts`, and `DelegatorRewardsOwner`. The `TypeID` for this type is 26 or `0x0000001a`. - **`BaseTx`** - **`Validator`** Validator has a `NodeID`, `StartTime`, `EndTime`, and `Weight` - **`NodeID`** is the 20 byte node ID of the validator being delegated to. - **`StartTime`** is a long which is the Unix time when the delegator starts delegating. - **`EndTime`** is a long which is the Unix time when the delegator stops delegating. - **`Weight`** is a long which is the amount the delegator stakes. - **`SubnetID`** is the 32 byte Avalanche L1 ID (SubnetID) of the Avalanche L1 this delegation is on. - **`StakeOuts`** is an array of Transferable Outputs specifying where to send staked tokens when the delegation ends. - **`DelegatorRewardsOwner`** specifies where to send staking rewards when the delegation ends.
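The `Validator` field used by the staking transactions in this document is fixed-size: a 20-byte `NodeID` followed by `StartTime`, `EndTime`, and `Weight` as big-endian 8-byte longs, 44 bytes in total. A minimal sketch of that packing (Python for illustration; `pack_validator` is a hypothetical helper, not an AvalancheGo API):

```python
import struct

def pack_validator(node_id: bytes, start_time: int, end_time: int, weight: int) -> bytes:
    """Pack the 44-byte Validator field: 20-byte NodeID, then
    StartTime, EndTime, and Weight as big-endian 8-byte longs."""
    assert len(node_id) == 20, "NodeID must be exactly 20 bytes"
    return struct.pack(">20sQQQ", node_id, start_time, end_time, weight)

# Using the example values from this transaction type:
node_id = bytes.fromhex("5fa29ed4356903dac2364713c60f57d8472c7dda")
packed = pack_validator(node_id, 0x63976197, 0x63BEEE97, 0x01D1A94A2000)
print(len(packed))    # 44
print(packed.hex())
```

Packing the example values reproduces the 44-byte `Validator` hex shown in the example for this section.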
### Gantt Unsigned Add Permissionless Delegator TX Specification ```text +---------------+----------------------+------------------------------------------------+ | base_tx : BaseTx | size(base_tx) bytes | +---------------+----------------------+------------------------------------------------+ | validator : Validator | 44 bytes | +---------------+----------------------+------------------------------------------------+ | subnet_id : [32]byte | 32 bytes | +---------------+----------------------+------------------------------------------------+ | stake_outs : []TransferOut | 4 + size(stake_outs) bytes | +---------------+----------------------+------------------------------------------------+ | delegator_rewards_owner : SECP256K1OutputOwners | size(delegator_rewards_owner) bytes | +---------------+----------------------+------------------------------------------------+ | 80 + size(base_tx) + size(stake_outs) + size(delegator_rewards_owner) bytes | +---------------------------------------------------------------------------------------+ ``` ### Proto Unsigned Add Permissionless Delegator TX Specification ```text message AddPermissionlessDelegatorTx { BaseTx base_tx = 1; // size(base_tx) Validator validator = 2; // size(validator) SubnetID subnet_id = 3; // 32 bytes repeated TransferOut stake_outs = 4; // 4 bytes + size(stake_outs) SECP256K1OutputOwners delegator_rewards_owner = 5; // size(delegator_rewards_owner) bytes } ``` ### Unsigned Add Permissionless Delegator TX Example Let's make an unsigned add permissionless delegator TX that uses the inputs and outputs from the previous examples: - **`BaseTx`**: `"Example BaseTx as defined above with ID set to 1a"` - **`Validator`**: `0x5fa29ed4356903dac2364713c60f57d8472c7dda00000000639761970000000063beee97000001d1a94a2000` - **`SubnetID`**: `0xf3086d7bfc35be1c68db664ba9ce61a2060126b0d6b4bfb09fd7a5fb7678cada` - **`StakeOuts`**: 
`0x000000013d0ad12b8ee8928edf248ca91ca55600fb383f07c32bff1d6dec472b25cf59a700000007000001d1a94a20000000000000000000000000010000000133eeffc64785cf9d80e7731d9f31f67bd03c5cf0` - **`DelegatorRewardsOwner`**: `0x0000000b0000000000000000000000010000000172f3eb9aeaf8283011ce6e437fdecd65eace8f52` ```text [ BaseTx <- 0x0000001a00003039e902a9a86640bfdb1cd0e36c0cc982b83e5765fad5f6bbe6abdcce7b5ae7d7c700000000000000014a177205df5c29929d06db9d941f83d5ea985de302015e99252d16469a6610db000000003d0ad12b8ee8928edf248ca91ca55600fb383f07c32bff1d6dec472b25cf59a700000005000001d1a94a2000000000010000000000000000 Validator <- 0x5fa29ed4356903dac2364713c60f57d8472c7dda00000000639761970000000063beee97000001d1a94a2000 SubnetID <- 0xf3086d7bfc35be1c68db664ba9ce61a2060126b0d6b4bfb09fd7a5fb7678cada StakeOuts <- 0x000000013d0ad12b8ee8928edf248ca91ca55600fb383f07c32bff1d6dec472b25cf59a700000007000001d1a94a20000000000000000000000000010000000133eeffc64785cf9d80e7731d9f31f67bd03c5cf0 DelegatorRewardsOwner <- 0x0000000b0000000000000000000000010000000172f3eb9aeaf8283011ce6e437fdecd65eace8f52 ] = [ // BaseTx 0x00, 0x00, 0x00, 0x1a, 0x00, 0x00, 0x30, 0x39, 0xe9, 0x02, 0xa9, 0xa8, 0x66, 0x40, 0xbf, 0xdb, 0x1c, 0xd0, 0xe3, 0x6c, 0x0c, 0xc9, 0x82, 0xb8, 0x3e, 0x57, 0x65, 0xfa, 0xd5, 0xf6, 0xbb, 0xe6, 0xab, 0xdc, 0xce, 0x7b, 0x5a, 0xe7, 0xd7, 0xc7, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x4a, 0x17, 0x72, 0x05, 0xdf, 0x5c, 0x29, 0x92, 0x9d, 0x06, 0xdb, 0x9d, 0x94, 0x1f, 0x83, 0xd5, 0xea, 0x98, 0x5d, 0xe3, 0x02, 0x01, 0x5e, 0x99, 0x25, 0x2d, 0x16, 0x46, 0x9a, 0x66, 0x10, 0xdb, 0x00, 0x00, 0x00, 0x00, 0x3d, 0x0a, 0xd1, 0x2b, 0x8e, 0xe8, 0x92, 0x8e, 0xdf, 0x24, 0x8c, 0xa9, 0x1c, 0xa5, 0x56, 0x00, 0xfb, 0x38, 0x3f, 0x07, 0xc3, 0x2b, 0xff, 0x1d, 0x6d, 0xec, 0x47, 0x2b, 0x25, 0xcf, 0x59, 0xa7, 0x00, 0x00, 0x00, 0x05, 0x00, 0x00, 0x01, 0xd1, 0xa9, 0x4a, 0x20, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, // Validator // NodeID 0x5f, 0xa2, 0x9e, 0xd4, 0x35, 0x69, 0x03, 0xda, 0xc2, 
0x36, 0x47, 0x13, 0xc6, 0x0f, 0x57, 0xd8, 0x47, 0x2c, 0x7d, 0xda, // Start time 0x00, 0x00, 0x00, 0x00, 0x63, 0x97, 0x61, 0x97, // End time 0x00, 0x00, 0x00, 0x00, 0x63, 0xbe, 0xee, 0x97, // Weight 0x00, 0x00, 0x01, 0xd1, 0xa9, 0x4a, 0x20, 0x00, // Stake_outs // Num stake outs 0x00, 0x00, 0x00, 0x01, // Stake out 0 // AssetID 0x3d, 0x0a, 0xd1, 0x2b, 0x8e, 0xe8, 0x92, 0x8e, 0xdf, 0x24, 0x8c, 0xa9, 0x1c, 0xa5, 0x56, 0x00, 0xfb, 0x38, 0x3f, 0x07, 0xc3, 0x2b, 0xff, 0x1d, 0x6d, 0xec, 0x47, 0x2b, 0x25, 0xcf, 0x59, 0xa7, // TypeID 0x00, 0x00, 0x00, 0x07, // Amount 0x00, 0x00, 0x01, 0xd1, 0xa9, 0x4a, 0x20, 0x00, // Locktime 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, // Threshold 0x00, 0x00, 0x00, 0x01, // Num addrs 0x00, 0x00, 0x00, 0x01, // Addr 0 0x33, 0xee, 0xff, 0xc6, 0x47, 0x85, 0xcf, 0x9d, 0x80, 0xe7, 0x73, 0x1d, 0x9f, 0x31, 0xf6, 0x7b, 0xd0, 0x3c, 0x5c, 0xf0, // Delegator_rewards_owner // TypeID 0x00, 0x00, 0x00, 0x0b, // Locktime 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, // Threshold 0x00, 0x00, 0x00, 0x01, // Num addrs 0x00, 0x00, 0x00, 0x01, // Addr 0 0x72, 0xf3, 0xeb, 0x9a, 0xea, 0xf8, 0x28, 0x30, 0x11, 0xce, 0x6e, 0x43, 0x7f, 0xde, 0xcd, 0x65, 0xea, 0xce, 0x8f, 0x52, ] ``` ## Unsigned Transform Avalanche L1 TX > **Note:** This transaction type has been disabled post-activation of ACP-77 (Etna upgrade). The `TransformSubnetTx` is no longer accepted on the P-Chain after the activation of this upgrade. Transforms a permissioned Avalanche L1 into a permissionless Avalanche L1. Must be signed by the Avalanche L1 owner. ### What Unsigned Transform Avalanche L1 TX Contains An unsigned transform Avalanche L1 TX contains a `BaseTx`, `SubnetID`, `AssetID`, `InitialSupply`, `MaximumSupply`, `MinConsumptionRate`, `MaxConsumptionRate`, `MinValidatorStake`, `MaxValidatorStake`, `MinStakeDuration`, `MaxStakeDuration`, `MinDelegationFee`, `MinDelegatorStake`, `MaxValidatorWeightFactor`, `UptimeRequirement`, and `SubnetAuth`. 
The `TypeID` for this type is 24 or `0x00000018`. - **`BaseTx`** - **`SubnetID`** is a 32-byte Avalanche L1 ID of the Avalanche L1 to transform. - **`AssetID`** is a 32-byte array that defines which asset to use when staking on the Avalanche L1. - Restrictions - Must not be the Empty ID - Must not be the AVAX ID - **`InitialSupply`** is a long which is the amount to initially specify as the current supply. - Restrictions - Must be > 0 - **`MaximumSupply`** is a long which is the amount to specify as the maximum token supply. - Restrictions - Must be >= [InitialSupply] - **`MinConsumptionRate`** is a long which is the rate to allocate funds if the validator's stake duration is 0. - **`MaxConsumptionRate`** is a long which is the rate to allocate funds if the validator's stake duration is equal to the minting period. - Restrictions - Must be `>=` [MinConsumptionRate] - Must be `<=` [`reward.PercentDenominator`] - **`MinValidatorStake`** is a long which is the minimum amount of funds required to become a validator. - Restrictions - Must be `>` 0 - Must be `<=` [InitialSupply] - **`MaxValidatorStake`** is a long which is the maximum amount of funds a single validator can be allocated, including delegated funds. - Restrictions: - Must be `>=` [MinValidatorStake] - Must be `<=` [MaximumSupply] - **`MinStakeDuration`** is a short which is the minimum number of seconds a staker can stake for. - Restrictions - Must be `>` 0 - **`MaxStakeDuration`** is a short which is the maximum number of seconds a staker can stake for. - Restrictions - Must be `>=` [MinStakeDuration] - Must be `<=` [GlobalMaxStakeDuration] - **`MinDelegationFee`** is a short which is the minimum percentage a validator must charge a delegator for delegating. - Restrictions - Must be `<=` [`reward.PercentDenominator`] - **`MinDelegatorStake`** is a long which is the minimum amount of funds required to become a delegator.
- Restrictions - Must be `>` 0 - **`MaxValidatorWeightFactor`** is a byte which is the factor used to calculate the maximum amount of delegation a validator can receive. Note: a value of 1 effectively disables delegation. - Restrictions - Must be `>` 0 - **`UptimeRequirement`** is a short which is the minimum percentage a validator must be online and responsive to receive a reward. - Restrictions - Must be `<=` [`reward.PercentDenominator`] - **`SubnetAuth`** contains `SigIndices` and has a type id of `0x0000000a`. `SigIndices` is a list of unique ints that define the addresses signing the control signature that authorizes this transformation. The array must be sorted low to high. ### Gantt Unsigned Transform Avalanche L1 TX Specification ```text +----------------------+------------------+----------------------------------+ | base_tx : BaseTx | size(base_tx) bytes | +----------------------+------------------+----------------------------------+ | subnet_id : [32]byte | 32 bytes | +----------------------+------------------+----------------------------------+ | asset_id : [32]byte | 32 bytes | +----------------------+------------------+----------------------------------+ | initial_supply : long | 8 bytes | +----------------------+------------------+----------------------------------+ | maximum_supply : long | 8 bytes | +----------------------+------------------+----------------------------------+ | min_consumption_rate : long | 8 bytes | +----------------------+------------------+----------------------------------+ | max_consumption_rate : long | 8 bytes | +----------------------+------------------+----------------------------------+ | min_validator_stake : long | 8 bytes | +----------------------+------------------+----------------------------------+ | max_validator_stake : long | 8 bytes | +----------------------+------------------+----------------------------------+ | min_stake_duration : short | 4 bytes |
+----------------------+------------------+----------------------------------+ | max_stake_duration : short | 4 bytes | +----------------------+------------------+----------------------------------+ | min_delegation_fee : short | 4 bytes | +----------------------+------------------+----------------------------------+ | min_delegator_stake : long | 8 bytes | +----------------------+------------------+----------------------------------+ | max_validator_weight_factor : byte | 1 byte | +----------------------+------------------+----------------------------------+ | uptime_requirement : short | 4 bytes | +----------------------+------------------+----------------------------------+ | subnet_auth : SubnetAuth | 4 bytes + len(sig_indices) bytes | +----------------------+------------------+----------------------------------+ | 141 + size(base_tx) + len(sig_indices) bytes | +----------------------------------------------------------------------------+ ``` ### Proto Unsigned Transform Avalanche L1 TX Specification ```text message TransformSubnetTx { BaseTx base_tx = 1; // size(base_tx) SubnetID subnet_id = 2; // 32 bytes bytes asset_id = 3; // 32 bytes uint64 initial_supply = 4; // 08 bytes uint64 maximum_supply = 5; // 08 bytes uint64 min_consumption_rate = 6; // 08 bytes uint64 max_consumption_rate = 7; // 08 bytes uint64 min_validator_stake = 8; // 08 bytes uint64 max_validator_stake = 9; // 08 bytes uint32 min_stake_duration = 10; // 04 bytes uint32 max_stake_duration = 11; // 04 bytes uint32 min_delegation_fee = 12; // 04 bytes uint64 min_delegator_stake = 13; // 08 bytes byte max_validator_weight_factor = 14; // 01 byte uint32 uptime_requirement = 15; // 04 bytes SubnetAuth subnet_auth = 16; // 04 bytes + len(sig_indices) } ``` ### Unsigned Transform Avalanche L1 TX Example Let's make an unsigned transform Avalanche L1 TX that uses the inputs and outputs from the previous examples: - **`BaseTx`**: `"Example BaseTx as defined above with ID set to 18"` - **`SubnetID`**:
`0x5fa29ed4356903dac2364713c60f57d8472c7dda4a5e08d88a88ad8ea71aed60` - **`AssetID`**: `0xf3086d7bfc35be1c68db664ba9ce61a2060126b0d6b4bfb09fd7a5fb7678cada` - **`InitialSupply`**: `0x000000e8d4a51000` - **`MaximumSupply`**: `0x000009184e72a000` - **`MinConsumptionRate`**: `0x0000000000000001` - **`MaxConsumptionRate`**: `0x000000000000000a` - **`MinValidatorStake`**: `0x000000174876e800` - **`MaxValidatorStake`**: `0x000001d1a94a2000` - **`MinStakeDuration`**: `0x00015180` - **`MaxStakeDuration`**: `0x01e13380` - **`MinDelegationFee`**: `0x00002710` - **`MinDelegatorStake`**: `0x000000174876e800` - **`MaxValidatorWeightFactor`**: `0x05` - **`UptimeRequirement`**: `0x000c3500` - **`SubnetAuth`**: - **`TypeID`**: `0x0000000a` - **`SigIndices`**: `0x00000000` ```text [ BaseTx <- 0x0000001800003039e902a9a86640bfdb1cd0e36c0cc982b83e5765fad5f6bbe6abdcce7b5ae7d7c700000000000000014a177205df5c29929d06db9d941f83d5ea985de302015e99252d16469a6610db000000003d0ad12b8ee8928edf248ca91ca55600fb383f07c32bff1d6dec472b25cf59a70000000500000000000f4240000000010000000000000000 SubnetID <- 0x5fa29ed4356903dac2364713c60f57d8472c7dda4a5e08d88a88ad8ea71aed60 AssetID <- 0xf3086d7bfc35be1c68db664ba9ce61a2060126b0d6b4bfb09fd7a5fb7678cada InitialSupply <- 0x000000e8d4a51000 MaximumSupply <- 0x000009184e72a000 MinConsumptionRate <- 0x0000000000000001 MaxConsumptionRate <- 0x000000000000000a MinValidatorStake <- 0x000000174876e800 MaxValidatorStake <- 0x000001d1a94a2000 MinStakeDuration <- 0x00015180 MaxStakeDuration <- 0x01e13380 MinDelegationFee <- 0x00002710 MinDelegatorStake <- 0x000000174876e800 MaxValidatorWeightFactor <- 0x05 UptimeRequirement <- 0x000c3500 SubnetAuth <- 0x0000000a0000000100000000 ] = [ // BaseTx: 0x00, 0x00, 0x00, 0x18, 0x00, 0x00, 0x30, 0x39, 0xe9, 0x02, 0xa9, 0xa8, 0x66, 0x40, 0xbf, 0xdb, 0x1c, 0xd0, 0xe3, 0x6c, 0x0c, 0xc9, 0x82, 0xb8, 0x3e, 0x57, 0x65, 0xfa, 0xd5, 0xf6, 0xbb, 0xe6, 0xab, 0xdc, 0xce, 0x7b, 0x5a, 0xe7, 0xd7, 0xc7, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x01, 0x4a, 0x17, 0x72, 0x05, 0xdf, 0x5c, 0x29, 0x92, 0x9d, 0x06, 0xdb, 0x9d, 0x94, 0x1f, 0x83, 0xd5, 0xea, 0x98, 0x5d, 0xe3, 0x02, 0x01, 0x5e, 0x99, 0x25, 0x2d, 0x16, 0x46, 0x9a, 0x66, 0x10, 0xdb, 0x00, 0x00, 0x00, 0x00, 0x3d, 0x0a, 0xd1, 0x2b, 0x8e, 0xe8, 0x92, 0x8e, 0xdf, 0x24, 0x8c, 0xa9, 0x1c, 0xa5, 0x56, 0x00, 0xfb, 0x38, 0x3f, 0x07, 0xc3, 0x2b, 0xff, 0x1d, 0x6d, 0xec, 0x47, 0x2b, 0x25, 0xcf, 0x59, 0xa7, 0x00, 0x00, 0x00, 0x05, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0f, 0x42, 0x40, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, // SubnetID 0x5f, 0xa2, 0x9e, 0xd4, 0x35, 0x69, 0x03, 0xda, 0xc2, 0x36, 0x47, 0x13, 0xc6, 0x0f, 0x57, 0xd8, 0x47, 0x2c, 0x7d, 0xda, 0x4a, 0x5e, 0x08, 0xd8, 0x8a, 0x88, 0xad, 0x8e, 0xa7, 0x1a, 0xed, 0x60, // AssetID 0xf3, 0x08, 0x6d, 0x7b, 0xfc, 0x35, 0xbe, 0x1c, 0x68, 0xdb, 0x66, 0x4b, 0xa9, 0xce, 0x61, 0xa2, 0x06, 0x01, 0x26, 0xb0, 0xd6, 0xb4, 0xbf, 0xb0, 0x9f, 0xd7, 0xa5, 0xfb, 0x76, 0x78, 0xca, 0xda, // InitialSupply 0x00, 0x00, 0x00, 0xe8, 0xd4, 0xa5, 0x10, 0x00, // MaximumSupply 0x00, 0x00, 0x09, 0x18, 0x4e, 0x72, 0xa0, 0x00, //
MinConsumptionRate 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, // MaxConsumptionRate 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0a, // MinValidatorStake 0x00, 0x00, 0x00, 0x17, 0x48, 0x76, 0xe8, 0x00, // MaxValidatorStake 0x00, 0x00, 0x01, 0xd1, 0xa9, 0x4a, 0x20, 0x00, // MinStakeDuration 0x00, 0x01, 0x51, 0x80, // MaxStakeDuration 0x01, 0xe1, 0x33, 0x80, // MinDelegationFee 0x00, 0x00, 0x27, 0x10, // MinDelegatorStake 0x00, 0x00, 0x00, 0x17, 0x48, 0x76, 0xe8, 0x00, // MaxValidatorWeightFactor 0x05, // UptimeRequirement 0x00, 0x0c, 0x35, 0x00, // SubnetAuth // SubnetAuth TypeID 0x00, 0x00, 0x00, 0x0a, // SigIndices length 0x00, 0x00, 0x00, 0x01, // SigIndices 0x00, 0x00, 0x00, 0x00, ] ``` ## Unsigned Add Avalanche L1 Validator TX ### What Unsigned Add Avalanche L1 Validator TX Contains An unsigned add Avalanche L1 validator TX contains a `BaseTx`, `Validator`, `SubnetID`, and `SubnetAuth`. The `TypeID` for this type is `0x0000000d`. - **`BaseTx`** - **`Validator`** Validator has a `NodeID`, `StartTime`, `EndTime`, and `Weight` - **`NodeID`** is the 20 byte node ID of the validator. - **`StartTime`** is a long which is the Unix time when the validator starts validating. - **`EndTime`** is a long which is the Unix time when the validator stops validating. - **`Weight`** is a long which is the amount the validator stakes. - **`SubnetID`** is the 32 byte Avalanche L1 ID to add the validator to. - **`SubnetAuth`** contains `SigIndices` and has a type id of `0x0000000a`. `SigIndices` is a list of unique ints that define the addresses signing the control signature to add a validator to an Avalanche L1. The array must be sorted low to high.
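The `SubnetAuth` layout described above (a 4-byte type ID of `0x0000000a`, followed by a length-prefixed array of 4-byte signature indices, sorted low to high) is simple to serialize. A minimal sketch (Python for illustration; `pack_subnet_auth` is a hypothetical helper, not part of any Avalanche library):

```python
import struct

def pack_subnet_auth(sig_indices):
    """Serialize a SubnetAuth: 4-byte type ID (0x0000000a), then a
    4-byte count, then each unique signature index as 4 bytes."""
    assert list(sig_indices) == sorted(set(sig_indices)), "indices must be unique and sorted low to high"
    out = struct.pack(">I", 0x0000000A)         # SubnetAuth type ID
    out += struct.pack(">I", len(sig_indices))  # number of indices
    for idx in sig_indices:
        out += struct.pack(">I", idx)           # each index is 4 bytes
    return out

print(pack_subnet_auth([0]).hex())  # 0000000a0000000100000000, as in the examples
```

A single index of `0` reproduces the `SubnetAuth` value `0x0000000a0000000100000000` used throughout the examples in this document.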
### Gantt Unsigned Add Avalanche L1 Validator TX Specification ```text +---------------+----------------------+-----------------------------------------+ | base_tx : BaseTx | size(base_tx) bytes | +---------------+----------------------+-----------------------------------------+ | validator : Validator | 44 bytes | +---------------+----------------------+-----------------------------------------+ | subnet_id : [32]byte | 32 bytes | +---------------+----------------------+-----------------------------------------+ | subnet_auth : SubnetAuth | 4 bytes + len(sig_indices) bytes | +---------------+----------------------+-----------------------------------------+ | 80 + len(sig_indices) + size(base_tx) bytes | +---------------------------------------------+ ``` ### Proto Unsigned Add Avalanche L1 Validator TX Specification ```text message AddSubnetValidatorTx { BaseTx base_tx = 1; // size(base_tx) Validator validator = 2; // size(validator) SubnetID subnet_id = 3; // 32 bytes SubnetAuth subnet_auth = 4; // 04 bytes + len(sig_indices) } ``` ### Unsigned Add Avalanche L1 Validator TX Example Let's make an unsigned add Avalanche L1 validator TX that uses the inputs and outputs from the previous examples: - **`BaseTx`**: `"Example BaseTx as defined above with ID set to 0d"` - **`NodeID`**: `0xe9094f73698002fd52c90819b457b9fbc866ab80` - **`StartTime`**: `0x000000005f21f31d` - **`EndTime`**: `0x000000005f497dc6` - **`Weight`**: `0x000000000000d431` - **`SubnetID`**: `0x58b1092871db85bc752742054e2e8be0adf8166ec1f0f0769f4779f14c71d7eb` - **`SubnetAuth`**: - **`TypeID`**: `0x0000000a` - **`SigIndices`**: `0x00000000` ```text [ BaseTx <-
0x0000000d000030390000000000000000000000000000000000000000000000000000000000000006870b7d66ac32540311379e5b5dbad28ec7eb8ddbfc8f4d67299ebb48475907a0000000700000000ee5be5c000000000000000000000000100000001da2bee01be82ecc00c34f361eda8eb30fb5a715cdfafbdf5c81f635c9257824ff21c8e3e6f7b632ac306e11446ee540d34711a15000000016870b7d66ac32540311379e5b5dbad28ec7eb8ddbfc8f4d67299ebb48475907a0000000500000000ee6b28000000000100000000 NodeID <- 0xe9094f73698002fd52c90819b457b9fbc866ab80 StarTime <- 0x000000005f21f31d EndTime <- 0x000000005f497dc6 Weight <- 0x000000000000d431 SubnetID <- 0x58b1092871db85bc752742054e2e8be0adf8166ec1f0f0769f4779f14c71d7eb SubnetAuth TypeID <- 0x0000000a SubnetAuth <- 0x00000000 ] = [ // base tx: 0x00, 0x00, 0x00, 0x0d, 0x00, 0x00, 0x30, 0x39, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x68, 0x70, 0xb7, 0xd6, 0x6a, 0xc3, 0x25, 0x40, 0x31, 0x13, 0x79, 0xe5, 0xb5, 0xdb, 0xad, 0x28, 0xec, 0x7e, 0xb8, 0xdd, 0xbf, 0xc8, 0xf4, 0xd6, 0x72, 0x99, 0xeb, 0xb4, 0x84, 0x75, 0x90, 0x7a, 0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x00, 0xee, 0x5b, 0xe5, 0xc0, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x01, 0xda, 0x2b, 0xee, 0x01, 0xbe, 0x82, 0xec, 0xc0, 0x0c, 0x34, 0xf3, 0x61, 0xed, 0xa8, 0xeb, 0x30, 0xfb, 0x5a, 0x71, 0x5c, 0x00, 0x00, 0x00, 0x01, 0xdf, 0xaf, 0xbd, 0xf5, 0xc8, 0x1f, 0x63, 0x5c, 0x92, 0x57, 0x82, 0x4f, 0xf2, 0x1c, 0x8e, 0x3e, 0x6f, 0x7b, 0x63, 0x2a, 0xc3, 0x06, 0xe1, 0x14, 0x46, 0xee, 0x54, 0x0d, 0x34, 0x71, 0x1a, 0x15, 0x00, 0x00, 0x00, 0x01, 0x68, 0x70, 0xb7, 0xd6, 0x6a, 0xc3, 0x25, 0x40, 0x31, 0x13, 0x79, 0xe5, 0xb5, 0xdb, 0xad, 0x28, 0xec, 0x7e, 0xb8, 0xdd, 0xbf, 0xc8, 0xf4, 0xd6, 0x72, 0x99, 0xeb, 0xb4, 0x84, 0x75, 0x90, 0x7a, 0x00, 0x00, 0x00, 0x05, 0x00, 0x00, 0x00, 0x00, 0xee, 0x6b, 0x28, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, // Node ID 0xe9, 0x09, 0x4f, 0x73, 0x69, 0x80, 0x02, 0xfd, 0x52, 0xc9, 0x08, 0x19, 0xb4, 0x57, 0xb9, 0xfb, 0xc8, 0x66, 0xab, 0x80, // StartTime 0x00, 0x00, 0x00, 0x00, 0x5f, 0x21, 0xf3, 0x1d, // EndTime 0x00, 0x00, 0x00, 0x00, 0x5f, 0x49, 0x7d, 0xc6, // Weight 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xd4, 0x31, // SubnetID 0x58, 0xb1, 0x09, 0x28, 0x71, 0xdb, 0x85, 0xbc, 0x75, 0x27, 0x42, 0x05, 0x4e, 0x2e, 0x8b, 0xe0, 0xad, 0xf8, 0x16, 0x6e, 0xc1, 0xf0, 0xf0, 0x76, 0x9f, 0x47, 0x79, 0xf1, 0x4c, 0x71, 0xd7, 0xeb, // SubnetAuth // SubnetAuth TypeID 0x00, 0x00, 0x00, 0x0a, // SigIndices length 0x00, 0x00, 0x00, 0x01, // SigIndices 0x00, 0x00, 0x00, 0x00, ] ``` ## Unsigned Add Delegator TX ### What Unsigned Add Delegator TX Contains An unsigned add delegator TX contains a `BaseTx`, `Validator`, `Stake`, and `RewardsOwner`. The `TypeID` for this type is `0x0000000e`. - **`BaseTx`** - **`Validator`** Validator has a `NodeID`, `StartTime`, `EndTime`, and `Weight` - **`NodeID`** is 20 bytes which is the node ID of the delegatee. - **`StartTime`** is a long which is the Unix time when the delegator starts delegating. - **`EndTime`** is a long which is the Unix time when the delegator stops delegating (and staked AVAX is returned). - **`Weight`** is a long which is the amount the delegator stakes - **`Stake`** Stake has `LockedOuts` - **`LockedOuts`** An array of Transferable Outputs that are locked for the duration of the staking period. At the end of the staking period, these outputs are refunded to their respective addresses. 
- **`RewardsOwner`** A `SECP256K1OutputOwners` ### Gantt Unsigned Add Delegator TX Specification ```text +---------------+-----------------------+-----------------------------------------+ | base_tx : BaseTx | size(base_tx) bytes | +---------------+-----------------------+-----------------------------------------+ | validator : Validator | 44 bytes | +---------------+-----------------------+-----------------------------------------+ | stake : Stake | size(LockedOuts) bytes | +---------------+-----------------------+-----------------------------------------+ | rewards_owner : SECP256K1OutputOwners | size(rewards_owner) bytes | +---------------+-----------------------+-----------------------------------------+ | 44 + size(stake) + size(rewards_owner) + size(base_tx) bytes | +-----------------------------------------------------------------+ ``` ### Proto Unsigned Add Delegator TX Specification ```text message AddDelegatorTx { BaseTx base_tx = 1; // size(base_tx) Validator validator = 2; // 44 bytes Stake stake = 3; // size(LockedOuts) SECP256K1OutputOwners rewards_owner = 4; // size(rewards_owner) } ``` ### Unsigned Add Delegator TX Example Let's make an unsigned add delegator TX that uses the inputs and outputs from the previous examples: - **`BaseTx`**: `"Example BaseTx as defined above with ID set to 0e"` - **`NodeID`**: `0xe9094f73698002fd52c90819b457b9fbc866ab80` - **`StartTime`**: `0x000000005f21f31d` - **`EndTime`**: `0x000000005f497dc6` - **`Weight`**: `0x000000000000d431` - **`Stake`**: `0x0000000139c33a499ce4c33a3b09cdd2cfa01ae70dbf2d18b2d7d168524440e55d55008800000007000001d1a94a2000000000000000000000000001000000013cb7d3842e8cee6a0ebd09f1fe884f6861e1b29c` - **`RewardsOwner`**: `0x0000000b00000000000000000000000100000001da2bee01be82ecc00c34f361eda8eb30fb5a715c` ```text [ BaseTx <-
0x0000000e000030390000000000000000000000000000000000000000000000000000000000000006870b7d66ac32540311379e5b5dbad28ec7eb8ddbfc8f4d67299ebb48475907a0000000700000000ee5be5c000000000000000000000000100000001da2bee01be82ecc00c34f361eda8eb30fb5a715cdfafbdf5c81f635c9257824ff21c8e3e6f7b632ac306e11446ee540d34711a15000000016870b7d66ac32540311379e5b5dbad28ec7eb8ddbfc8f4d67299ebb48475907a0000000500000000ee6b28000000000100000000 NodeID <- 0xe9094f73698002fd52c90819b457b9fbc866ab80 StarTime <- 0x000000005f21f31d EndTime <- 0x000000005f497dc6 Weight <- 0x000000000000d431 Stake <- 0x0000000139c33a499ce4c33a3b09cdd2cfa01ae70dbf2d18b2d7d168524440e55d55008800000007000001d1a94a2000000000000000000000000001000000013cb7d3842e8cee6a0ebd09f1fe884f6861e1b29c RewardsOwner <- 0x0000000b00000000000000000000000100000001da2bee01be82ecc00c34f361eda8eb30fb5a715c ] = [ // base tx: 0x00, 0x00, 0x00, 0x0e, 0x00, 0x00, 0x30, 0x39, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x68, 0x70, 0xb7, 0xd6, 0x6a, 0xc3, 0x25, 0x40, 0x31, 0x13, 0x79, 0xe5, 0xb5, 0xdb, 0xad, 0x28, 0xec, 0x7e, 0xb8, 0xdd, 0xbf, 0xc8, 0xf4, 0xd6, 0x72, 0x99, 0xeb, 0xb4, 0x84, 0x75, 0x90, 0x7a, 0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x00, 0xee, 0x5b, 0xe5, 0xc0, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x01, 0xda, 0x2b, 0xee, 0x01, 0xbe, 0x82, 0xec, 0xc0, 0x0c, 0x34, 0xf3, 0x61, 0xed, 0xa8, 0xeb, 0x30, 0xfb, 0x5a, 0x71, 0x5c, 0x00, 0x00, 0x00, 0x01, 0xdf, 0xaf, 0xbd, 0xf5, 0xc8, 0x1f, 0x63, 0x5c, 0x92, 0x57, 0x82, 0x4f, 0xf2, 0x1c, 0x8e, 0x3e, 0x6f, 0x7b, 0x63, 0x2a, 0xc3, 0x06, 0xe1, 0x14, 0x46, 0xee, 0x54, 0x0d, 0x34, 0x71, 0x1a, 0x15, 0x00, 0x00, 0x00, 0x01, 0x68, 0x70, 0xb7, 0xd6, 0x6a, 0xc3, 0x25, 0x40, 0x31, 0x13, 0x79, 0xe5, 0xb5, 0xdb, 0xad, 0x28, 0xec, 0x7e, 0xb8, 0xdd, 0xbf, 0xc8, 0xf4, 0xd6, 0x72, 0x99, 
0xeb, 0xb4, 0x84, 0x75, 0x90, 0x7a, 0x00, 0x00, 0x00, 0x05, 0x00, 0x00, 0x00, 0x00, 0xee, 0x6b, 0x28, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, // Node ID 0xe9, 0x09, 0x4f, 0x73, 0x69, 0x80, 0x02, 0xfd, 0x52, 0xc9, 0x08, 0x19, 0xb4, 0x57, 0xb9, 0xfb, 0xc8, 0x66, 0xab, 0x80, // StartTime 0x00, 0x00, 0x00, 0x00, 0x5f, 0x21, 0xf3, 0x1d, // EndTime 0x00, 0x00, 0x00, 0x00, 0x5f, 0x49, 0x7d, 0xc6, // Weight 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xd4, 0x31, // Stake 0x00, 0x00, 0x00, 0x01, 0x39, 0xc3, 0x3a, 0x49, 0x9c, 0xe4, 0xc3, 0x3a, 0x3b, 0x09, 0xcd, 0xd2, 0xcf, 0xa0, 0x1a, 0xe7, 0x0d, 0xbf, 0x2d, 0x18, 0xb2, 0xd7, 0xd1, 0x68, 0x52, 0x44, 0x40, 0xe5, 0x5d, 0x55, 0x00, 0x88, 0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x01, 0xd1, 0xa9, 0x4a, 0x20, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x01, 0x3c, 0xb7, 0xd3, 0x84, 0x2e, 0x8c, 0xee, 0x6a, 0x0e, 0xbd, 0x09, 0xf1, 0xfe, 0x88, 0x4f, 0x68, 0x61, 0xe1, 0xb2, 0x9c, // RewardsOwner 0x00, 0x00, 0x00, 0x0b, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x01, 0xda, 0x2b, 0xee, 0x01, 0xbe, 0x82, 0xec, 0xc0, 0x0c, 0x34, 0xf3, 0x61, 0xed, 0xa8, 0xeb, 0x30, 0xfb, 0x5a, 0x71, 0x5c, ] ``` ## Unsigned Create Chain TX ### What Unsigned Create Chain TX Contains An unsigned create chain TX contains a `BaseTx`, `SubnetID`, `ChainName`, `VMID`, `FxIDs`, `GenesisData` and `SubnetAuth`. The `TypeID` for this type is `0x0000000f`. 
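Every unsigned transaction in this document begins with its 4-byte big-endian `TypeID`, so a decoder can dispatch on the first four bytes of the serialized payload. A rough sketch using the type IDs listed in this document (Python for illustration; the `TYPE_IDS` table and `tx_type` helper are illustrative, not an AvalancheGo API):

```python
import struct

# Type IDs for the unsigned P-Chain transactions covered in this document.
TYPE_IDS = {
    0x0D: "AddSubnetValidatorTx",
    0x0E: "AddDelegatorTx",
    0x0F: "CreateChainTx",
    0x10: "CreateSubnetTx",
    0x18: "TransformSubnetTx",
    0x1A: "AddPermissionlessDelegatorTx",
}

def tx_type(raw: bytes) -> str:
    # The serialized unsigned tx starts with a 4-byte big-endian type ID.
    (type_id,) = struct.unpack_from(">I", raw, 0)
    return TYPE_IDS.get(type_id, f"unknown (0x{type_id:08x})")

print(tx_type(bytes.fromhex("0000000f00003039")))  # CreateChainTx
```

Feeding in the first bytes of the create chain example below (`0x0000000f…`) identifies it as a `CreateChainTx`.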
- **`BaseTx`** - **`SubnetID`** ID of the Avalanche L1 that validates this blockchain - **`ChainName`** A human-readable name for the chain; need not be unique - **`VMID`** ID of the VM running on the new chain - **`FxIDs`** IDs of the feature extensions running on the new chain - **`GenesisData`** Byte representation of genesis state of the new chain - **`SubnetAuth`** Authorizes this blockchain to be added to this Avalanche L1 ### Gantt Unsigned Create Chain TX Specification ```text +--------------+-------------+------------------------------------------+ | base_tx : BaseTx | size(base_tx) bytes | +--------------+-------------+------------------------------------------+ | subnet_id : SubnetID | 32 bytes | +--------------+-------------+------------------------------------------+ | chain_name : ChainName | 2 + len(chain_name) bytes | +--------------+-------------+------------------------------------------+ | vm_id : VMID | 32 bytes | +--------------+-------------+------------------------------------------+ | fx_ids : FxIDs | 4 + size(fx_ids) bytes | +--------------+-------------+------------------------------------------+ | genesis_data : GenesisData | 4 + size(genesis_data) bytes | +--------------+-------------+------------------------------------------+ | subnet_auth : SubnetAuth | size(subnet_auth) bytes | +--------------+-------------+------------------------------------------+ | 74 + size(base_tx) + size(chain_name) + size(fx_ids) + | | size(genesis_data) + size(subnet_auth) bytes | +--------------+--------------------------------------------------------+ ``` ### Proto Unsigned Create Chain TX Specification ```text message CreateChainTx { BaseTx base_tx = 1; // size(base_tx) SubnetID subnet_id = 2; // 32 bytes ChainName chain_name = 3; // 2 + len(chain_name) bytes VMID vm_id = 4; // 32 bytes FxIDs fx_ids = 5; // 4 + size(fx_ids) bytes GenesisData genesis_data = 6; // 4 + size(genesis_data) bytes SubnetAuth subnet_auth = 7; // size(subnet_auth) bytes } ``` ### 
Unsigned Create Chain TX Example Let's make an unsigned create chain TX that uses the inputs and outputs from the previous examples: - **`BaseTx`**: `"Example BaseTx as defined above with ID set to 0f"` - **`SubnetID`**: `24tZhrm8j8GCJRE9PomW8FaeqbgGS4UAQjJnqqn8pq5NwYSYV1` - **`ChainName`**: `EPIC AVM` - **`VMID`**: `avm` - **`FxIDs`**: [`secp256k1fx`] - **`GenesisData`**: `11111DdZMhYXUZiFV9FNpfpTSQroysXhzWicG954YAKfkrk3bCEzLVY7gun1eAmAwMiQzVhtGpdR6dnPVcfhBE7brzkJ1r4wzi3dgA8G9Jwc4WpZ6Uh4Dr9aTdw7sFA5cpvCAVBsx6Xf3CB82jwH1gjPZ3WQnnCSKr2reoLtam6TfyYRra5xxXSkZcUm6BaJMW4fKzNP58uyExajPYKZvT5LrQ7MPJ9Fp7ebmYSzXg7YYauNARj` - **`SubnetAuth`**: `0x0000000a0000000100000000` ```text [ BaseTx <- 0x0000000f000030390000000000000000000000000000000000000000000000000000000000000006870b7d66ac32540311379e5b5dbad28ec7eb8ddbfc8f4d67299ebb48475907a0000000700000000ee5be5c000000000000000000000000100000001da2bee01be82ecc00c34f361eda8eb30fb5a715cdfafbdf5c81f635c9257824ff21c8e3e6f7b632ac306e11446ee540d34711a15000000016870b7d66ac32540311379e5b5dbad28ec7eb8ddbfc8f4d67299ebb48475907a0000000500000000ee6b28000000000100000000 SubnetID <- 0x8c86d07cd60218661863e0116552dccd5bd84c564bd29d7181dbddd5ec616104 ChainName <- 0x455049432041564d VMID <- 0x61766d0000000000000000000000000000000000000000000000000000000000 FxIDs <- 0x736563703235366b316678000000000000000000000000000000000000000000 GenesisData <- 0x000000000001000e4173736574416c6961735465737400000539000000000000000000000000000000000000000000000000000000000000000000000000000000000000001b66726f6d20736e6f77666c616b6520746f206176616c616e636865000a54657374204173736574000454455354000000000100000000000000010000000700000000000001fb000000000000000000000001000000013cb7d3842e8cee6a0ebd09f1fe884f6861e1b29c SubnetAuth <- 0x0000000a0000000100000000 ] = [ // base tx 0x00, 0x00, 0x00, 0x0f, 0x00, 0x00, 0x30, 0x39, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x39, 0xc3, 0x3a, 0x49, 0x9c, 0xe4, 0xc3, 0x3a, 0x3b, 0x09, 0xcd, 0xd2, 0xcf, 0xa0, 0x1a, 0xe7, 0x0d, 0xbf, 0x2d, 0x18, 0xb2, 0xd7, 0xd1, 0x68, 0x52, 0x44, 0x40, 0xe5, 0x5d, 0x55, 0x00, 0x88, 0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x12, 0x30, 0x9c, 0xd5, 0xfd, 0xc0, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x01, 0x3c, 0xb7, 0xd3, 0x84, 0x2e, 0x8c, 0xee, 0x6a, 0x0e, 0xbd, 0x09, 0xf1, 0xfe, 0x88, 0x4f, 0x68, 0x61, 0xe1, 0xb2, 0x9c, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, // end base tx // Subnet id 0x8c, 0x86, 0xd0, 0x7c, 0xd6, 0x02, 0x18, 0x66, 0x18, 0x63, 0xe0, 0x11, 0x65, 0x52, 0xdc, 0xcd, 0x5b, 0xd8, 0x4c, 0x56, 0x4b, 0xd2, 0x9d, 0x71, 0x81, 0xdb, 0xdd, 0xd5, 0xec, 0x61, 0x61, 0x04, // chain name length 0x00, 0x08, // chain name 0x45, 0x50, 0x49, 0x43, 0x20, 0x41, 0x56, 0x4d, // vm id 0x61, 0x76, 0x6d, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, // fxids // num fxids 0x00, 0x00, 0x00, 0x01, // fxid 0x73, 0x65, 0x63, 0x70, 0x32, 0x35, 0x36, 0x6b, 0x31, 0x66, 0x78, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, // genesis data len 0x00, 0x00, 0x00, 0xb0, // genesis data 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x0e, 0x41, 0x73, 0x73, 0x65, 0x74, 0x41, 0x6c, 0x69, 0x61, 0x73, 0x54, 0x65, 0x73, 0x74, 0x00, 0x00, 0x05, 0x39, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x1b, 0x66, 0x72, 0x6f, 0x6d, 0x20, 0x73, 0x6e, 0x6f, 0x77, 0x66, 0x6c, 0x61, 0x6b, 0x65, 0x20, 0x74, 0x6f, 0x20, 0x61, 0x76, 0x61, 0x6c, 0x61, 0x6e, 0x63, 0x68, 0x65, 
0x00, 0x0a, 0x54, 0x65, 0x73, 0x74, 0x20, 0x41, 0x73, 0x73, 0x65, 0x74, 0x00, 0x04, 0x54, 0x45, 0x53, 0x54, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0xfb, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x01, 0x3c, 0xb7, 0xd3, 0x84, 0x2e, 0x8c, 0xee, 0x6a, 0x0e, 0xbd, 0x09, 0xf1, 0xfe, 0x88, 0x4f, 0x68, 0x61, 0xe1, 0xb2, 0x9c, // type id (Subnet Auth) 0x00, 0x00, 0x00, 0x0a, // num address indices 0x00, 0x00, 0x00, 0x01, // address index 0x00, 0x00, 0x00, 0x00, ] ``` ## Unsigned Create Avalanche L1 TX ### What Unsigned Create Avalanche L1 TX Contains An unsigned create Avalanche L1 TX contains a `BaseTx`, and `RewardsOwner`. The `TypeID` for this type is `0x00000010`. - **`BaseTx`** - **`RewardsOwner`** A `SECP256K1OutputOwners` ### Gantt Unsigned Create Avalanche L1 TX Specification ```text +-----------------+-----------------------|---------------------------------+ | base_tx : BaseTx | size(base_tx) bytes | +-----------------+-----------------------+--------------------------------+ | rewards_owner : SECP256K1OutputOwners | size(rewards_owner) bytes | +-----------------+-----------------------+---------------------------------+ | size(rewards_owner) + size(base_tx) bytes | +-------------------------------------------+ ``` ### Proto Unsigned Create Avalanche L1 TX Specification ```text message CreateSubnetTx { BaseTx base_tx = 1; // size(base_tx) SECP256K1OutputOwners rewards_owner = 2; // size(rewards_owner) } ``` ### Unsigned Create Avalanche L1 TX Example Let's make an unsigned create Avalanche L1 TX that uses the inputs from the previous examples: - **`BaseTx`**: "Example BaseTx as defined above but with TypeID set to 16" - **`RewardsOwner`**: - **`TypeId`**: 11 - **`Locktime`**: 0 - **`Threshold`**: 1 - **`Addresses`**: \[ 0xda2bee01be82ecc00c34f361eda8eb30fb5a715c \] ```text [ BaseTx <- 
0x00000010000030390000000000000000000000000000000000000000000000000000000000000006870b7d66ac32540311379e5b5dbad28ec7eb8ddbfc8f4d67299ebb48475907a0000000700000000ee5be5c000000000000000000000000100000001da2bee01be82ecc00c34f361eda8eb30fb5a715cdfafbdf5c81f635c9257824ff21c8e3e6f7b632ac306e11446ee540d34711a15000000016870b7d66ac32540311379e5b5dbad28ec7eb8ddbfc8f4d67299ebb48475907a0000000500000000ee6b28000000000100000000 RewardsOwner <- TypeID <- 0x0000000b Locktime <- 0x0000000000000000 Threshold <- 0x00000001 Addresses <- [ 0xda2bee01be82ecc00c34f361eda8eb30fb5a715c, ] ] = [ // base tx: 0x00, 0x00, 0x00, 0x10, 0x00, 0x00, 0x30, 0x39, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x39, 0xc3, 0x3a, 0x49, 0x9c, 0xe4, 0xc3, 0x3a, 0x3b, 0x09, 0xcd, 0xd2, 0xcf, 0xa0, 0x1a, 0xe7, 0x0d, 0xbf, 0x2d, 0x18, 0xb2, 0xd7, 0xd1, 0x68, 0x52, 0x44, 0x40, 0xe5, 0x5d, 0x55, 0x00, 0x88, 0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x12, 0x30, 0x9c, 0xd5, 0xfd, 0xc0, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x01, 0x3c, 0xb7, 0xd3, 0x84, 0x2e, 0x8c, 0xee, 0x6a, 0x0e, 0xbd, 0x09, 0xf1, 0xfe, 0x88, 0x4f, 0x68, 0x61, 0xe1, 0xb2, 0x9c, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, // RewardsOwner type id 0x00, 0x00, 0x00, 0x0b, // locktime: 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, // threshold: 0x00, 0x00, 0x00, 0x01, // number of addresses: 0x00, 0x00, 0x00, 0x01, // addrs[0]: 0xda, 0x2b, 0xee, 0x01, 0xbe, 0x82, 0xec, 0xc0, 0x0c, 0x34, 0xf3, 0x61, 0xed, 0xa8, 0xeb, 0x30, 0xfb, 0x5a, 0x71, 0x5c ] ``` ## Unsigned Import TX ### What Unsigned Import TX Contains An unsigned import TX contains a `BaseTx`, `SourceChain`, and `Ins`. The `TypeID` for this type is `0x00000011`. - **`BaseTx`** - **`SourceChain`** is a 32-byte source blockchain ID. 
- **`Ins`** is a variable length array of Transferable Inputs. ### Gantt Unsigned Import TX Specification ```text +-----------------+--------------|---------------------------------+ | base_tx : BaseTx | size(base_tx) bytes | +-----------------+--------------+---------------------------------+ | source_chain : [32]byte | 32 bytes | +-----------------+--------------+---------------------------------+ | ins : []TransferIn | 4 + size(ins) bytes | +-----------------+--------------+---------------------------------+ | 36 + size(ins) + size(base_tx) bytes | +--------------------------------------+ ``` ### Proto Unsigned Import TX Specification ```text message ImportTx { BaseTx base_tx = 1; // size(base_tx) bytes source_chain = 2; // 32 bytes repeated TransferIn ins = 3; // 4 bytes + size(ins) } ``` ### Unsigned Import TX Example Let's make an unsigned import TX that uses the inputs from the previous examples: - **`BaseTx`**: "Example BaseTx as defined above with TypeID set to 17" - **`SourceChain`**: - **`Ins`**: "Example SECP256K1 Transfer Input as defined above" ```text [ BaseTx <- 0x00000011000030390000000000000000000000000000000000000000000000000000000000000006870b7d66ac32540311379e5b5dbad28ec7eb8ddbfc8f4d67299ebb48475907a0000000700000000ee5be5c000000000000000000000000100000001da2bee01be82ecc00c34f361eda8eb30fb5a715cdfafbdf5c81f635c9257824ff21c8e3e6f7b632ac306e11446ee540d34711a15000000016870b7d66ac32540311379e5b5dbad28ec7eb8ddbfc8f4d67299ebb48475907a0000000500000000ee6b28000000000100000000 SourceChain <- 0x787cd3243c002e9bf5bbbaea8a42a16c1a19cc105047c66996807cbf16acee10 Ins <- [ // input: ] ] = [ // base tx: 0x00, 0x00, 0x00, 0x11, 0x00, 0x00, 0x30, 0x39, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x39, 0xc3, 0x3a, 0x49, 0x9c, 0xe4, 0xc3, 0x3a, 0x3b, 0x09, 0xcd, 0xd2, 0xcf, 0xa0, 0x1a, 0xe7, 0x0d, 
0xbf, 0x2d, 0x18, 0xb2, 0xd7, 0xd1, 0x68, 0x52, 0x44, 0x40, 0xe5, 0x5d, 0x55, 0x00, 0x88, 0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x12, 0x30, 0x9c, 0xd5, 0xfd, 0xc0, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x01, 0x3c, 0xb7, 0xd3, 0x84, 0x2e, 0x8c, 0xee, 0x6a, 0x0e, 0xbd, 0x09, 0xf1, 0xfe, 0x88, 0x4f, 0x68, 0x61, 0xe1, 0xb2, 0x9c, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, // sourceChain 0x78, 0x7c, 0xd3, 0x24, 0x3c, 0x00, 0x2e, 0x9b, 0xf5, 0xbb, 0xba, 0xea, 0x8a, 0x42, 0xa1, 0x6c, 0x1a, 0x19, 0xcc, 0x10, 0x50, 0x47, 0xc6, 0x69, 0x96, 0x80, 0x7c, 0xbf, 0x16, 0xac, 0xee, 0x10, // input count: 0x00, 0x00, 0x00, 0x01, // txID: 0xf1, 0xe1, 0xd1, 0xc1, 0xb1, 0xa1, 0x91, 0x81, 0x71, 0x61, 0x51, 0x41, 0x31, 0x21, 0x11, 0x01, 0xf0, 0xe0, 0xd0, 0xc0, 0xb0, 0xa0, 0x90, 0x80, 0x70, 0x60, 0x50, 0x40, 0x30, 0x20, 0x10, 0x00, // utxoIndex: 0x00, 0x00, 0x00, 0x05, // assetID: 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f, // input: 0x00, 0x00, 0x00, 0x05, 0x00, 0x00, 0x00, 0x00, 0xee, 0x6b, 0x28, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, ] ``` ## Unsigned Export TX ### What Unsigned Export TX Contains An unsigned export TX contains a `BaseTx`, `DestinationChain`, and `Outs`. The `TypeID` for this type is `0x00000012`. - **`DestinationChain`** is the 32 byte ID of the chain where the funds are being exported to. - **`Outs`** is a variable length array of Transferable Outputs. 
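The ExportTx-specific layout is straightforward to reproduce by hand. Below is a minimal Python sketch, not taken from any Avalanche library (`serialize_export_fields` is a hypothetical helper name), that appends the 32-byte destination chain ID and the length-prefixed outputs after the BaseTx bytes:

```python
import struct

def serialize_export_fields(destination_chain: bytes, outs: list[bytes]) -> bytes:
    # 32-byte destination chain ID
    assert len(destination_chain) == 32, "chain IDs are always 32 bytes"
    payload = destination_chain
    # outs[] count as a 4-byte big-endian int, then each serialized output
    payload += struct.pack(">I", len(outs))
    for out in outs:
        payload += out
    return payload

# The all-zero chain ID and the transferable output from the worked example below.
dest = bytes(32)
out = bytes.fromhex(
    "000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f"
    "000000070000000000003039000000000000d431000000010000000251025c61"
    "fbcfc078f69334f834be6dd26d55a955c3344128e060128ede3523a24a461c89"
    "43ab0859"
)
fields = serialize_export_fields(dest, [out])
# Matches the Gantt total: 36 + size(outs) bytes on top of the BaseTx.
assert len(fields) == 36 + len(out)
```

Note that all integers in this format are big-endian, which is why `>` is used in every `struct` format string.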
### Gantt Unsigned Export TX Specification ```text +-------------------+---------------+--------------------------------------+ | base_tx : BaseTx | size(base_tx) bytes | +-------------------+---------------+--------------------------------------+ | destination_chain : [32]byte | 32 bytes | +-------------------+---------------+--------------------------------------+ | outs : []TransferOut | 4 + size(outs) bytes | +-------------------+---------------+--------------------------------------+ | 36 + size(outs) + size(base_tx) bytes | +---------------------------------------+ ``` ### Proto Unsigned Export TX Specification ```text message ExportTx { BaseTx base_tx = 1; // size(base_tx) bytes destination_chain = 2; // 32 bytes repeated TransferOut outs = 3; // 4 bytes + size(outs) } ``` ### Unsigned Export TX Example Let's make an unsigned export TX that uses the outputs from the previous examples: - `BaseTx`: "Example BaseTx as defined above" with `TypeID` set to 18 - `DestinationChain`: `0x0000000000000000000000000000000000000000000000000000000000000000` - `Outs`: "Example SECP256K1 Transfer Output as defined above" ```text [ BaseTx <- 0x00000012000030390000000000000000000000000000000000000000000000000000000000000006870b7d66ac32540311379e5b5dbad28ec7eb8ddbfc8f4d67299ebb48475907a0000000700000000ee5be5c000000000000000000000000100000001da2bee01be82ecc00c34f361eda8eb30fb5a715cdfafbdf5c81f635c9257824ff21c8e3e6f7b632ac306e11446ee540d34711a15000000016870b7d66ac32540311379e5b5dbad28ec7eb8ddbfc8f4d67299ebb48475907a0000000500000000ee6b28000000000100000000 DestinationChain <- 0x0000000000000000000000000000000000000000000000000000000000000000 Outs <- [ 000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f000000070000000000003039000000000000d431000000010000000251025c61fbcfc078f69334f834be6dd26d55a955c3344128e060128ede3523a24a461c8943ab0859, ] ] = [ // base tx: 0x00, 0x00, 0x00, 0x12 0x00, 0x00, 0x00, 0x04, 0xff, 0xff, 0xff, 0xff, 0xee, 0xee, 0xee, 0xee, 0xdd, 0xdd, 0xdd, 
0xdd, 0xcc, 0xcc, 0xcc, 0xcc, 0xbb, 0xbb, 0xbb, 0xbb, 0xaa, 0xaa, 0xaa, 0xaa, 0x99, 0x99, 0x99, 0x99, 0x88, 0x88, 0x88, 0x88, 0x00, 0x00, 0x00, 0x01, 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f, 0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x30, 0x39, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xd4, 0x31, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x02, 0x51, 0x02, 0x5c, 0x61, 0xfb, 0xcf, 0xc0, 0x78, 0xf6, 0x93, 0x34, 0xf8, 0x34, 0xbe, 0x6d, 0xd2, 0x6d, 0x55, 0xa9, 0x55, 0xc3, 0x34, 0x41, 0x28, 0xe0, 0x60, 0x12, 0x8e, 0xde, 0x35, 0x23, 0xa2, 0x4a, 0x46, 0x1c, 0x89, 0x43, 0xab, 0x08, 0x59, 0x00, 0x00, 0x00, 0x01, 0xf1, 0xe1, 0xd1, 0xc1, 0xb1, 0xa1, 0x91, 0x81, 0x71, 0x61, 0x51, 0x41, 0x31, 0x21, 0x11, 0x01, 0xf0, 0xe0, 0xd0, 0xc0, 0xb0, 0xa0, 0x90, 0x80, 0x70, 0x60, 0x50, 0x40, 0x30, 0x20, 0x10, 0x00, 0x00, 0x00, 0x00, 0x05, 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f, 0x00, 0x00, 0x00, 0x05, 0x00, 0x00, 0x00, 0x00, 0x07, 0x5b, 0xcd, 0x15, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x04, 0x00, 0x01, 0x02, 0x03 // destination_chain: 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, // outs[] count: 0x00, 0x00, 0x00, 0x01, // assetID: 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f, // output: 0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x30, 0x39, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xd4, 0x31, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 
0x00, 0x02, 0x51, 0x02, 0x5c, 0x61, 0xfb, 0xcf,
0xc0, 0x78, 0xf6, 0x93, 0x34, 0xf8, 0x34, 0xbe,
0x6d, 0xd2, 0x6d, 0x55, 0xa9, 0x55, 0xc3, 0x34,
0x41, 0x28, 0xe0, 0x60, 0x12, 0x8e, 0xde, 0x35,
0x23, 0xa2, 0x4a, 0x46, 0x1c, 0x89, 0x43, 0xab,
0x08, 0x59,
]
```

## Credentials

Credentials have one possible type: `SECP256K1Credential`. Each credential is paired with an Input or Operation. The order of the credentials matches the order of the inputs or operations.

## SECP256K1 Credential

A [secp256k1](/docs/rpcs/other/standards/cryptographic-primitives#secp256k1-addresses) credential contains a list of 65-byte recoverable signatures.

### What SECP256K1 Credential Contains

- **`TypeID`** is the ID for this type. It is `0x00000009`.
- **`Signatures`** is an array of 65-byte recoverable signatures. The order of the signatures must match the input's signature indices.

### Gantt SECP256K1 Credential Specification

```text
+-----------------+------------+---------------------------------+
| type_id         : int        | 4 bytes                         |
+-----------------+------------+---------------------------------+
| signatures      : [][65]byte | 4 + 65 * len(signatures) bytes  |
+-----------------+------------+---------------------------------+
                               | 8 + 65 * len(signatures) bytes  |
                               +---------------------------------+
```

### Proto SECP256K1 Credential Specification

```text
message SECP256K1Credential {
    uint32 TypeID = 1;             // 4 bytes
    repeated bytes signatures = 2; // 4 bytes + 65 bytes * len(signatures)
}
```

### SECP256K1 Credential Example

Let's make a SECP256K1 credential with:

- **`signatures`**:
  - `0x000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1e1d1f202122232425262728292a2b2c2e2d2f303132333435363738393a3b3c3d3e3f00`
  - `0x404142434445464748494a4b4c4d4e4f505152535455565758595a5b5c5e5d5f606162636465666768696a6b6c6e6d6f707172737475767778797a7b7c7d7e7f00`

```text
[
    Signatures <- [
        0x000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1e1d1f202122232425262728292a2b2c2e2d2f303132333435363738393a3b3c3d3e3f00,
        0x404142434445464748494a4b4c4d4e4f505152535455565758595a5b5c5e5d5f606162636465666768696a6b6c6e6d6f707172737475767778797a7b7c7d7e7f00,
    ]
]
=
[
    // Type ID
    0x00, 0x00, 0x00, 0x09,
    // length:
    0x00, 0x00, 0x00, 0x02,
    // sig[0]
    0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
    0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
    0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
    0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1e, 0x1d, 0x1f,
    0x20, 0x21, 0x22, 0x23, 0x24, 0x25, 0x26, 0x27,
    0x28, 0x29, 0x2a, 0x2b, 0x2c, 0x2e, 0x2d, 0x2f,
    0x30, 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37,
    0x38, 0x39, 0x3a, 0x3b, 0x3c, 0x3d, 0x3e, 0x3f,
    0x00,
    // sig[1]
    0x40, 0x41, 0x42, 0x43, 0x44, 0x45, 0x46, 0x47,
    0x48, 0x49, 0x4a, 0x4b, 0x4c, 0x4d, 0x4e, 0x4f,
    0x50, 0x51, 0x52, 0x53, 0x54, 0x55, 0x56, 0x57,
    0x58, 0x59, 0x5a, 0x5b, 0x5c, 0x5e, 0x5d, 0x5f,
    0x60, 0x61, 0x62, 0x63, 0x64, 0x65, 0x66, 0x67,
    0x68, 0x69, 0x6a, 0x6b, 0x6c, 0x6e, 0x6d, 0x6f,
    0x70, 0x71, 0x72, 0x73, 0x74, 0x75, 0x76, 0x77,
    0x78, 0x79, 0x7a, 0x7b, 0x7c, 0x7d, 0x7e, 0x7f,
    0x00,
]
```

## Signed Transaction

A signed transaction is an unsigned transaction with the addition of an array of credentials.

### What Signed Transaction Contains

A signed transaction contains a `CodecID`, `UnsignedTx`, and `Credentials`.

- **`CodecID`** The only current valid codec id is `00 00`.
- **`UnsignedTx`** is an unsigned transaction, as described above.
- **`Credentials`** is an array of credentials. Each credential is paired with the input at the same index as this credential.
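Assembling the final byte string from these three pieces can be sketched in a few lines of Python (a toy illustration, not AvalancheGo code; `assemble_signed_tx` is an invented name):

```python
import struct

def assemble_signed_tx(unsigned_tx: bytes, credentials: list[bytes]) -> bytes:
    # codec_id: the only valid value today is 0x0000, packed as 2 big-endian bytes
    tx = struct.pack(">H", 0)
    # the unsigned transaction bytes, exactly as serialized above
    tx += unsigned_tx
    # 4-byte big-endian credential count, then each credential in input order
    tx += struct.pack(">I", len(credentials))
    for cred in credentials:
        tx += cred
    return tx

signed = assemble_signed_tx(b"\x00" * 10, [b"\x01" * 5])
assert signed[:2] == b"\x00\x00"             # codec ID
assert signed[12:16] == b"\x00\x00\x00\x01"  # credential count follows the tx body
assert len(signed) == 2 + 10 + 4 + 5
```

Keeping credentials in the same order as the inputs they sign is what lets verifiers pair `credentials[i]` with `ins[i]` without any extra bookkeeping.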
### Gantt Signed Transaction Specification

```text
+---------------------+--------------+------------------------------------------------+
| codec_id            : uint16       | 2 bytes                                        |
+---------------------+--------------+------------------------------------------------+
| unsigned_tx         : UnsignedTx   | size(unsigned_tx) bytes                        |
+---------------------+--------------+------------------------------------------------+
| credentials         : []Credential | 4 + size(credentials) bytes                    |
+---------------------+--------------+------------------------------------------------+
                                     | 6 + size(unsigned_tx) + size(credentials) bytes|
                                     +------------------------------------------------+
```

### Proto Signed Transaction Specification

```text
message Tx {
    uint32 codec_id = 1;                // 2 bytes
    UnsignedTx unsigned_tx = 2;         // size(unsigned_tx)
    repeated Credential credentials = 3; // 4 bytes + size(credentials)
}
```

### Signed Transaction Example

Let's make a signed transaction that uses the unsigned transaction and credential from the previous examples.
- **`CodecID`**: `0` - **`UnsignedTx`**: `0x0000000100000003ffffffffeeeeeeeeddddddddccccccccbbbbbbbbaaaaaaaa999999998888888800000001000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f000000070000000000003039000000000000d431000000010000000251025c61fbcfc078f69334f834be6dd26d55a955c3344128e060128ede3523a24a461c8943ab085900000001f1e1d1c1b1a191817161514131211101f0e0d0c0b0a09080706050403020100000000005000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f0000000500000000075bcd150000000200000003000000070000000400010203` - **`Credentials`** `0x0000000900000002000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1e1d1f202122232425262728292a2b2c2e2d2f303132333435363738393a3b3c3d3e3f00404142434445464748494a4b4c4d4e4f505152535455565758595a5b5c5e5d5f606162636465666768696a6b6c6e6d6f707172737475767778797a7b7c7d7e7f00` ```text [ CodecID <- 0x0000 UnsignedTx <- 0x0000000100000003ffffffffeeeeeeeeddddddddccccccccbbbbbbbbaaaaaaaa999999998888888800000001000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f000000070000000000003039000000000000d431000000010000000251025c61fbcfc078f69334f834be6dd26d55a955c3344128e060128ede3523a24a461c8943ab085900000001f1e1d1c1b1a191817161514131211101f0e0d0c0b0a09080706050403020100000000005000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f0000000500000000075bcd150000000200000003000000070000000400010203 Credentials <- [ 0x0000000900000002000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1e1d1f202122232425262728292a2b2c2e2d2f303132333435363738393a3b3c3d3e3f00404142434445464748494a4b4c4d4e4f505152535455565758595a5b5c5e5d5f606162636465666768696a6b6c6e6d6f707172737475767778797a7b7c7d7e7f00, ] ] = [ // Codec ID 0x00, 0x00, // unsigned transaction: 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x03, 0xff, 0xff, 0xff, 0xff, 0xee, 0xee, 0xee, 0xee, 0xdd, 0xdd, 0xdd, 0xdd, 0xcc, 0xcc, 0xcc, 0xcc, 0xbb, 0xbb, 0xbb, 0xbb, 0xaa, 0xaa, 0xaa, 0xaa, 0x99, 0x99, 0x99, 0x99, 0x88, 0x88, 0x88, 0x88, 0x00, 0x00, 0x00, 
0x01, 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f, 0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x30, 0x39, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xd4, 0x31, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x02, 0x51, 0x02, 0x5c, 0x61, 0xfb, 0xcf, 0xc0, 0x78, 0xf6, 0x93, 0x34, 0xf8, 0x34, 0xbe, 0x6d, 0xd2, 0x6d, 0x55, 0xa9, 0x55, 0xc3, 0x34, 0x41, 0x28, 0xe0, 0x60, 0x12, 0x8e, 0xde, 0x35, 0x23, 0xa2, 0x4a, 0x46, 0x1c, 0x89, 0x43, 0xab, 0x08, 0x59, 0x00, 0x00, 0x00, 0x01, 0xf1, 0xe1, 0xd1, 0xc1, 0xb1, 0xa1, 0x91, 0x81, 0x71, 0x61, 0x51, 0x41, 0x31, 0x21, 0x11, 0x01, 0xf0, 0xe0, 0xd0, 0xc0, 0xb0, 0xa0, 0x90, 0x80, 0x70, 0x60, 0x50, 0x40, 0x30, 0x20, 0x10, 0x00, 0x00, 0x00, 0x00, 0x05, 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f, 0x00, 0x00, 0x00, 0x05, 0x00, 0x00, 0x00, 0x00, 0x07, 0x5b, 0xcd, 0x15, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x04, 0x00, 0x01, 0x02, 0x03 // number of credentials: 0x00, 0x00, 0x00, 0x01, // credential[0]: 0x00, 0x00, 0x00, 0x09, 0x00, 0x00, 0x00, 0x02, 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1e, 0x1d, 0x1f, 0x20, 0x21, 0x22, 0x23, 0x24, 0x25, 0x26, 0x27, 0x28, 0x29, 0x2a, 0x2b, 0x2c, 0x2e, 0x2d, 0x2f, 0x30, 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37, 0x38, 0x39, 0x3a, 0x3b, 0x3c, 0x3d, 0x3e, 0x3f, 0x00, 0x40, 0x41, 0x42, 0x43, 0x44, 0x45, 0x46, 0x47, 0x48, 0x49, 0x4a, 0x4b, 0x4c, 0x4d, 0x4e, 0x4f, 0x50, 0x51, 0x52, 0x53, 0x54, 0x55, 0x56, 0x57, 0x58, 0x59, 0x5a, 0x5b, 0x5c, 0x5e, 0x5d, 0x5f, 0x60, 0x61, 0x62, 0x63, 0x64, 0x65, 0x66, 0x67, 0x68, 0x69, 0x6a, 0x6b, 
0x6c, 0x6e, 0x6d, 0x6f, 0x70, 0x71, 0x72, 0x73,
0x74, 0x75, 0x76, 0x77, 0x78, 0x79, 0x7a, 0x7b,
0x7c, 0x7d, 0x7e, 0x7f, 0x00,
]
```

## UTXO

A UTXO is a standalone representation of a transaction output.

### What UTXO Contains

A UTXO contains a `CodecID`, `TxID`, `UTXOIndex`, `AssetID`, and `Output`.

- **`CodecID`** The only current valid codec id is `00 00`.
- **`TxID`** is a 32-byte transaction ID. Transaction IDs are calculated by taking sha256 of the bytes of the signed transaction.
- **`UTXOIndex`** is an int that specifies which output of the transaction identified by **`TxID`** created this UTXO.
- **`AssetID`** is a 32-byte array that defines which asset this UTXO references.
- **`Output`** is the output object that created this UTXO. The serialization of Outputs was defined above.

### Gantt UTXO Specification

```text
+--------------+----------+-------------------------+
| codec_id     : uint16   | 2 bytes                 |
+--------------+----------+-------------------------+
| tx_id        : [32]byte | 32 bytes                |
+--------------+----------+-------------------------+
| output_index : int      | 4 bytes                 |
+--------------+----------+-------------------------+
| asset_id     : [32]byte | 32 bytes                |
+--------------+----------+-------------------------+
| output       : Output   | size(output) bytes      |
+--------------+----------+-------------------------+
                          | 70 + size(output) bytes |
                          +-------------------------+
```

### Proto UTXO Specification

```text
message Utxo {
    uint32 codec_id = 1;     // 02 bytes
    bytes tx_id = 2;         // 32 bytes
    uint32 output_index = 3; // 04 bytes
    bytes asset_id = 4;      // 32 bytes
    Output output = 5;       // size(output)
}
```

### UTXO Example

Let's make a UTXO from the signed transaction created above:

- **`CodecID`**: `0`
- **`TxID`**: `0xf966750f438867c3c9828ddcdbe660e21ccdbb36a9276958f011ba472f75d4e7`
- **`UTXOIndex`**: 0x00000000
- **`AssetID`**: `0x000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f`
- **`Output`**: `"Example SECP256K1 Transferable Output as defined above"`

```text
[ CodecID <- 0x0000 TxID <- 0xf966750f438867c3c9828ddcdbe660e21ccdbb36a9276958f011ba472f75d4e7 UTXOIndex <- 0x00000000 AssetID <- 0x000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f Output <- 0x000000070000000000003039000000000000d431000000010000000251025c61fbcfc078f69334f834be6dd26d55a955c3344128e060128ede3523a24a461c8943ab0859 ] = [ // Codec ID: 0x00, 0x00, // txID: 0xf9, 0x66, 0x75, 0x0f, 0x43, 0x88, 0x67, 0xc3, 0xc9, 0x82, 0x8d, 0xdc, 0xdb, 0xe6, 0x60, 0xe2, 0x1c, 0xcd, 0xbb, 0x36, 0xa9, 0x27, 0x69, 0x58, 0xf0, 0x11, 0xba, 0x47, 0x2f, 0x75, 0xd4, 0xe7, // utxo index: 0x00, 0x00, 0x00, 0x00, // assetID: 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f, // output: 0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x30, 0x39, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xd4, 0x31, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x02, 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f, 0x20, 0x21, 0x22, 0x23, 0x24, 0x25, 0x26, 0x27, ] ``` ## StakeableLockIn A StakeableLockIn is a staked and locked input. The StakeableLockIn can only fund StakeableLockOuts with the same address until its lock time has passed. ### What StakeableLockIn Contains A StakeableLockIn contains a `TypeID`, `Locktime` and `TransferableIn`. - **`TypeID`** is the ID for this output type. It is `0x00000015`. - **`Locktime`** is a long that contains the Unix timestamp before which the input can be consumed only to stake. The Unix timestamp is specific to the second. - **`TransferableIn`** is a transferable input object. 
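The 12-byte wrapper can be sketched as follows. This is a Python illustration under the layout above, not library code; `wrap_stakeable_lock_in` is a hypothetical helper, and the 16 zero bytes stand in for a real serialized transferable input:

```python
import struct

STAKEABLE_LOCK_IN_TYPE_ID = 0x00000015

def wrap_stakeable_lock_in(locktime: int, transferable_in: bytes) -> bytes:
    # 4-byte big-endian type ID + 8-byte big-endian locktime,
    # then the already-serialized transferable input
    return struct.pack(">IQ", STAKEABLE_LOCK_IN_TYPE_ID, locktime) + transferable_in

wrapped = wrap_stakeable_lock_in(54321, b"\x00" * 16)
assert wrapped[:4] == b"\x00\x00\x00\x15"
assert wrapped[4:12] == (54321).to_bytes(8, "big")
assert len(wrapped) == 12 + 16  # the Gantt total: 12 + size(transferable_in)
```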
### Gantt StakeableLockIn Specification ```text +-----------------+-------------------+--------------------------------+ | type_id : int | 4 bytes | +-----------------+-------------------+--------------------------------+ | locktime : long | 8 bytes | +-----------------+-------------------+--------------------------------+ | transferable_in : TransferableInput | size(transferable_in) | +-----------------+-------------------+--------------------------------+ | 12 + size(transferable_in) bytes | +----------------------------------+ ``` ### Proto StakeableLockIn Specification ```text message StakeableLockIn { uint32 type_id = 1; // 04 bytes uint64 locktime = 2; // 08 bytes TransferableInput transferable_in = 3; // size(transferable_in) } ``` ### StakeableLockIn Example Let's make a StakeableLockIn with: - **`TypeID`**: 21 - **`Locktime`**: 54321 - **`TransferableIn`**: "Example SECP256K1 Transfer Input as defined above" ```text [ TypeID <- 0x00000015 Locktime <- 0x000000000000d431 TransferableIn <- [ f1e1d1c1b1a191817161514131211101f0e0d0c0b0a09080706050403020100000000005000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f0000000500000000075bcd150000000100000000, ] ] = [ // type_id: 0x00, 0x00, 0x00, 0x15, // locktime: 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xd4, 0x31, // transferable_in 0xf1, 0xe1, 0xd1, 0xc1, 0xb1, 0xa1, 0x91, 0x81, 0x71, 0x61, 0x51, 0x41, 0x31, 0x21, 0x11, 0x01, 0xf0, 0xe0, 0xd0, 0xc0, 0xb0, 0xa0, 0x90, 0x80, 0x70, 0x60, 0x50, 0x40, 0x30, 0x20, 0x10, 0x00, 0x00, 0x00, 0x00, 0x05, 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f, 0x00, 0x00, 0x00, 0x05, 0x00, 0x00, 0x00, 0x00, 0x07, 0x5b, 0xcd, 0x15, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, ] ``` ## StakeableLockOut A StakeableLockOut is an output that is locked until its lock time, but can be staked in the meantime. 
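Since StakeableLockIn and StakeableLockOut both begin with a 4-byte type ID followed by an 8-byte locktime, reading that prefix back is symmetric. A small Python sketch (the `read_lock_prefix` helper is invented for illustration, and the trailing bytes are a dummy payload):

```python
import struct

def read_lock_prefix(wrapped: bytes) -> tuple[int, int, bytes]:
    # Split off the 4-byte type ID and 8-byte locktime shared by both wrappers.
    type_id, locktime = struct.unpack(">IQ", wrapped[:12])
    return type_id, locktime, wrapped[12:]

blob = bytes.fromhex("00000015000000000000d431") + b"\xaa" * 8
type_id, locktime, inner = read_lock_prefix(blob)
assert type_id == 0x15        # StakeableLockIn
assert locktime == 54321      # 0xd431
assert inner == b"\xaa" * 8   # the inner transferable input/output bytes
```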
### What StakeableLockOut Contains

A StakeableLockOut contains a `TypeID`, `Locktime` and `TransferableOut`.

- **`TypeID`** is the ID for this output type. It is `0x00000016`.
- **`Locktime`** is a long that contains the Unix timestamp before which the output can be consumed only to stake. The Unix timestamp is specific to the second.
- **`TransferableOut`** is a transferable output object.

### Gantt StakeableLockOut Specification

```text
+------------------+--------------------+-----------------------------------+
| type_id          : int                | 4 bytes                           |
+------------------+--------------------+-----------------------------------+
| locktime         : long               | 8 bytes                           |
+------------------+--------------------+-----------------------------------+
| transferable_out : TransferableOutput | size(transferable_out)            |
+------------------+--------------------+-----------------------------------+
                                        | 12 + size(transferable_out) bytes |
                                        +-----------------------------------+
```

### Proto StakeableLockOut Specification

```text
message StakeableLockOut {
    uint32 type_id = 1;                      // 04 bytes
    uint64 locktime = 2;                     // 08 bytes
    TransferableOutput transferable_out = 3; // size(transferable_out)
}
```

### StakeableLockOut Example

Let's make a StakeableLockOut with:

- **`TypeID`**: 22
- **`Locktime`**: 54321
- **`TransferableOutput`**: `"Example SECP256K1 Transfer Output from above"`

```text
[
    TypeID <- 0x00000016
    Locktime <- 0x000000000000d431
    TransferableOutput <- 0x000000070000000000003039000000000000d431000000010000000251025c61fbcfc078f69334f834be6dd26d55a955c3344128e060128ede3523a24a461c8943ab0859,
]
=
[
    // type_id:
    0x00, 0x00, 0x00, 0x16,
    // locktime:
    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xd4, 0x31,
    // transferable_out
    0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x00,
    0x00, 0x00, 0x30, 0x39, 0x00, 0x00, 0x00, 0x00,
    0x00, 0x00, 0xd4, 0x31, 0x00, 0x00, 0x00, 0x01,
    0x00, 0x00, 0x00, 0x02, 0x51, 0x02, 0x5c, 0x61,
    0xfb, 0xcf, 0xc0, 0x78, 0xf6, 0x93, 0x34, 0xf8,
    0x34, 0xbe, 0x6d, 0xd2, 0x6d, 0x55, 0xa9,
    0x55, 0xc3, 0x34, 0x41, 0x28, 0xe0, 0x60, 0x12,
    0x8e, 0xde, 0x35, 0x23, 0xa2, 0x4a, 0x46, 0x1c,
    0x89, 0x43, 0xab, 0x08, 0x59,
]
```

## Avalanche L1 Auth

### What Avalanche L1 Auth Contains

An Avalanche L1 auth specifies the addresses whose signatures will be provided to demonstrate that the owners of an Avalanche L1 approve something.

- **`TypeID`** is the ID for this type. It is `0x0000000a`.
- **`AddressIndices`** defines which addresses' signatures will be attached to this transaction. AddressIndices[i] is the index in an Avalanche L1 owner list that corresponds to the signature at index i in the signature list. Must be sorted low to high and must not contain duplicates.

### Gantt Avalanche L1 Auth Specification

```text
+-----------------+------------------+-------------------------------------+
| type_id         : int              | 4 bytes                             |
+-----------------+------------------+-------------------------------------+
| address_indices : []int            | 4 + 4*len(address_indices) bytes    |
+-----------------+------------------+-------------------------------------+
                                     | 8 + 4*len(address_indices) bytes    |
                                     +-------------------------------------+
```

### Proto Avalanche L1 Auth Specification

```text
message SubnetAuth {
    uint32 type_id = 1;                        // 04 bytes
    repeated AddressIndex address_indices = 2; // 04 + 4*len(address_indices) bytes
}
```

### Avalanche L1 Auth Example

Let's make an Avalanche L1 auth:

- **`TypeID`**: `10`
- **`AddressIndices`**: [`0`]

```text
[
    TypeID <- 0x0000000a
    AddressIndices <- [ 0x00000000 ]
]
=
[
    // type id
    0x00, 0x00, 0x00, 0x0a,
    // num address indices
    0x00, 0x00, 0x00, 0x01,
    // address index
    0x00, 0x00, 0x00, 0x00
]
```

## Validator

A validator verifies transactions on a blockchain.
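The 44-byte validator layout specified below can be reproduced with a short Python sketch (illustrative only; `serialize_validator` is an invented helper) using example values:

```python
import struct

def serialize_validator(node_id: bytes, start: int, end: int, wght: int) -> bytes:
    # 20-byte node ID followed by three big-endian uint64s: 44 bytes total
    assert len(node_id) == 20
    return node_id + struct.pack(">QQQ", start, end, wght)

node_id = bytes.fromhex("aa18d3991cf637aa6c162f5e95cf163f69cd8291")
v = serialize_validator(node_id, 1643068824, 1644364767, 20)
assert len(v) == 44
assert v[20:28] == (1643068824).to_bytes(8, "big")  # start
assert v[-8:] == (20).to_bytes(8, "big")            # wght
```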
### What Validator Contains

A validator contains `NodeID`, `Start`, `End`, and `Wght`.

- **`NodeID`** is the ID of the validator
- **`Start`** Unix time this validator starts validating
- **`End`** Unix time this validator stops validating
- **`Wght`** Weight of this validator used when sampling

### Gantt Validator Specification

```text
+------------------+----------+
| node_id : string | 20 bytes |
+------------------+----------+
| start   : uint64 | 8 bytes  |
+------------------+----------+
| end     : uint64 | 8 bytes  |
+------------------+----------+
| wght    : uint64 | 8 bytes  |
+------------------+----------+
|                  | 44 bytes |
+------------------+----------+
```

### Proto Validator Specification

```text
message Validator {
    string node_id = 1; // 20 bytes
    uint64 start = 2;   // 08 bytes
    uint64 end = 3;     // 08 bytes
    uint64 wght = 4;    // 08 bytes
}
```

### Validator Example

Let's make a validator:

- **`NodeID`**: `"NodeID-GWPcbFJZFfZreETSoWjPimr846mXEKCtu"`
- **`Start`**: `1643068824`
- **`End`**: `1644364767`
- **`Wght`**: `20`

```text
[
    NodeID <- 0xaa18d3991cf637aa6c162f5e95cf163f69cd8291
    Start  <- 0x0000000061ef3d98
    End    <- 0x00000000620303df
    Wght   <- 0x0000000000000014
]
=
[
    // node id
    0xaa, 0x18, 0xd3, 0x99, 0x1c, 0xf6, 0x37, 0xaa,
    0x6c, 0x16, 0x2f, 0x5e, 0x95, 0xcf, 0x16, 0x3f,
    0x69, 0xcd, 0x82, 0x91,
    // start
    0x00, 0x00, 0x00, 0x00, 0x61, 0xef, 0x3d, 0x98,
    // end
    0x00, 0x00, 0x00, 0x00, 0x62, 0x03, 0x03, 0xdf,
    // wght
    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x14,
]
```

## Rewards Owner

A rewards owner specifies where to send staking rewards when validation is done.

### What Rewards Owner Contains

A rewards owner contains a `TypeID`, `Locktime`, `Threshold`, and `Addresses`.

- **`TypeID`** is the ID for this type. It is `0x0000000b`.
- **`Locktime`** is a long that contains the Unix timestamp that this output can be spent after. The Unix timestamp is specific to the second.
- **`Threshold`** is an int that names the number of unique signatures required to spend the output. Must be less than or equal to the length of **`Addresses`**. If **`Addresses`** is empty, must be 0.
- **`Addresses`** is a list of unique addresses that correspond to the private keys that can be used to spend this output. Addresses must be sorted lexicographically. ### Gantt Rewards Owner Specification ```text +------------------------+-------------------------------+ | type_id : int | 4 bytes | +------------------------+-------------------------------+ | locktime : long | 8 bytes | +------------------------+-------------------------------+ | threshold : int | 4 bytes | +------------------------+-------------------------------+ | addresses : [][20]byte | 4 + 20 * len(addresses) bytes | +------------------------+-------------------------------+ | 20 + 20 * len(addresses) bytes | +------------------------+-------------------------------+ ``` ### Proto Rewards Owner Specification ```text message RewardsOwner { uint32 type_id = 1; // 04 bytes uint64 locktime = 2; // 08 bytes uint32 threshold = 3; // 04 bytes repeated bytes addresses = 4; // 04 bytes + 20 bytes * len(addresses) } ``` ### Rewards Owner Example Let's make a rewards owner: - **`TypeID`**: `11` - **`Locktime`**: `54321` - **`Threshold`**: `1` - **`Addresses`**: - `0x51025c61fbcfc078f69334f834be6dd26d55a955` - `0xc3344128e060128ede3523a24a461c8943ab0859` ```text [ TypeID <- 0x0000000b Locktime <- 0x000000000000d431 Threshold <- 0x00000001 Addresses <- [ 0x51025c61fbcfc078f69334f834be6dd26d55a955, 0xc3344128e060128ede3523a24a461c8943ab0859, ] ] = [ // type id 0x00, 0x00, 0x00, 0x0b, // locktime 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xd4, 0x31, // threshold: 0x00, 0x00, 0x00, 0x01, // number of addresses: 0x00, 0x00, 0x00, 0x02, // addrs[0]: 0x51, 0x02, 0x5c, 0x61, 0xfb, 0xcf, 0xc0, 0x78, 0xf6, 0x93, 0x34, 0xf8, 0x34, 0xbe, 0x6d, 0xd2, 0x6d, 0x55, 0xa9, 0x55, // addrs[1]: 0xc3, 0x34, 0x41, 0x28, 0xe0, 0x60, 0x12, 0x8e, 0xde, 0x35, 0x23, 0xa2, 0x4a, 0x46, 0x1c, 0x89, 0x43, 0xab, 0x08, 0x59, ] ``` ## Unsigned Convert Subnet To L1 TX ### What Unsigned Convert Subnet To L1 TX Contains An unsigned convert subnet to L1 TX contains a
`BaseTx`, `Subnet`, `ChainID`, `Address`, `Validators`, and `SubnetAuth`. The `TypeID` for this type is `0x00000030`. - **`BaseTx`** - **`Subnet`** ID of the Subnet to transform into an L1. Must not be the Primary Network ID. - **`ChainID`** BlockchainID where the validator manager lives. - **`Address`** Address of the validator manager. - **`Validators`** Initial continuous-fee-paying validators for the L1. - **`SubnetAuth`** Authorizes this conversion. Must be signed by the Subnet's owner. ### Gantt Unsigned Convert Subnet To L1 TX Specification ```text +------------+------------------+----------------------------------+ | base_tx : BaseTx | size(base_tx) bytes | +------------+------------------+----------------------------------+ | subnet : [32]byte | 32 bytes | +------------+------------------+----------------------------------+ | chain_id : [32]byte | 32 bytes | +------------+------------------+----------------------------------+ | address : []byte | 4 + len(address) bytes | +------------+------------------+----------------------------------+ | validators : []L1Validator | 4 + size(validators) bytes | +------------+------------------+----------------------------------+ | subnet_auth: SubnetAuth | 4 bytes + len(sig_indices) bytes | +------------+------------------+----------------------------------+ | 76 + size(base_tx) + len(address) + size(validators) + len(sig_indices) bytes | +----------------------------------------------------------------------------+ ``` ### Proto Unsigned Convert Subnet To L1 TX Specification ```text message ConvertSubnetToL1Tx { BaseTx base_tx = 1; // size(base_tx) SubnetID subnet = 2; // 32 bytes ChainID chain_id = 3; // 32 bytes bytes address = 4; // 4 + len(address) bytes repeated L1Validator validators = 5; // 4 + size(validators) bytes SubnetAuth subnet_auth = 6; // 4 bytes + len(sig_indices) } ``` ## Unsigned Register L1 Validator TX ### What Unsigned Register L1 Validator TX Contains An unsigned register L1 validator TX contains 
a `BaseTx`, `Balance`, `Signer` and `Message`. The `TypeID` for this type is `0x00000031`. - **`BaseTx`** - **`Balance`** is the amount of AVAX being provided for fees, where `Balance <= sum($AVAX inputs) - sum($AVAX outputs) - TxFee`. - **`Signer`** is a BLS signature proving ownership of the BLS public key specified in the Message for this validator. - **`Message`** is a RegisterL1ValidatorMessage payload delivered as a Warp Message. ### Gantt Unsigned Register L1 Validator TX Specification ```text +------------+------------------+----------------------------------+ | base_tx : BaseTx | size(base_tx) bytes | +------------+------------------+----------------------------------+ | balance : uint64 | 8 bytes | +------------+------------------+----------------------------------+ | signer : [96]byte | 96 bytes | +------------+------------------+----------------------------------+ | message : WarpMessage | size(message) bytes | +------------+------------------+----------------------------------+ | 104 + size(base_tx) + size(message) bytes | +------------------------------------------------------------------+ ``` ### Proto Unsigned Register L1 Validator TX Specification ```text message RegisterL1ValidatorTx { BaseTx base_tx = 1; // size(base_tx) uint64 balance = 2; // 8 bytes bytes signer = 3; // 96 bytes WarpMessage message = 4; // size(message) bytes } ``` ## Unsigned Set L1 Validator Weight TX ### What Unsigned Set L1 Validator Weight TX Contains An unsigned set L1 validator weight TX contains a `BaseTx` and `Message`. The `TypeID` for this type is `0x00000032`. - **`BaseTx`** - **`Message`** An L1ValidatorWeightMessage payload delivered as a Warp Message. Contains the validationID, nonce, and new weight for a validator. 
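The weight message above bundles a validation ID, a nonce, and the new weight into a single Warp payload. The following sketch packs such a payload; the codec ID `0`, type ID `3`, and field order follow the ACP-77 `L1ValidatorWeightMessage` layout and should be treated as assumptions here, not an official encoder:

```python
import struct

def pack_weight_message(validation_id: bytes, nonce: int, weight: int) -> bytes:
    """Pack a weight-update payload: codec ID, type ID, validationID, nonce, weight.

    Codec ID 0 and type ID 3 are assumed from the ACP-77 payload layout;
    this is an illustrative sketch only.
    """
    assert len(validation_id) == 32
    return struct.pack(">HI", 0, 3) + validation_id + struct.pack(">QQ", nonce, weight)

payload = pack_weight_message(bytes(32), nonce=1, weight=100)
assert len(payload) == 54  # 2 + 4 + 32 + 8 + 8 bytes
```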
### Gantt Unsigned Set L1 Validator Weight TX Specification ```text +------------+------------------+----------------------------------+ | base_tx : BaseTx | size(base_tx) bytes | +------------+------------------+----------------------------------+ | message : WarpMessage | size(message) bytes | +------------+------------------+----------------------------------+ | size(base_tx) + size(message) bytes | +------------------------------------------------------------------+ ``` ### Proto Unsigned Set L1 Validator Weight TX Specification ```text message SetL1ValidatorWeightTx { BaseTx base_tx = 1; // size(base_tx) WarpMessage message = 2; // size(message) bytes } ``` ## Unsigned Disable L1 Validator TX ### What Unsigned Disable L1 Validator TX Contains An unsigned disable L1 validator TX contains a `BaseTx`, `ValidationID` and `DisableAuth`. The `TypeID` for this type is `0x00000033`. - **`BaseTx`** - **`ValidationID`** ID corresponding to the validator to be disabled. - **`DisableAuth`** Authorizes this validator to be disabled. Must be signed by the DisableOwner specified when the validator was added. 
### Gantt Unsigned Disable L1 Validator TX Specification ```text +----------------+------------------+----------------------------------+ | base_tx : BaseTx | size(base_tx) bytes | +----------------+------------------+----------------------------------+ | validation_id : [32]byte | 32 bytes | +----------------+------------------+----------------------------------+ | disable_auth : Verifiable | size(disable_auth) bytes | +----------------+------------------+----------------------------------+ | 32 + size(base_tx) + size(disable_auth) bytes | +----------------------------------------------------------------------+ ``` ### Proto Unsigned Disable L1 Validator TX Specification ```text message DisableL1ValidatorTx { BaseTx base_tx = 1; // size(base_tx) bytes validation_id = 2; // 32 bytes Verifiable disable_auth = 3; // size(disable_auth) bytes } ``` ## Unsigned Increase L1 Validator Balance TX ### What Unsigned Increase L1 Validator Balance TX Contains An unsigned increase L1 validator balance TX contains a `BaseTx`, `ValidationID` and `Balance`. The `TypeID` for this type is `0x00000034`. - **`BaseTx`** - **`ValidationID`** ID corresponding to the validator. - **`Balance`** Additional AVAX amount to add to the validator's balance where `Balance <= sum($AVAX inputs) - sum($AVAX outputs) - TxFee`. 
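The `Balance` constraint above can be checked before issuing the transaction: the top-up can never exceed what the consumed inputs leave over after outputs and the transaction fee. A minimal sketch (function and variable names are illustrative, amounts in nAVAX):

```python
def max_balance_topup(avax_inputs, avax_outputs, tx_fee):
    """Largest Balance value satisfying
    Balance <= sum(inputs) - sum(outputs) - TxFee."""
    leftover = sum(avax_inputs) - sum(avax_outputs) - tx_fee
    if leftover < 0:
        raise ValueError("inputs do not cover outputs plus the fee")
    return leftover

# e.g. 5 AVAX in, 3.9 AVAX change out, 0.001 AVAX fee
assert max_balance_topup([5_000_000_000], [3_900_000_000], 1_000_000) == 1_099_000_000
```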
### Gantt Unsigned Increase L1 Validator Balance TX Specification ```text +----------------+------------------+----------------------------------+ | base_tx : BaseTx | size(base_tx) bytes | +----------------+------------------+----------------------------------+ | validation_id : [32]byte | 32 bytes | +----------------+------------------+----------------------------------+ | balance : uint64 | 8 bytes | +----------------+------------------+----------------------------------+ | 40 + size(base_tx) bytes | +----------------------------------------------------------------------+ ``` ### Proto Unsigned Increase L1 Validator Balance TX Specification ```text message IncreaseL1ValidatorBalanceTx { BaseTx base_tx = 1; // size(base_tx) bytes validation_id = 2; // 32 bytes uint64 balance = 3; // 8 bytes } ``` # C-Chain API (/docs/rpcs/c-chain/api) --- title: "C-Chain API" description: "This page is an overview of the C-Chain API associated with AvalancheGo." edit_url: https://github.com/ava-labs/avalanchego/edit/master/graft/coreth/plugin/evm/api.md --- Ethereum has its own notion of `networkID` and `chainID`. These have no relationship to Avalanche's view of networkID and chainID and are purely internal to the [C-Chain](/docs/primary-network#c-chain-contract-chain). On Mainnet, the C-Chain uses `1` and `43114` for these values. On the Fuji Testnet, it uses `1` and `43113` for these values. `networkID` and `chainID` can also be obtained using the `net_version` and `eth_chainId` methods. ## Ethereum APIs ### Endpoints #### JSON-RPC Endpoints To interact with C-Chain via the JSON-RPC endpoint: ```sh /ext/bc/C/rpc ``` To interact with other instances of the EVM via the JSON-RPC endpoint: ```sh /ext/bc/blockchainID/rpc ``` where `blockchainID` is the ID of the blockchain running the EVM.
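The endpoint paths above combine with a node's host and port to form the full request URL. A small sketch that builds the URL and a JSON-RPC payload (the localhost host is illustrative):

```python
import json

def evm_rpc_url(host: str, blockchain_id: str = "C") -> str:
    # The C-Chain uses the "C" alias; other EVM instances use their blockchainID.
    return f"{host}/ext/bc/{blockchain_id}/rpc"

url = evm_rpc_url("http://127.0.0.1:9650")
assert url == "http://127.0.0.1:9650/ext/bc/C/rpc"

# eth_chainId returns 0xa86a (43114) on Mainnet and 0xa869 (43113) on Fuji.
payload = json.dumps({"jsonrpc": "2.0", "id": 1, "method": "eth_chainId", "params": []})
assert json.loads(payload)["method"] == "eth_chainId"
```

POSTing `payload` to `url` with a `content-type: application/json` header mirrors the curl examples used throughout this page.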
#### WebSocket Endpoints The [public API node](/docs/rpcs) only supports C-Chain websocket API calls for API methods that don't exist on the C-Chain's HTTP API. To interact with C-Chain via the websocket endpoint: ```sh /ext/bc/C/ws ``` For example, to interact with the C-Chain's Ethereum APIs via websocket on localhost, you can use: ```sh ws://127.0.0.1:9650/ext/bc/C/ws ``` > On localhost, use `ws://`. When using the [Public API](/docs/rpcs) or another host that supports encryption, use `wss://`. To interact with other instances of the EVM via the websocket endpoint: ```sh /ext/bc/blockchainID/ws ``` where `blockchainID` is the ID of the blockchain running the EVM. ### Standard Ethereum APIs Avalanche offers an API interface identical to Geth's API except that it only supports the following services: - `web3_` - `net_` - `eth_` - `personal_` - `txpool_` - `debug_` (note: this is turned off on the public API node.) You can interact with these services the same exact way you'd interact with Geth (see exceptions below). See the [Ethereum Wiki's JSON-RPC Documentation](https://ethereum.org/en/developers/docs/apis/json-rpc/) and [Geth's JSON-RPC Documentation](https://geth.ethereum.org/docs/rpc/server) for a full description of this API. For batched requests on the [public API node](/docs/rpcs), the maximum number of items is 40. #### Exceptions Starting with release [`v1.12.2`](https://github.com/ava-labs/avalanchego/releases/tag/v1.12.2), `eth_getProof` has a different behavior compared to geth: - On archival nodes (nodes with `pruning-enabled` set to `false`), queries for state proofs older than 24 hours preceding the last accepted block will be rejected by default. This can be adjusted with `historical-proof-query-window`, which defines the number of blocks before the last accepted block that can be queried for state proofs. Set this option to `0` to accept a state query for any block number.
- On pruning nodes (nodes with `pruning-enabled` set to `true`), queries for state proofs outside the 32 block window after the last accepted block are always rejected. ### Avalanche - Ethereum APIs In addition to the standard Ethereum APIs, Avalanche offers `eth_baseFee`, `eth_maxPriorityFeePerGas`, and `eth_getChainConfig`. They use the same endpoint as standard Ethereum APIs: ```sh /ext/bc/C/rpc ``` #### `eth_baseFee` Get the base fee for the next block. **Signature:** ```sh eth_baseFee() -> {} ``` `result` is the hex value of the base fee for the next block. **Example Call:** ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"eth_baseFee", "params" :[] }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/C/rpc ``` **Example Response:** ```json { "jsonrpc": "2.0", "id": 1, "result": "0x34630b8a00" } ``` #### `eth_maxPriorityFeePerGas` Get the priority fee needed to be included in a block. **Signature:** ```sh eth_maxPriorityFeePerGas() -> {} ``` `result` is the hex value of the priority fee needed to be included in a block. **Example Call:** ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"eth_maxPriorityFeePerGas", "params" :[] }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/C/rpc ``` **Example Response:** ```json { "jsonrpc": "2.0", "id": 1, "result": "0x2540be400" } ``` For more information on dynamic fees see the [C-Chain section of the transaction fee documentation](/docs/rpcs/other/guides/txn-fees#c-chain-fees). ## Admin APIs The Admin API provides administrative functionality for the EVM. ### Endpoint ```sh /ext/bc/C/admin ``` ### Methods #### `admin_startCPUProfiler` Starts a CPU profile that writes to the specified file.
**Signature:** ```sh admin_startCPUProfiler() -> {} ``` **Example Call:** ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"admin_startCPUProfiler", "params" :[] }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/C/admin ``` #### `admin_stopCPUProfiler` Stops the CPU profile. **Signature:** ```sh admin_stopCPUProfiler() -> {} ``` **Example Call:** ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"admin_stopCPUProfiler", "params" :[] }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/C/admin ``` #### `admin_memoryProfile` Runs a memory profile writing to the specified file. **Signature:** ```sh admin_memoryProfile() -> {} ``` **Example Call:** ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"admin_memoryProfile", "params" :[] }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/C/admin ``` #### `admin_lockProfile` Runs a mutex profile writing to the specified file. **Signature:** ```sh admin_lockProfile() -> {} ``` **Example Call:** ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"admin_lockProfile", "params" :[] }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/C/admin ``` #### `admin_setLogLevel` Sets the log level for the EVM. **Signature:** ```sh admin_setLogLevel({ level: string }) -> {} ``` **Example Call:** ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"admin_setLogLevel", "params" :[{ "level": "debug" }] }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/C/admin ``` #### `admin_getVMConfig` Returns the current VM configuration. 
**Signature:** ```sh admin_getVMConfig() -> { config: { // VM configuration fields } } ``` **Example Call:** ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"admin_getVMConfig", "params" :[] }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/C/admin ``` ## Avalanche-Specific APIs ### Endpoint ```sh /ext/bc/C/avax ``` ### Methods #### `avax.getUTXOs` Gets all UTXOs for the specified addresses. **Signature:** ```sh avax.getUTXOs({ addresses: [string], sourceChain: string, startIndex: { address: string, utxo: string }, limit: number, encoding: string }) -> { utxos: [string], endIndex: { address: string, utxo: string }, numFetched: number, encoding: string } ``` **Example Call:** ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"avax.getUTXOs", "params" :[{ "addresses": ["X-avax1..."], "sourceChain": "X", "limit": 100, "encoding": "hex" }] }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/C/avax ``` #### `avax.issueTx` Issues a transaction to the network. **Signature:** ```sh avax.issueTx({ tx: string, encoding: string }) -> { txID: string } ``` **Example Call:** ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"avax.issueTx", "params" :[{ "tx": "0x...", "encoding": "hex" }] }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/C/avax ``` #### `avax.getAtomicTxStatus` Returns the status of the specified atomic transaction. **Signature:** ```sh avax.getAtomicTxStatus({ txID: string }) -> { status: string, blockHeight: number (optional) } ``` **Example Call:** ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"avax.getAtomicTxStatus", "params" :[{ "txID": "2QouvNW..." }] }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/C/avax ``` #### `avax.getAtomicTx` Returns the specified atomic transaction. 
**Signature:** ```sh avax.getAtomicTx({ txID: string, encoding: string }) -> { tx: string, encoding: string, blockHeight: number (optional) } ``` **Example Call:** ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"avax.getAtomicTx", "params" :[{ "txID": "2QouvNW...", "encoding": "hex" }] }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/C/avax ``` #### `avax.version` Returns the version of the VM. **Signature:** ```sh avax.version() -> { version: string } ``` **Example Call:** ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"avax.version", "params" :[] }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/C/avax ``` # AvalancheGo C-Chain RPC (/docs/rpcs/c-chain) --- title: "AvalancheGo C-Chain RPC" description: "This page is an overview of the C-Chain RPC associated with AvalancheGo." edit_url: https://github.com/ava-labs/avalanchego/edit/master/graft/coreth/plugin/evm/service.md --- > **Note:** Ethereum has its own notion of `networkID` and `chainID`. These have no relationship to Avalanche's view of networkID and chainID and are purely internal to the [C-Chain](https://build.avax.network/docs/quick-start/primary-network#c-chain). On Mainnet, the C-Chain uses `1` and `43114` for these values. On the Fuji Testnet, it uses `1` and `43113` for these values. `networkID` and `chainID` can also be obtained using the `net_version` and `eth_chainId` methods. ## Ethereum APIs ### Endpoints #### JSON-RPC Endpoints To interact with C-Chain via the JSON-RPC endpoint: ```sh /ext/bc/C/rpc ``` To interact with other instances of the EVM via the JSON-RPC endpoint: ```sh /ext/bc/blockchainID/rpc ``` where `blockchainID` is the ID of the blockchain running the EVM. #### WebSocket Endpoints > **Info:** The [public API node](https://build.avax.network/integrations#RPC%20Providers) (api.avax.network) supports HTTP APIs for X-Chain, P-Chain, and C-Chain, but websocket connections are only available for C-Chain. 
Other EVM chains are not available via websocket on the public API node. To interact with C-Chain via the websocket endpoint: ```sh /ext/bc/C/ws ``` For example, to interact with the C-Chain's Ethereum APIs via websocket on localhost, you can use: ```sh ws://127.0.0.1:9650/ext/bc/C/ws ``` > **Tip:** On localhost, use `ws://`. When using the [Public API](https://build.avax.network/integrations#RPC%20Providers) or another host that supports encryption, use `wss://`. To interact with other instances of the EVM via the websocket endpoint: ```sh /ext/bc/blockchainID/ws ``` where `blockchainID` is the ID of the blockchain running the EVM. ### Standard Ethereum APIs Avalanche offers an API interface identical to Geth's API except that it only supports the following services: - `web3_` - `net_` - `eth_` - `personal_` - `txpool_` - `debug_` (note: this is turned off on the public API node.) You can interact with these services the same exact way you'd interact with Geth (see exceptions below). See the [Ethereum Wiki's JSON-RPC Documentation](https://ethereum.org/en/developers/docs/apis/json-rpc/) and [Geth's JSON-RPC Documentation](https://geth.ethereum.org/docs/rpc/server) for a full description of this API. > **Info:** For batched requests on the [public API node](https://build.avax.network/integrations#RPC%20Providers), the maximum number of items is 40. #### Exceptions Starting with release [`v1.12.2`](https://github.com/ava-labs/avalanchego/releases/tag/v1.12.2), `eth_getProof` has a different behavior compared to geth: - On archival nodes (nodes with `pruning-enabled` set to `false`), queries for state proofs older than 24 hours preceding the last accepted block will be rejected by default. This can be adjusted with `historical-proof-query-window`, which defines the number of blocks before the last accepted block that can be queried for state proofs. Set this option to `0` to accept a state query for any block number.
- On pruning nodes (nodes with `pruning-enabled` set to `true`), queries for state proofs outside the 32 block window after the last accepted block are always rejected. ### Avalanche - Ethereum APIs In addition to the standard Ethereum APIs, Avalanche offers `eth_baseFee`, `eth_getChainConfig`, `eth_callDetailed`, `eth_getBadBlocks`, `eth_suggestPriceOptions`, and the `eth_newAcceptedTransactions` subscription. They use the same endpoint as standard Ethereum APIs: ```sh /ext/bc/C/rpc ``` #### `eth_baseFee` Get the base fee for the next block. **Signature:** ```sh eth_baseFee() -> {} ``` `result` is the hex value of the base fee for the next block. **Example Call:** ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"eth_baseFee", "params" :[] }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/C/rpc ``` **Example Response:** ```json { "jsonrpc": "2.0", "id": 1, "result": "0x34630b8a00" } ``` #### `eth_getChainConfig` Returns the chain configuration. **Signature:** ```sh eth_getChainConfig() -> {} ``` `result` is the chain configuration object. 
**Example Call:** ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"eth_getChainConfig", "params" :[] }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/C/rpc ``` **Example Response:** ```json { "jsonrpc": "2.0", "id": 1, "result": { "apricotPhase1BlockTimestamp": 1607144400, "apricotPhase2BlockTimestamp": 1607144400, "apricotPhase3BlockTimestamp": 1607144400, "apricotPhase4BlockTimestamp": 1607144400, "apricotPhase5BlockTimestamp": 1607144400, "apricotPhase6BlockTimestamp": 1607144400, "apricotPhasePost6BlockTimestamp": 1607144400, "apricotPhasePre6BlockTimestamp": 1607144400, "banffBlockTimestamp": 1607144400, "berlinBlock": 0, "byzantiumBlock": 0, "cancunTime": 1607144400, "chainId": 43112, "constantinopleBlock": 0, "cortinaBlockTimestamp": 1607144400, "daoForkBlock": 0, "daoForkSupport": true, "durangoBlockTimestamp": 1607144400, "eip150Block": 0, "eip155Block": 0, "eip158Block": 0, "etnaTimestamp": 1607144400, "fortunaTimestamp": 1607144400, "graniteTimestamp": 253399622400, "homesteadBlock": 0, "istanbulBlock": 0, "londonBlock": 0, "muirGlacierBlock": 0, "petersburgBlock": 0, "shanghaiTime": 1607144400 } } ``` #### `eth_callDetailed` Performs the same operation as `eth_call`, but returns additional execution details including gas used, any EVM error code, and return data. **Signature:** ```sh eth_callDetailed({ from: address (optional), to: address, gas: quantity (optional), gasPrice: quantity (optional), maxFeePerGas: quantity (optional), maxPriorityFeePerGas: quantity (optional), value: quantity (optional), data: data (optional) }, blockNumberOrHash, stateOverrides (optional)) -> { gas: quantity, errCode: number, err: string, returnData: data } ``` **Example Call:** ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"eth_callDetailed", "params" :[{ "to": "0x...", "data": "0x..." 
}, "latest"] }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/C/rpc ``` **Example Response:** ```json { "jsonrpc": "2.0", "id": 1, "result": { "gas": 21000, "errCode": 0, "err": "", "returnData": "0x" } } ``` #### `eth_getBadBlocks` Returns a list of the last bad blocks that the client has seen on the network. **Signature:** ```sh eth_getBadBlocks() -> []{ hash: hash, block: object, rlp: string, reason: object } ``` **Example Call:** ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"eth_getBadBlocks", "params" :[] }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/C/rpc ``` #### `eth_suggestPriceOptions` Returns suggested gas price options (slow, normal, fast) for the current network conditions. Each option includes a `maxPriorityFeePerGas` and a `maxFeePerGas` value. **Signature:** ```sh eth_suggestPriceOptions() -> { slow: { maxPriorityFeePerGas: quantity, maxFeePerGas: quantity }, normal: { maxPriorityFeePerGas: quantity, maxFeePerGas: quantity }, fast: { maxPriorityFeePerGas: quantity, maxFeePerGas: quantity } } ``` **Example Call:** ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"eth_suggestPriceOptions", "params" :[] }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/C/rpc ``` **Example Response:** ```json { "jsonrpc": "2.0", "id": 1, "result": { "slow": { "maxPriorityFeePerGas": "0x5d21dba00", "maxFeePerGas": "0x6fc23ac00" }, "normal": { "maxPriorityFeePerGas": "0x2540be400", "maxFeePerGas": "0x4a817c800" }, "fast": { "maxPriorityFeePerGas": "0x12a05f200", "maxFeePerGas": "0x37e11d600" } } } ``` #### `eth_newAcceptedTransactions` (Subscription) Creates a subscription that fires each time a transaction is accepted (finalized) on the chain. This is an Avalanche-specific subscription not available in standard Ethereum. If `fullTx` is `true`, the full transaction object is sent; otherwise only the transaction hash is sent. Available only via WebSocket. 
**Signature:** ```sh {"jsonrpc":"2.0", "id":1, "method":"eth_subscribe", "params":["newAcceptedTransactions", {"fullTx": false}]} ``` **Example Call:** ```sh wscat -c ws://127.0.0.1:9650/ext/bc/C/ws -x '{"jsonrpc":"2.0", "id":1, "method":"eth_subscribe", "params":["newAcceptedTransactions"]}' ``` **Example Notification:** ```json { "jsonrpc": "2.0", "method": "eth_subscription", "params": { "subscription": "0x...", "result": "0xtransactionhash..." } } ``` ## Warp APIs The Warp API enables interaction with Avalanche Warp Messaging (AWM), allowing cross-chain communication between Avalanche blockchains. It provides methods for retrieving warp messages and their BLS signatures, as well as aggregating signatures from validators. The Warp API is enabled when the `warp-api-enabled` flag is set to `true` in the node configuration. ### Warp API Endpoint ```sh /ext/bc/C/rpc ``` ### Warp API Methods #### `warp_getMessage` Returns the raw bytes of a warp message by its ID. **Signature:** ```sh warp_getMessage({messageID: string}) -> data ``` - `messageID` is the ID of the warp message. **Example Call:** ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"warp_getMessage", "params" :["2PsAgvhGczsTNxCMgVJG39gKV4W9ETXWL1mMJk3xoSRHMBcyY"] }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/C/rpc ``` **Example Response:** ```json { "jsonrpc": "2.0", "id": 1, "result": "0x..." } ``` #### `warp_getMessageSignature` Returns BLS signature for the specified warp message. **Signature:** ```sh warp_getMessageSignature({messageID: string}) -> data ``` - `messageID` is the ID of the warp message. **Example Call:** ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"warp_getMessageSignature", "params" :["2PsAgvhGczsTNxCMgVJG39gKV4W9ETXWL1mMJk3xoSRHMBcyY"] }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/C/rpc ``` **Example Response:** ```json { "jsonrpc": "2.0", "id": 1, "result": "0x..." 
} ``` #### `warp_getBlockSignature` Returns the BLS signature associated with a blockID. **Signature:** ```sh warp_getBlockSignature({blockID: string}) -> data ``` - `blockID` is the ID of the block to be signed. **Example Call:** ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"warp_getBlockSignature", "params" :["2PsAgvhGczsTNxCMgVJG39gKV4W9ETXWL1mMJk3xoSRHMBcyY"] }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/C/rpc ``` **Example Response:** ```json { "jsonrpc": "2.0", "id": 1, "result": "0x..." } ``` #### `warp_getMessageAggregateSignature` Fetches BLS signatures from validators for the specified warp message and aggregates them into a single signed warp message. The caller specifies the quorum numerator (the denominator is 100). **Signature:** ```sh warp_getMessageAggregateSignature({messageID: string, quorumNum: number, subnetID: string (optional)}) -> data ``` - `messageID` is the ID of the warp message. - `quorumNum` is the quorum numerator (e.g., `67` for 67% quorum). The denominator is 100. - `subnetID` (optional) is the subnet to aggregate signatures from. Defaults to the current subnet. **Example Call:** ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"warp_getMessageAggregateSignature", "params" :["2PsAgvhGczsTNxCMgVJG39gKV4W9ETXWL1mMJk3xoSRHMBcyY", 67, ""] }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/C/rpc ``` **Example Response:** ```json { "jsonrpc": "2.0", "id": 1, "result": "0x..." } ``` The returned bytes are the serialized signed warp message. #### `warp_getBlockAggregateSignature` Fetches BLS signatures from validators for the specified block and aggregates them into a single signed warp message. The caller specifies the quorum numerator (the denominator is 100). **Signature:** ```sh warp_getBlockAggregateSignature({blockID: string, quorumNum: number, subnetID: string (optional)}) -> data ``` - `blockID` is the ID of the block.
- `quorumNum` is the quorum numerator (e.g., `67` for 67% quorum). The denominator is 100. - `subnetID` (optional) is the subnet to aggregate signatures from. Defaults to the current subnet. **Example Call:** ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"warp_getBlockAggregateSignature", "params" :["2PsAgvhGczsTNxCMgVJG39gKV4W9ETXWL1mMJk3xoSRHMBcyY", 67, ""] }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/C/rpc ``` **Example Response:** ```json { "jsonrpc": "2.0", "id": 1, "result": "0x..." } ``` The returned bytes are the serialized signed warp message. ## Admin APIs The Admin API provides administrative functionality for the EVM. ### Admin API Endpoint ```sh /ext/bc/C/admin ``` ### Admin API Methods #### `admin_startCPUProfiler` Starts a CPU profile that writes to the specified file. **Signature:** ```sh admin_startCPUProfiler() -> {} ``` **Example Call:** ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"admin_startCPUProfiler", "params" :[] }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/C/admin ``` #### `admin_stopCPUProfiler` Stops the CPU profile. **Signature:** ```sh admin_stopCPUProfiler() -> {} ``` **Example Call:** ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"admin_stopCPUProfiler", "params" :[] }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/C/admin ``` #### `admin_memoryProfile` Runs a memory profile writing to the specified file. **Signature:** ```sh admin_memoryProfile() -> {} ``` **Example Call:** ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"admin_memoryProfile", "params" :[] }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/C/admin ``` #### `admin_lockProfile` Runs a mutex profile writing to the specified file. 
**Signature:** ```sh admin_lockProfile() -> {} ``` **Example Call:** ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"admin_lockProfile", "params" :[] }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/C/admin ``` #### `admin_setLogLevel` Sets the log level for the EVM. **Signature:** ```sh admin_setLogLevel({ level: string }) -> {} ``` **Example Call:** ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"admin_setLogLevel", "params" :[{ "level": "debug" }] }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/C/admin ``` #### `admin_getVMConfig` Returns the current VM configuration. **Signature:** ```sh admin_getVMConfig() -> { config: { // VM configuration fields } } ``` **Example Call:** ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"admin_getVMConfig", "params" :[] }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/C/admin ``` ## Avalanche-Specific APIs ### Avalanche-Specific API Endpoint ```sh /ext/bc/C/avax ``` ### Avalanche-Specific API Methods #### `avax.getUTXOs` Gets all UTXOs for the specified addresses. **Signature:** ```sh avax.getUTXOs({ addresses: [string], sourceChain: string, startIndex: { address: string, utxo: string }, limit: number, encoding: string }) -> { utxos: [string], endIndex: { address: string, utxo: string }, numFetched: string, encoding: string } ``` **Example Call:** ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"avax.getUTXOs", "params" :[{ "addresses": ["X-avax1..."], "sourceChain": "X", "limit": 100, "encoding": "hex" }] }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/C/avax ``` #### `avax.issueTx` Issues a transaction to the network. 
**Signature:**

```sh
avax.issueTx({
    tx: string,
    encoding: string
}) -> {
    txID: string
}
```

**Example Call:**

```sh
curl -X POST --data '{
    "jsonrpc":"2.0",
    "id"     :1,
    "method" :"avax.issueTx",
    "params" :[{
        "tx": "0x...",
        "encoding": "hex"
    }]
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/C/avax
```

#### `avax.getAtomicTxStatus`

Returns the status of the specified atomic transaction.

**Signature:**

```sh
avax.getAtomicTxStatus({ txID: string }) -> {
    status: string,
    blockHeight: number (optional)
}
```

**Example Call:**

```sh
curl -X POST --data '{
    "jsonrpc":"2.0",
    "id"     :1,
    "method" :"avax.getAtomicTxStatus",
    "params" :[{ "txID": "2QouvNW..." }]
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/C/avax
```

#### `avax.getAtomicTx`

Returns the specified atomic transaction.

**Signature:**

```sh
avax.getAtomicTx({
    txID: string,
    encoding: string
}) -> {
    tx: string,
    encoding: string,
    blockHeight: number (optional)
}
```

**Example Call:**

```sh
curl -X POST --data '{
    "jsonrpc":"2.0",
    "id"     :1,
    "method" :"avax.getAtomicTx",
    "params" :[{
        "txID": "2QouvNW...",
        "encoding": "hex"
    }]
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/C/avax
```

# Transaction Format (/docs/rpcs/c-chain/txn-format)

---
title: Transaction Format
---

This page is the single source of truth for how atomic transactions are serialized in `Coreth`. This document uses the [primitive serialization](/docs/rpcs/other/standards/serialization-primitives) format for packing and [secp256k1](/docs/rpcs/other/standards/cryptographic-primitives#cryptography-in-the-avalanche-virtual-machine) for cryptographic user identification.

## Codec ID

Some data is prepended with a codec ID (uint16) that denotes how the data should be deserialized. Right now, the only valid codec ID is 0 (`0x00 0x00`).

## Inputs

Inputs to Coreth Atomic Transactions are either an `EVMInput` from this chain or a `TransferableInput` (which contains a `SECP256K1TransferInput`) from another chain.
The `EVMInput` will be used in `ExportTx` to spend funds from this chain, while the `TransferableInput` will be used to import atomic UTXOs from another chain.

## EVM Input

Input type that specifies an EVM account to deduct the funds from as part of an `ExportTx`.

### What EVM Input Contains

An EVM Input contains an `address`, `amount`, `assetID`, and `nonce`.

- **`Address`** is the EVM address from which to transfer funds.
- **`Amount`** is the amount of the asset to be transferred (specified in nAVAX for AVAX and the smallest denomination for all other assets).
- **`AssetID`** is the ID of the asset to transfer.
- **`Nonce`** is the nonce of the EVM account exporting funds.

### Gantt EVM Input Specification

```text
+----------+----------+-------------------------+
| address  : [20]byte |                20 bytes |
+----------+----------+-------------------------+
| amount   : uint64   |                08 bytes |
+----------+----------+-------------------------+
| asset_id : [32]byte |                32 bytes |
+----------+----------+-------------------------+
| nonce    : uint64   |                08 bytes |
+----------+----------+-------------------------+
                      |                68 bytes |
                      +-------------------------+
```

### Proto EVM Input Specification

```text
message {
    bytes address = 1;  // 20 bytes
    uint64 amount = 2;  // 08 bytes
    bytes assetID = 3;  // 32 bytes
    uint64 nonce = 4;   // 08 bytes
}
```

### EVM Input Example

Let's make an EVM Input:

- `Address: 0x8db97c7cece249c2b98bdc0226cc4c2a57bf52fc`
- `Amount: 2000000`
- `AssetID: 0xdbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db`
- `Nonce: 0`

```text
[
    Address <- 0x8db97c7cece249c2b98bdc0226cc4c2a57bf52fc,
    Amount  <- 0x00000000001e8480
    AssetID <- 0xdbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db
    Nonce   <- 0x0000000000000000
]
=
[
    // address:
    0x8d, 0xb9, 0x7c, 0x7c, 0xec, 0xe2, 0x49, 0xc2,
    0xb9, 0x8b, 0xdc, 0x02, 0x26, 0xcc, 0x4c, 0x2a,
    0x57, 0xbf, 0x52, 0xfc,
    // amount:
    0x00, 0x00, 0x00, 0x00, 0x00, 0x1e, 0x84, 0x80,
    // assetID:
    0xdb, 0xcf, 0x89, 0x0f, 0x77, 0xf4, 0x9b, 0x96,
    0x85, 0x76, 0x48, 0xb7, 0x2b, 0x77, 0xf9, 0xf8,
    0x29, 0x37, 0xf2, 0x8a, 0x68, 0x70, 0x4a, 0xf0,
    0x5d, 0xa0, 0xdc, 0x12, 0xba, 0x53, 0xf2, 0xdb,
    // nonce:
    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
]
```

## Transferable Input

Transferable Input wraps a `SECP256K1TransferInput`. Transferable inputs describe a specific UTXO with a provided transfer input.

### What Transferable Input Contains

A transferable input contains a `TxID`, `UTXOIndex`, `AssetID`, and an `Input`.

- **`TxID`** is a 32-byte array that defines which transaction this input is consuming an output from.
- **`UTXOIndex`** is an int that defines which UTXO this input is consuming in the specified transaction.
- **`AssetID`** is a 32-byte array that defines which asset this input references.
- **`Input`** is a `SECP256K1TransferInput`, as defined below.

### Gantt Transferable Input Specification

```text
+------------+----------+------------------------+
| tx_id      : [32]byte |               32 bytes |
+------------+----------+------------------------+
| utxo_index : int      |               04 bytes |
+------------+----------+------------------------+
| asset_id   : [32]byte |               32 bytes |
+------------+----------+------------------------+
| input      : Input    |      size(input) bytes |
+------------+----------+------------------------+
                        | 68 + size(input) bytes |
                        +------------------------+
```

### Proto Transferable Input Specification

```text
message TransferableInput {
    bytes tx_id = 1;        // 32 bytes
    uint32 utxo_index = 2;  // 04 bytes
    bytes asset_id = 3;     // 32 bytes
    Input input = 4;        // size(input)
}
```

### Transferable Input Example

Let's make a transferable input:

- `TxID: 0x6613a40dcdd8d22ea4aa99a4c84349056317cf550b6685e045e459954f258e59`
- `UTXOIndex: 1`
- `AssetID: 0xdbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db`
- `Input: "Example SECP256K1 Transfer Input from below"`

```text
[
    TxID      <- 0x6613a40dcdd8d22ea4aa99a4c84349056317cf550b6685e045e459954f258e59
    UTXOIndex <- 0x00000001
    AssetID   <- 0xdbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db
    Input     <- 0x00000005000000746a5288000000000100000000
]
=
[
    // txID:
    0x66, 0x13, 0xa4, 0x0d, 0xcd, 0xd8, 0xd2, 0x2e,
    0xa4, 0xaa, 0x99, 0xa4, 0xc8, 0x43, 0x49, 0x05,
    0x63, 0x17, 0xcf, 0x55, 0x0b, 0x66, 0x85, 0xe0,
    0x45, 0xe4, 0x59, 0x95, 0x4f, 0x25, 0x8e, 0x59,
    // utxoIndex:
    0x00, 0x00, 0x00, 0x01,
    // assetID:
    0xdb, 0xcf, 0x89, 0x0f, 0x77, 0xf4, 0x9b, 0x96,
    0x85, 0x76, 0x48, 0xb7, 0x2b, 0x77, 0xf9, 0xf8,
    0x29, 0x37, 0xf2, 0x8a, 0x68, 0x70, 0x4a, 0xf0,
    0x5d, 0xa0, 0xdc, 0x12, 0xba, 0x53, 0xf2, 0xdb,
    // input:
    0x00, 0x00, 0x00, 0x05, 0x00, 0x00, 0x00, 0x74,
    0x6a, 0x52, 0x88, 0x00, 0x00, 0x00, 0x00, 0x01,
    0x00, 0x00, 0x00, 0x00,
]
```

## SECP256K1 Transfer Input

A [secp256k1](/docs/rpcs/other/standards/cryptographic-primitives#cryptography-in-the-avalanche-virtual-machine) transfer input allows for spending an unspent secp256k1 transfer output.

### What SECP256K1 Transfer Input Contains

A secp256k1 transfer input contains a `TypeID`, an `Amount`, and `AddressIndices`.

- **`TypeID`** is the ID for this input type. It is `0x00000005`.
- **`Amount`** is a long that specifies the quantity that this input should be consuming from the UTXO. Must be positive. Must be equal to the amount specified in the UTXO.
- **`AddressIndices`** is a list of unique ints that define the private keys that are being used to spend the UTXO. Each UTXO has an array of addresses that can spend the UTXO. Each int represents the index in this address array that will sign this transaction. The array must be sorted low to high.
### Gantt SECP256K1 Transfer Input Specification

```text
+-------------------------+-------------------------------------+
| type_id         : int   |                             4 bytes |
+-----------------+-------+-------------------------------------+
| amount          : long  |                             8 bytes |
+-----------------+-------+-------------------------------------+
| address_indices : []int |  4 + 4 * len(address_indices) bytes |
+-----------------+-------+-------------------------------------+
                          | 16 + 4 * len(address_indices) bytes |
                          +-------------------------------------+
```

### Proto SECP256K1 Transfer Input Specification

```text
message SECP256K1TransferInput {
    uint32 typeID = 1;                   // 04 bytes
    uint64 amount = 2;                   // 08 bytes
    repeated uint32 address_indices = 3; // 04 bytes + 04 bytes * len(address_indices)
}
```

### SECP256K1 Transfer Input Example

Let's make a secp256k1 transfer input with:

- **`TypeID`**: 5
- **`Amount`**: 500000000000
- **`AddressIndices`**: \[0\]

```text
[
    TypeID         <- 0x00000005
    Amount         <- 500000000000 = 0x000000746a528800,
    AddressIndices <- [0x00000000]
]
=
[
    // type id:
    0x00, 0x00, 0x00, 0x05,
    // amount:
    0x00, 0x00, 0x00, 0x74, 0x6a, 0x52, 0x88, 0x00,
    // length:
    0x00, 0x00, 0x00, 0x01,
    // addressIndices[0]:
    0x00, 0x00, 0x00, 0x00,
]
```

## Outputs

Outputs to Coreth Atomic Transactions are either an `EVMOutput` to be added to the balance of an address on this chain or a `TransferableOutput` (which contains a `SECP256K1TransferOutput`) to be moved to another chain.

The EVM Output will be used in `ImportTx` to add funds to this chain, while the `TransferableOutput` will be used to export atomic UTXOs to another chain.

## EVM Output

Output type specifying a state change to be applied to an EVM account as part of an `ImportTx`.

### What EVM Output Contains

An EVM Output contains an `address`, `amount`, and `assetID`.

- **`Address`** is the EVM address that will receive the funds.
- **`Amount`** is the amount of the asset to be transferred (specified in nAVAX for AVAX and the smallest denomination for all other assets).
- **`AssetID`** is the ID of the asset to transfer.

### Gantt EVM Output Specification

```text
+----------+----------+-------------------------+
| address  : [20]byte |                20 bytes |
+----------+----------+-------------------------+
| amount   : uint64   |                08 bytes |
+----------+----------+-------------------------+
| asset_id : [32]byte |                32 bytes |
+----------+----------+-------------------------+
                      |                60 bytes |
                      +-------------------------+
```

### Proto EVM Output Specification

```text
message {
    bytes address = 1;  // 20 bytes
    uint64 amount = 2;  // 08 bytes
    bytes assetID = 3;  // 32 bytes
}
```

### EVM Output Example

Let's make an EVM Output:

- `Address: 0x0eb5ccb85c29009b6060decb353a38ea3b52cd20`
- `Amount: 500000000000`
- `AssetID: 0xdbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db`

```text
[
    Address <- 0x0eb5ccb85c29009b6060decb353a38ea3b52cd20,
    Amount  <- 0x000000746a528800
    AssetID <- 0xdbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db
]
=
[
    // address:
    0x0e, 0xb5, 0xcc, 0xb8, 0x5c, 0x29, 0x00, 0x9b,
    0x60, 0x60, 0xde, 0xcb, 0x35, 0x3a, 0x38, 0xea,
    0x3b, 0x52, 0xcd, 0x20,
    // amount:
    0x00, 0x00, 0x00, 0x74, 0x6a, 0x52, 0x88, 0x00,
    // assetID:
    0xdb, 0xcf, 0x89, 0x0f, 0x77, 0xf4, 0x9b, 0x96,
    0x85, 0x76, 0x48, 0xb7, 0x2b, 0x77, 0xf9, 0xf8,
    0x29, 0x37, 0xf2, 0x8a, 0x68, 0x70, 0x4a, 0xf0,
    0x5d, 0xa0, 0xdc, 0x12, 0xba, 0x53, 0xf2, 0xdb,
]
```

## Transferable Output

Transferable outputs wrap a `SECP256K1TransferOutput` with an asset ID.

### What Transferable Output Contains

A transferable output contains an `AssetID` and an `Output` which is a `SECP256K1TransferOutput`.

- **`AssetID`** is a 32-byte array that defines which asset this output references.
- **`Output`** is a `SECP256K1TransferOutput` as defined below.
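Since a transferable output is just the 32-byte asset ID followed by the serialized output, the wrapping can be sketched in a few lines of Python. This is illustrative only (not the production codec), using the asset ID and transfer output values from this document's examples:

```python
import struct

def pack_secp256k1_transfer_output(amount, locktime, threshold, addresses):
    """Pack a SECP256K1TransferOutput (typeID 7) per the layout in this document."""
    raw = struct.pack(">IQQI", 0x00000007, amount, locktime, threshold)
    raw += struct.pack(">I", len(addresses))
    for address in sorted(addresses):  # addresses must be sorted
        raw += address                 # each address is 20 bytes
    return raw

def pack_transferable_output(asset_id, output):
    """A TransferableOutput is the 32-byte asset ID followed by the output."""
    return asset_id + output

asset_id = bytes.fromhex(
    "dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db")
address = bytes.fromhex("66f90db6137a78f76b3693f7f2bc507956dae563")
inner = pack_secp256k1_transfer_output(1_000_000, 0, 1, [address])
print(len(pack_transferable_output(asset_id, inner)))  # 80 (32 + 28 + 20)
```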
### Gantt Transferable Output Specification

```text
+----------+----------+-------------------------+
| asset_id : [32]byte |                32 bytes |
+----------+----------+-------------------------+
| output   : Output   |      size(output) bytes |
+----------+----------+-------------------------+
                      | 32 + size(output) bytes |
                      +-------------------------+
```

### Proto Transferable Output Specification

```text
message TransferableOutput {
    bytes asset_id = 1;  // 32 bytes
    Output output = 2;   // size(output)
}
```

### Transferable Output Example

Let's make a transferable output:

- `AssetID: 0xdbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db`
- `Output: "Example SECP256K1 Transfer Output from below"`

```text
[
    AssetID <- 0xdbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db
    Output  <- 0x0000000700000000000f42400000000000000000000000010000000166f90db6137a78f76b3693f7f2bc507956dae563
]
=
[
    // assetID:
    0xdb, 0xcf, 0x89, 0x0f, 0x77, 0xf4, 0x9b, 0x96,
    0x85, 0x76, 0x48, 0xb7, 0x2b, 0x77, 0xf9, 0xf8,
    0x29, 0x37, 0xf2, 0x8a, 0x68, 0x70, 0x4a, 0xf0,
    0x5d, 0xa0, 0xdc, 0x12, 0xba, 0x53, 0xf2, 0xdb,
    // output:
    0x00, 0x00, 0x00, 0x07,
    0x00, 0x00, 0x00, 0x00, 0x00, 0x0f, 0x42, 0x40,
    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
    0x00, 0x00, 0x00, 0x01,
    0x00, 0x00, 0x00, 0x01,
    0x66, 0xf9, 0x0d, 0xb6, 0x13, 0x7a, 0x78, 0xf7,
    0x6b, 0x36, 0x93, 0xf7, 0xf2, 0xbc, 0x50, 0x79,
    0x56, 0xda, 0xe5, 0x63,
]
```

## SECP256K1 Transfer Output

A [secp256k1](/docs/rpcs/other/standards/cryptographic-primitives#cryptography-in-the-avalanche-virtual-machine) transfer output allows for sending a quantity of an asset to a collection of addresses after a specified Unix time.
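For orientation before the field-by-field breakdown, here's a minimal Python decoder sketch for this output type (illustrative only, not the Coreth codec), run against the worked example from this section:

```python
import struct

def parse_secp256k1_transfer_output(raw):
    """Decode a serialized SECP256K1TransferOutput into its fields."""
    header = ">IQQII"  # typeID, amount, locktime, threshold, address count
    type_id, amount, locktime, threshold, count = struct.unpack_from(header, raw)
    assert type_id == 0x00000007
    body = raw[struct.calcsize(header):]
    addresses = [body[i * 20:(i + 1) * 20] for i in range(count)]
    return amount, locktime, threshold, addresses

# The worked example from this section: amount 1000000, locktime 0,
# threshold 1, one address.
raw = bytes.fromhex(
    "0000000700000000000f42400000000000000000"
    "000000010000000166f90db6137a78f76b3693f7f2bc507956dae563"
)
print(parse_secp256k1_transfer_output(raw)[0])  # 1000000
```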
### What SECP256K1 Transfer Output Contains A secp256k1 transfer output contains a `TypeID`, `Amount`, `Locktime`, `Threshold`, and `Addresses`. - **`TypeID`** is the ID for this output type. It is `0x00000007`. - **`Amount`** is a long that specifies the quantity of the asset that this output owns. Must be positive. - **`Locktime`** is a long that contains the Unix timestamp that this output can be spent after. The Unix timestamp is specific to the second. - **`Threshold`** is an int that names the number of unique signatures required to spend the output. Must be less than or equal to the length of **`Addresses`**. If **`Addresses`** is empty, must be 0. - **`Addresses`** is a list of unique addresses that correspond to the private keys that can be used to spend this output. Addresses must be sorted lexicographically. ### Gantt SECP256K1 Transfer Output Specification ```text +-----------+------------+--------------------------------+ | type_id : int | 4 bytes | +-----------+------------+--------------------------------+ | amount : long | 8 bytes | +-----------+------------+--------------------------------+ | locktime : long | 8 bytes | +-----------+------------+--------------------------------+ | threshold : int | 4 bytes | +-----------+------------+--------------------------------+ | addresses : [][20]byte | 4 + 20 * len(addresses) bytes | +-----------+------------+--------------------------------+ | 28 + 20 * len(addresses) bytes | +--------------------------------+ ``` ### Proto SECP256K1 Transfer Output Specification ```text message SECP256K1TransferOutput { uint32 typeID = 1; // 04 bytes uint64 amount = 2; // 08 bytes uint64 locktime = 3; // 08 bytes uint32 threshold = 4; // 04 bytes repeated bytes addresses = 5; // 04 bytes + 20 bytes * len(addresses) } ``` ### SECP256K1 Transfer Output Example Let's make a secp256k1 transfer output with: - **`TypeID`**: 7 - **`Amount`**: 1000000 - **`Locktime`**: 0 - **`Threshold`**: 1 - **`Addresses`**: - 
0x66f90db6137a78f76b3693f7f2bc507956dae563

```text
[
    TypeID    <- 0x00000007
    Amount    <- 0x00000000000f4240
    Locktime  <- 0x0000000000000000
    Threshold <- 0x00000001
    Addresses <- [
        0x66f90db6137a78f76b3693f7f2bc507956dae563
    ]
]
=
[
    // typeID:
    0x00, 0x00, 0x00, 0x07,
    // amount:
    0x00, 0x00, 0x00, 0x00, 0x00, 0x0f, 0x42, 0x40,
    // locktime:
    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
    // threshold:
    0x00, 0x00, 0x00, 0x01,
    // number of addresses:
    0x00, 0x00, 0x00, 0x01,
    // addrs[0]:
    0x66, 0xf9, 0x0d, 0xb6, 0x13, 0x7a, 0x78, 0xf7,
    0x6b, 0x36, 0x93, 0xf7, 0xf2, 0xbc, 0x50, 0x79,
    0x56, 0xda, 0xe5, 0x63,
]
```

## Atomic Transactions

Atomic transactions are used to move funds between chains. There are two types: `ImportTx` and `ExportTx`.

## ExportTx

ExportTx is a transaction to export funds from Coreth to a different chain.

### What ExportTx Contains

An ExportTx contains a `typeID`, `networkID`, `blockchainID`, `destinationChain`, `inputs`, and `exportedOutputs`.

- **`typeID`** is an int that defines the type for an ExportTx. The typeID for an ExportTx is 1.
- **`networkID`** is an int that defines which Avalanche network this transaction is meant to be issued to. This could refer to Mainnet, Fuji, etc. and is different than the EVM's network ID.
- **`blockchainID`** is a 32-byte array that defines which blockchain this transaction was issued to.
- **`destinationChain`** is a 32-byte array that defines which blockchain this transaction exports funds to.
- **`inputs`** is an array of EVM Inputs to fund the ExportTx.
- **`exportedOutputs`** is an array of TransferableOutputs to be transferred to `destinationChain`.
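Putting the pieces together, the unsigned ExportTx body is four fixed-width fields followed by two length-prefixed arrays. This Python sketch is illustrative only; the zero-filled byte strings are placeholders standing in for a real blockchain ID, destination chain ID, serialized 68-byte EVM input, and serialized 80-byte transferable output:

```python
import struct

def pack_export_tx(network_id, blockchain_id, destination_chain,
                   inputs, exported_outputs):
    """Assemble an unsigned ExportTx: typeID 1, networkID, blockchainID,
    destinationChain, then length-prefixed input and output arrays."""
    tx = struct.pack(">II", 1, network_id)   # typeID for an ExportTx is 1
    tx += blockchain_id + destination_chain  # 32 bytes each
    tx += struct.pack(">I", len(inputs)) + b"".join(inputs)
    tx += struct.pack(">I", len(exported_outputs)) + b"".join(exported_outputs)
    return tx

# Placeholder input/output bytes; real ones come from the packers above.
tx = pack_export_tx(12345, bytes(32), bytes(32), [bytes(68)], [bytes(80)])
print(len(tx))  # 228 = 80 + 68 + 80, matching the Gantt total
```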
### Gantt ExportTx Specification

```text
+---------------------+----------------------+-------------------------------------------------+
| typeID              : int                  |                                        04 bytes |
+---------------------+----------------------+-------------------------------------------------+
| networkID           : int                  |                                        04 bytes |
+---------------------+----------------------+-------------------------------------------------+
| blockchainID        : [32]byte             |                                        32 bytes |
+---------------------+----------------------+-------------------------------------------------+
| destinationChain    : [32]byte             |                                        32 bytes |
+---------------------+----------------------+-------------------------------------------------+
| inputs              : []EvmInput           |                          4 + size(inputs) bytes |
+---------------------+----------------------+-------------------------------------------------+
| exportedOutputs     : []TransferableOutput |                 4 + size(exportedOutputs) bytes |
+---------------------+----------------------+-------------------------------------------------+
                                             | 80 + size(inputs) + size(exportedOutputs) bytes |
                                             +-------------------------------------------------+
```

### ExportTx Example

Let's make an ExportTx:

- **`TypeID`**: `1`
- **`NetworkID`**: `12345`
- **`BlockchainID`**: `0x91060eabfb5a571720109b5896e5ff00010a1cfe6b103d585e6ebf27b97a1735`
- **`DestinationChain`**: `0xd891ad56056d9c01f18f43f58b5c784ad07a4a49cf3d1f11623804b5cba2c6bf`
- **`Inputs`**:
  - `"Example EVMInput as defined above"`
- **`ExportedOutputs`**:
  - `"Example TransferableOutput as defined above"`

```text
[
    TypeID           <- 0x00000001
    NetworkID        <- 0x00003039
    BlockchainID     <- 0x91060eabfb5a571720109b5896e5ff00010a1cfe6b103d585e6ebf27b97a1735
    DestinationChain <- 0xd891ad56056d9c01f18f43f58b5c784ad07a4a49cf3d1f11623804b5cba2c6bf
    Inputs           <- [
        0x8db97c7cece249c2b98bdc0226cc4c2a57bf52fc00000000001e8480dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db0000000000000000
    ]
    ExportedOutputs  <- [
        0xdbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db0000000700000000000f42400000000000000000000000010000000166f90db6137a78f76b3693f7f2bc507956dae563
    ]
]
=
[
    // typeID:
    0x00, 0x00, 0x00, 0x01,
    // networkID:
    0x00, 0x00, 0x30, 0x39,
    // blockchainID:
    0x91, 0x06, 0x0e, 0xab, 0xfb, 0x5a, 0x57, 0x17,
    0x20, 0x10, 0x9b, 0x58, 0x96, 0xe5, 0xff, 0x00,
    0x01, 0x0a, 0x1c, 0xfe, 0x6b, 0x10, 0x3d, 0x58,
    0x5e, 0x6e, 0xbf, 0x27, 0xb9, 0x7a, 0x17, 0x35,
    // destination_chain:
    0xd8, 0x91, 0xad, 0x56, 0x05, 0x6d, 0x9c, 0x01,
    0xf1, 0x8f, 0x43, 0xf5, 0x8b, 0x5c, 0x78, 0x4a,
    0xd0, 0x7a, 0x4a, 0x49, 0xcf, 0x3d, 0x1f, 0x11,
    0x62, 0x38, 0x04, 0xb5, 0xcb, 0xa2, 0xc6, 0xbf,
    // inputs[] count:
    0x00, 0x00, 0x00, 0x01,
    // inputs[0]
    0x8d, 0xb9, 0x7c, 0x7c, 0xec, 0xe2, 0x49, 0xc2,
    0xb9, 0x8b, 0xdc, 0x02, 0x26, 0xcc, 0x4c, 0x2a,
    0x57, 0xbf, 0x52, 0xfc,
    0x00, 0x00, 0x00, 0x00, 0x00, 0x1e, 0x84, 0x80,
    0xdb, 0xcf, 0x89, 0x0f, 0x77, 0xf4, 0x9b, 0x96,
    0x85, 0x76, 0x48, 0xb7, 0x2b, 0x77, 0xf9, 0xf8,
    0x29, 0x37, 0xf2, 0x8a, 0x68, 0x70, 0x4a, 0xf0,
    0x5d, 0xa0, 0xdc, 0x12, 0xba, 0x53, 0xf2, 0xdb,
    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
    // exportedOutputs[] count
    0x00, 0x00, 0x00, 0x01,
    // exportedOutputs[0]
    0xdb, 0xcf, 0x89, 0x0f, 0x77, 0xf4, 0x9b, 0x96,
    0x85, 0x76, 0x48, 0xb7, 0x2b, 0x77, 0xf9, 0xf8,
    0x29, 0x37, 0xf2, 0x8a, 0x68, 0x70, 0x4a, 0xf0,
    0x5d, 0xa0, 0xdc, 0x12, 0xba, 0x53, 0xf2, 0xdb,
    0x00, 0x00, 0x00, 0x07,
    0x00, 0x00, 0x00, 0x00, 0x00, 0x0f, 0x42, 0x40,
    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
    0x00, 0x00, 0x00, 0x01,
    0x00, 0x00, 0x00, 0x01,
    0x66, 0xf9, 0x0d, 0xb6, 0x13, 0x7a, 0x78, 0xf7,
    0x6b, 0x36, 0x93, 0xf7, 0xf2, 0xbc, 0x50, 0x79,
    0x56, 0xda, 0xe5, 0x63,
]
```

## ImportTx

ImportTx is a transaction to import funds to Coreth from another chain.

### What ImportTx Contains

An ImportTx contains a `typeID`, `networkID`, `blockchainID`, `sourceChain`, `importedInputs`, and `Outs`.
- **`typeID`** is an int that defines the type for an ImportTx. The typeID for an `ImportTx` is 0.
- **`networkID`** is an int that defines which Avalanche network this transaction is meant to be issued to. This could refer to Mainnet, Fuji, etc. and is different than the EVM's network ID.
- **`blockchainID`** is a 32-byte array that defines which blockchain this transaction was issued to.
- **`sourceChain`** is a 32-byte array that defines the blockchain from which funds are imported.
- **`importedInputs`** is an array of TransferableInputs to fund the ImportTx.
- **`Outs`** is an array of EVM Outputs to be imported to this chain.

### Gantt ImportTx Specification

```text
+---------------------+----------------------+-------------------------------------------------+
| typeID              : int                  |                                        04 bytes |
+---------------------+----------------------+-------------------------------------------------+
| networkID           : int                  |                                        04 bytes |
+---------------------+----------------------+-------------------------------------------------+
| blockchainID        : [32]byte             |                                        32 bytes |
+---------------------+----------------------+-------------------------------------------------+
| sourceChain         : [32]byte             |                                        32 bytes |
+---------------------+----------------------+-------------------------------------------------+
| importedInputs      : []TransferableInput  |                  4 + size(importedInputs) bytes |
+---------------------+----------------------+-------------------------------------------------+
| outs                : []EVMOutput          |                            4 + size(outs) bytes |
+---------------------+----------------------+-------------------------------------------------+
                                             |    80 + size(importedInputs) + size(outs) bytes |
                                             +-------------------------------------------------+
```

### ImportTx Example

Let's make an ImportTx:

- **`TypeID`**: `0`
- **`NetworkID`**: `12345`
- **`BlockchainID`**: `0x91060eabfb5a571720109b5896e5ff00010a1cfe6b103d585e6ebf27b97a1735`
- **`SourceChain`**: `0xd891ad56056d9c01f18f43f58b5c784ad07a4a49cf3d1f11623804b5cba2c6bf`
- **`ImportedInputs`**:
  - `"Example TransferableInput as defined above"`
- **`Outs`**:
  - `"Example EVMOutput as defined above"`

```text
[
    TypeID         <- 0x00000000
    NetworkID      <- 0x00003039
    BlockchainID   <- 0x91060eabfb5a571720109b5896e5ff00010a1cfe6b103d585e6ebf27b97a1735
    SourceChain    <- 0xd891ad56056d9c01f18f43f58b5c784ad07a4a49cf3d1f11623804b5cba2c6bf
    ImportedInputs <- [
        0x6613a40dcdd8d22ea4aa99a4c84349056317cf550b6685e045e459954f258e5900000001dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db00000005000000746a5288000000000100000000
    ]
    Outs <- [
        0x0eb5ccb85c29009b6060decb353a38ea3b52cd20000000746a528800dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db
    ]
]
=
[
    // typeID:
    0x00, 0x00, 0x00, 0x00,
    // networkID:
    0x00, 0x00, 0x30, 0x39,
    // blockchainID:
    0x91, 0x06, 0x0e, 0xab, 0xfb, 0x5a, 0x57, 0x17,
    0x20, 0x10, 0x9b, 0x58, 0x96, 0xe5, 0xff, 0x00,
    0x01, 0x0a, 0x1c, 0xfe, 0x6b, 0x10, 0x3d, 0x58,
    0x5e, 0x6e, 0xbf, 0x27, 0xb9, 0x7a, 0x17, 0x35,
    // sourceChain:
    0xd8, 0x91, 0xad, 0x56, 0x05, 0x6d, 0x9c, 0x01,
    0xf1, 0x8f, 0x43, 0xf5, 0x8b, 0x5c, 0x78, 0x4a,
    0xd0, 0x7a, 0x4a, 0x49, 0xcf, 0x3d, 0x1f, 0x11,
    0x62, 0x38, 0x04, 0xb5, 0xcb, 0xa2, 0xc6, 0xbf,
    // importedInputs[] count:
    0x00, 0x00, 0x00, 0x01,
    // importedInputs[0]
    0x66, 0x13, 0xa4, 0x0d, 0xcd, 0xd8, 0xd2, 0x2e,
    0xa4, 0xaa, 0x99, 0xa4, 0xc8, 0x43, 0x49, 0x05,
    0x63, 0x17, 0xcf, 0x55, 0x0b, 0x66, 0x85, 0xe0,
    0x45, 0xe4, 0x59, 0x95, 0x4f, 0x25, 0x8e, 0x59,
    0x00, 0x00, 0x00, 0x01,
    0xdb, 0xcf, 0x89, 0x0f, 0x77, 0xf4, 0x9b, 0x96,
    0x85, 0x76, 0x48, 0xb7, 0x2b, 0x77, 0xf9, 0xf8,
    0x29, 0x37, 0xf2, 0x8a, 0x68, 0x70, 0x4a, 0xf0,
    0x5d, 0xa0, 0xdc, 0x12, 0xba, 0x53, 0xf2, 0xdb,
    0x00, 0x00, 0x00, 0x05,
    0x00, 0x00, 0x00, 0x74, 0x6a, 0x52, 0x88, 0x00,
    0x00, 0x00, 0x00, 0x01,
    0x00, 0x00, 0x00, 0x00,
    // outs[] count
    0x00, 0x00, 0x00, 0x01,
    // outs[0]
    0x0e, 0xb5, 0xcc, 0xb8, 0x5c, 0x29, 0x00, 0x9b,
    0x60, 0x60, 0xde, 0xcb, 0x35, 0x3a, 0x38, 0xea,
    0x3b, 0x52, 0xcd, 0x20,
    0x00, 0x00, 0x00, 0x74, 0x6a, 0x52, 0x88, 0x00,
    0xdb, 0xcf, 0x89, 0x0f, 0x77, 0xf4, 0x9b, 0x96,
    0x85, 0x76, 0x48, 0xb7, 0x2b, 0x77, 0xf9, 0xf8,
    0x29, 0x37, 0xf2, 0x8a, 0x68, 0x70, 0x4a, 0xf0,
    0x5d, 0xa0, 0xdc, 0x12, 0xba, 0x53, 0xf2, 0xdb,
]
```

## Credentials

Credentials have one possible type: `SECP256K1Credential`. Each credential is paired with an input. The order of the credentials matches the order of the inputs.

## SECP256K1 Credential

A [secp256k1](/docs/rpcs/other/standards/cryptographic-primitives#cryptography-in-the-avalanche-virtual-machine) credential contains a list of 65-byte recoverable signatures.

### What SECP256K1 Credential Contains

- **`TypeID`** is the ID for this type. It is `0x00000009`.
- **`Signatures`** is an array of 65-byte recoverable signatures. The order of the signatures must match the input's signature indices.

### Gantt SECP256K1 Credential Specification

```text
+------------------------------+---------------------------------+
| type_id    : int             |                         4 bytes |
+-----------------+------------+---------------------------------+
| signatures : [][65]byte      |  4 + 65 * len(signatures) bytes |
+-----------------+------------+---------------------------------+
                               |  8 + 65 * len(signatures) bytes |
                               +---------------------------------+
```

### Proto SECP256K1 Credential Specification

```text
message SECP256K1Credential {
    uint32 typeID = 1;             // 4 bytes
    repeated bytes signatures = 2; // 4 bytes + 65 bytes * len(signatures)
}
```

### SECP256K1 Credential Example

Let's make a credential with:

- **`TypeID`**: 9
- **`Signatures`**:
  - `0x0acccf47a820549a84428440e2421975138790e41be262f7197f3d93faa26cc8741060d743ffaf025782c8c86b862d2b9febebe7d352f0b4591afbd1a737f8a300`

```text
[
    TypeID     <- 0x00000009
    Signatures <- [
        0x0acccf47a820549a84428440e2421975138790e41be262f7197f3d93faa26cc8741060d743ffaf025782c8c86b862d2b9febebe7d352f0b4591afbd1a737f8a300,
    ]
]
=
[
    // Type ID
    0x00, 0x00, 0x00, 0x09,
    // length:
    0x00, 0x00, 0x00, 0x01,
    // sig[0]
    0x0a, 0xcc, 0xcf, 0x47, 0xa8, 0x20, 0x54, 0x9a,
    0x84, 0x42, 0x84, 0x40, 0xe2, 0x42, 0x19, 0x75,
    0x13, 0x87, 0x90, 0xe4, 0x1b, 0xe2, 0x62, 0xf7,
    0x19, 0x7f, 0x3d, 0x93, 0xfa, 0xa2, 0x6c, 0xc8,
    0x74, 0x10, 0x60, 0xd7, 0x43, 0xff, 0xaf, 0x02,
    0x57, 0x82, 0xc8, 0xc8, 0x6b, 0x86, 0x2d, 0x2b,
    0x9f, 0xeb, 0xeb, 0xe7, 0xd3, 0x52, 0xf0, 0xb4,
    0x59, 0x1a, 0xfb, 0xd1, 0xa7, 0x37, 0xf8, 0xa3,
    0x00,
]
```

## Signed Transaction

A signed transaction contains an unsigned `AtomicTx` and credentials.

### What Signed Transaction Contains

A signed transaction contains a `CodecID`, `AtomicTx`, and `Credentials`.

- **`CodecID`** is the codec ID used to serialize the transaction. The only current valid codec ID is `00 00`.
- **`AtomicTx`** is an atomic transaction, as described above.
- **`Credentials`** is an array of credentials. Each credential corresponds to the input at the same index in the AtomicTx.

### Gantt Signed Transaction Specification

```text
+---------------------+--------------+------------------------------------------------+
| codec_id            : uint16       |                                        2 bytes |
+---------------------+--------------+------------------------------------------------+
| atomic_tx           : AtomicTx     |                          size(atomic_tx) bytes |
+---------------------+--------------+------------------------------------------------+
| credentials         : []Credential |                    4 + size(credentials) bytes |
+---------------------+--------------+------------------------------------------------+
                                     |  6 + size(atomic_tx) + size(credentials) bytes |
                                     +------------------------------------------------+
```

### Proto Signed Transaction Specification

```text
message Tx {
    uint16 codec_id = 1;                 // 2 bytes
    AtomicTx atomic_tx = 2;              // size(atomic_tx)
    repeated Credential credentials = 3; // 4 bytes + size(credentials)
}
```

### Signed Transaction Example

Let's make a signed transaction that uses the unsigned transaction and credential from the previous examples.
- **`CodecID`**: `0`
- **`UnsignedTx`**: `0x000000000000303991060eabfb5a571720109b5896e5ff00010a1cfe6b103d585e6ebf27b97a1735d891ad56056d9c01f18f43f58b5c784ad07a4a49cf3d1f11623804b5cba2c6bf000000016613a40dcdd8d22ea4aa99a4c84349056317cf550b6685e045e459954f258e5900000001dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db00000005000000746a5288000000000100000000000000010eb5ccb85c29009b6060decb353a38ea3b52cd20000000746a528800dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db`
- **`Credentials`**: `0x00000009000000010acccf47a820549a84428440e2421975138790e41be262f7197f3d93faa26cc8741060d743ffaf025782c8c86b862d2b9febebe7d352f0b4591afbd1a737f8a300`

```text
[
    CodecID    <- 0x0000
    UnsignedTx <- 0x000000000000303991060eabfb5a571720109b5896e5ff00010a1cfe6b103d585e6ebf27b97a1735d891ad56056d9c01f18f43f58b5c784ad07a4a49cf3d1f11623804b5cba2c6bf000000016613a40dcdd8d22ea4aa99a4c84349056317cf550b6685e045e459954f258e5900000001dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db00000005000000746a5288000000000100000000000000010eb5ccb85c29009b6060decb353a38ea3b52cd20000000746a528800dbcf890f77f49b96857648b72b77f9f82937f28a68704af05da0dc12ba53f2db
    Credentials <- [
        0x00000009000000010acccf47a820549a84428440e2421975138790e41be262f7197f3d93faa26cc8741060d743ffaf025782c8c86b862d2b9febebe7d352f0b4591afbd1a737f8a300,
    ]
]
=
[
    // Codec ID
    0x00, 0x00,
    // unsigned atomic transaction:
    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x30, 0x39,
    0x91, 0x06, 0x0e, 0xab, 0xfb, 0x5a, 0x57, 0x17,
    0x20, 0x10, 0x9b, 0x58, 0x96, 0xe5, 0xff, 0x00,
    0x01, 0x0a, 0x1c, 0xfe, 0x6b, 0x10, 0x3d, 0x58,
    0x5e, 0x6e, 0xbf, 0x27, 0xb9, 0x7a, 0x17, 0x35,
    0xd8, 0x91, 0xad, 0x56, 0x05, 0x6d, 0x9c, 0x01,
    0xf1, 0x8f, 0x43, 0xf5, 0x8b, 0x5c, 0x78, 0x4a,
    0xd0, 0x7a, 0x4a, 0x49, 0xcf, 0x3d, 0x1f, 0x11,
    0x62, 0x38, 0x04, 0xb5, 0xcb, 0xa2, 0xc6, 0xbf,
    0x00, 0x00, 0x00, 0x01,
    0x66, 0x13, 0xa4, 0x0d, 0xcd, 0xd8, 0xd2, 0x2e,
    0xa4, 0xaa, 0x99, 0xa4, 0xc8, 0x43, 0x49, 0x05,
    0x63, 0x17, 0xcf, 0x55, 0x0b, 0x66, 0x85, 0xe0,
    0x45, 0xe4, 0x59, 0x95, 0x4f, 0x25, 0x8e, 0x59,
    0x00, 0x00, 0x00, 0x01,
    0xdb, 0xcf, 0x89, 0x0f, 0x77, 0xf4, 0x9b, 0x96,
    0x85, 0x76, 0x48, 0xb7, 0x2b, 0x77, 0xf9, 0xf8,
    0x29, 0x37, 0xf2, 0x8a, 0x68, 0x70, 0x4a, 0xf0,
    0x5d, 0xa0, 0xdc, 0x12, 0xba, 0x53, 0xf2, 0xdb,
    0x00, 0x00, 0x00, 0x05,
    0x00, 0x00, 0x00, 0x74, 0x6a, 0x52, 0x88, 0x00,
    0x00, 0x00, 0x00, 0x01,
    0x00, 0x00, 0x00, 0x00,
    0x00, 0x00, 0x00, 0x01,
    0x0e, 0xb5, 0xcc, 0xb8, 0x5c, 0x29, 0x00, 0x9b,
    0x60, 0x60, 0xde, 0xcb, 0x35, 0x3a, 0x38, 0xea,
    0x3b, 0x52, 0xcd, 0x20,
    0x00, 0x00, 0x00, 0x74, 0x6a, 0x52, 0x88, 0x00,
    0xdb, 0xcf, 0x89, 0x0f, 0x77, 0xf4, 0x9b, 0x96,
    0x85, 0x76, 0x48, 0xb7, 0x2b, 0x77, 0xf9, 0xf8,
    0x29, 0x37, 0xf2, 0x8a, 0x68, 0x70, 0x4a, 0xf0,
    0x5d, 0xa0, 0xdc, 0x12, 0xba, 0x53, 0xf2, 0xdb,
    // number of credentials:
    0x00, 0x00, 0x00, 0x01,
    // credential[0]:
    0x00, 0x00, 0x00, 0x09,
    0x00, 0x00, 0x00, 0x01,
    0x0a, 0xcc, 0xcf, 0x47, 0xa8, 0x20, 0x54, 0x9a,
    0x84, 0x42, 0x84, 0x40, 0xe2, 0x42, 0x19, 0x75,
    0x13, 0x87, 0x90, 0xe4, 0x1b, 0xe2, 0x62, 0xf7,
    0x19, 0x7f, 0x3d, 0x93, 0xfa, 0xa2, 0x6c, 0xc8,
    0x74, 0x10, 0x60, 0xd7, 0x43, 0xff, 0xaf, 0x02,
    0x57, 0x82, 0xc8, 0xc8, 0x6b, 0x86, 0x2d, 0x2b,
    0x9f, 0xeb, 0xeb, 0xe7, 0xd3, 0x52, 0xf0, 0xb4,
    0x59, 0x1a, 0xfb, 0xd1, 0xa7, 0x37, 0xf8, 0xa3,
    0x00,
]
```

## UTXO

A UTXO is a standalone representation of a transaction output.

### What UTXO Contains

A UTXO contains a `CodecID`, `TxID`, `UTXOIndex`, `AssetID`, and `Output`.

- **`CodecID`** is the codec ID used to serialize this UTXO. The only valid `CodecID` is `00 00`.
- **`TxID`** is a 32-byte transaction ID. Transaction IDs are calculated by taking sha256 of the bytes of the signed transaction.
- **`UTXOIndex`** is an int that specifies which output of the transaction identified by **`TxID`** created this UTXO.
- **`AssetID`** is a 32-byte array that defines which asset this UTXO references.
- **`Output`** is the output object that created this UTXO.
The serialization of Outputs was defined above. ### Gantt UTXO Specification ```text +--------------+----------+-------------------------+ | codec_id : uint16 | 2 bytes | +--------------+----------+-------------------------+ | tx_id : [32]byte | 32 bytes | +--------------+----------+-------------------------+ | output_index : int | 4 bytes | +--------------+----------+-------------------------+ | asset_id : [32]byte | 32 bytes | +--------------+----------+-------------------------+ | output : Output | size(output) bytes | +--------------+----------+-------------------------+ | 70 + size(output) bytes | +-------------------------+ ``` ### Proto UTXO Specification ```text message Utxo { uint16 codec_id = 1; // 02 bytes bytes tx_id = 2; // 32 bytes uint32 output_index = 3; // 04 bytes bytes asset_id = 4; // 32 bytes Output output = 5; // size(output) } ``` ### UTXO Example Let's make a UTXO from the signed transaction created above: - **`CodecID`**: `0` - **`TxID`**: `0xf966750f438867c3c9828ddcdbe660e21ccdbb36a9276958f011ba472f75d4e7` - **`UTXOIndex`**: 0 = 0x00000000 - **`AssetID`**: `0x000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f` - **`Output`**: `"Example EVMOutput as defined above"` ```text [ CodecID <- 0x0000 TxID <- 0xf966750f438867c3c9828ddcdbe660e21ccdbb36a9276958f011ba472f75d4e7 UTXOIndex <- 0x00000000 AssetID <- 0x000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f Output <- 0x000000070000000000003039000000000000d431000000010000000251025c61fbcfc078f69334f834be6dd26d55a955c3344128e060128ede3523a24a461c8943ab0859 ] = [ // Codec ID: 0x00, 0x00, // txID: 0xf9, 0x66, 0x75, 0x0f, 0x43, 0x88, 0x67, 0xc3, 0xc9, 0x82, 0x8d, 0xdc, 0xdb, 0xe6, 0x60, 0xe2, 0x1c, 0xcd, 0xbb, 0x36, 0xa9, 0x27, 0x69, 0x58, 0xf0, 0x11, 0xba, 0x47, 0x2f, 0x75, 0xd4, 0xe7, // utxo index: 0x00, 0x00, 0x00, 0x00, // assetID: 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 
0x17, 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f,
// output:
0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x30, 0x39, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0xd4, 0x31, 0x00, 0x00, 0x00, 0x01,
0x00, 0x00, 0x00, 0x02, 0x00, 0x01, 0x02, 0x03,
0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b,
0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13,
0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x1a, 0x1b,
0x1c, 0x1d, 0x1e, 0x1f, 0x20, 0x21, 0x22, 0x23,
0x24, 0x25, 0x26, 0x27,
]
```

# CLI Commands (/docs/tooling/avalanche-cli/cli-commands)

---
title: "CLI Commands"
description: "Complete list of Avalanche CLI commands and their usage."
edit_url: https://github.com/ava-labs/avalanche-cli/edit/main/cmd/commands.md
---

## avalanche blockchain

The blockchain command suite provides a collection of tools for developing and deploying Blockchains.

To get started, use the blockchain create command wizard to walk through the configuration of your very first Blockchain. Then, go ahead and deploy it with the blockchain deploy command. You can use the rest of the commands to manage your Blockchain configurations and live deployments.

**Usage:**
```bash
avalanche blockchain [subcommand] [flags]
```

**Subcommands:**

- [`addValidator`](#avalanche-blockchain-addvalidator): The blockchain addValidator command adds a node as a validator to an L1 of the user-provided deployed network. If the network is proof of authority, the owner of the validator manager contract must sign the transaction. If the network is proof of stake, the node must stake the L1's staking token. Both processes will issue a RegisterL1ValidatorTx on the P-Chain. This command currently only works on Blockchains deployed to either the Fuji Testnet or Mainnet.
- [`changeOwner`](#avalanche-blockchain-changeowner): The blockchain changeOwner command changes the owner of the deployed Blockchain.
- [`changeWeight`](#avalanche-blockchain-changeweight): The blockchain changeWeight command changes the weight of an L1 validator.
The L1 has to be a Proof of Authority L1.
- [`configure`](#avalanche-blockchain-configure): AvalancheGo nodes support several different configuration files. Each network (a Subnet or an L1) has its own config, which applies to all blockchains/VMs in the network (see https://build.avax.network/docs/nodes/configure/avalanche-l1-configs). Each blockchain within the network can have its own chain config (see https://build.avax.network/docs/nodes/chain-configs/primary-network/c-chain and https://github.com/ava-labs/subnet-evm/blob/master/plugin/evm/config/config.go for subnet-evm options). A chain can also have special requirements for the AvalancheGo node configuration itself (see https://build.avax.network/docs/nodes/configure/configs-flags). This command allows you to set all those files.
- [`create`](#avalanche-blockchain-create): The blockchain create command builds a new genesis file to configure your Blockchain. By default, the command runs an interactive wizard. It walks you through all the steps you need to create your first Blockchain. The tool supports deploying Subnet-EVM and custom VMs. You can create a custom, user-generated genesis with a custom VM by providing the path to your genesis and VM binaries with the --genesis and --vm flags. By default, running the command with a blockchainName that already exists causes the command to fail. If you'd like to overwrite an existing configuration, pass the -f flag.
- [`delete`](#avalanche-blockchain-delete): The blockchain delete command deletes an existing blockchain configuration.
- [`deploy`](#avalanche-blockchain-deploy): The blockchain deploy command deploys your Blockchain configuration locally, to Fuji Testnet, or to Mainnet. At the end of the call, the command prints the RPC URL you can use to interact with the Subnet. Avalanche-CLI only supports deploying an individual Blockchain once per network. Subsequent attempts to deploy the same Blockchain to the same network (local, Fuji, Mainnet) aren't allowed.
If you'd like to redeploy a Blockchain locally for testing, you must first call avalanche network clean to reset all deployed chain state. Subsequent local deploys redeploy the chain with fresh state.

You can deploy the same Blockchain to multiple networks, so you can take your locally tested Blockchain and deploy it on Fuji or Mainnet.
- [`describe`](#avalanche-blockchain-describe): The blockchain describe command prints the details of a Blockchain configuration to the console. By default, the command prints a summary of the configuration. By providing the --genesis flag, the command instead prints out the raw genesis file.
- [`export`](#avalanche-blockchain-export): The blockchain export command writes the details of an existing Blockchain deploy to a file. The command prompts for an output path. You can also provide one with the --output flag.
- [`import`](#avalanche-blockchain-import): Import blockchain configurations into avalanche-cli. This command suite supports importing from a file created on another computer, or importing from blockchains running public networks (e.g. created manually or with the deprecated subnet-cli).
- [`join`](#avalanche-blockchain-join): The blockchain join command configures your validator node to begin validating a new Blockchain. To complete this process, you must have access to the machine running your validator. If the CLI is running on the same machine as your validator, it can generate or update your node's config file automatically. Alternatively, the command can print the necessary instructions to update your node manually. To complete the validation process, the Blockchain's admins must add the NodeID of your validator to the Blockchain's allow list by calling addValidator with your NodeID. After you update your validator's config, you need to restart your validator manually. If you provide the --avalanchego-config flag, this command attempts to edit the config file at that path.
This command currently only supports Blockchains deployed on the Fuji Testnet and Mainnet. - [`list`](#avalanche-blockchain-list): The blockchain list command prints the names of all created Blockchain configurations. Without any flags, it prints some general, static information about the Blockchain. With the --deployed flag, the command shows additional information including the VMID, BlockchainID and SubnetID. - [`publish`](#avalanche-blockchain-publish): The blockchain publish command publishes the Blockchain's VM to a repository. - [`removeValidator`](#avalanche-blockchain-removevalidator): The blockchain removeValidator command stops a whitelisted blockchain network validator from validating your deployed Blockchain. To remove the validator from the Subnet's allow list, provide the validator's unique NodeID. You can bypass these prompts by providing the values with flags. - [`stats`](#avalanche-blockchain-stats): The blockchain stats command prints validator statistics for the given Blockchain. - [`upgrade`](#avalanche-blockchain-upgrade): The blockchain upgrade command suite provides a collection of tools for updating your developmental and deployed Blockchains. - [`validators`](#avalanche-blockchain-validators): The blockchain validators command lists the validators of a blockchain and provides several statistics about them. - [`vmid`](#avalanche-blockchain-vmid): The blockchain vmid command prints the virtual machine ID (VMID) for the given Blockchain. **Flags:** ```bash -h, --help help for blockchain --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### addValidator The blockchain addValidator command adds a node as a validator to an L1 of the user provided deployed network. If the network is proof of authority, the owner of the validator manager contract must sign the transaction. 
If the network is proof of stake, the node must stake the L1's staking token. Both processes will issue a RegisterL1ValidatorTx on the P-Chain. This command currently only works on Blockchains deployed to either the Fuji Testnet or Mainnet. **Usage:** ```bash avalanche blockchain addValidator [subcommand] [flags] ``` **Flags:** ```bash --aggregator-allow-private-peers allow the signature aggregator to connect to peers with private IP (default true) --aggregator-extra-endpoints strings endpoints for extra nodes that are needed in signature aggregation --aggregator-log-level string log level to use with signature aggregator (default "Debug") --aggregator-log-to-stdout use stdout for signature aggregator logs --balance float set the AVAX balance of the validator that will be used for continuous fee on P-Chain --blockchain-genesis-key use genesis allocated key to pay fees for completing the validator's registration (blockchain gas token) --blockchain-key string CLI stored key to use to pay fees for completing the validator's registration (blockchain gas token) --blockchain-private-key string private key to use to pay fees for completing the validator's registration (blockchain gas token) --bls-proof-of-possession string set the BLS proof of possession of the validator to add --bls-public-key string set the BLS public key of the validator to add --cluster string operate on the given cluster --create-local-validator create additional local validator and add it to existing running local node --default-duration (for Subnets, not L1s) set duration so as to validate until primary validator ends its period --default-start-time (for Subnets, not L1s) use default start time for subnet validator (5 minutes later for fuji & mainnet, 30 seconds later for devnet) --default-validator-params (for Subnets, not L1s) use default weight/start/duration params for subnet validator --delegation-fee uint16 (PoS only) delegation fee (in bips) (default 100) --devnet operate on a devnet network 
--disable-owner string P-Chain address that will able to disable the validator with a P-Chain transaction --endpoint string use the given endpoint for network operations -e, --ewoq use ewoq key [fuji/devnet only] -f, --fuji testnet operate on fuji (alias to testnet -h, --help help for addValidator -k, --key string select the key to use [fuji/devnet only] -g, --ledger use ledger instead of key (always true on mainnet, defaults to false on fuji/devnet) --ledger-addrs strings use the given ledger addresses -l, --local operate on a local network -m, --mainnet operate on mainnet --node-endpoint string gather node id/bls from publicly available avalanchego apis on the given endpoint --node-id string node-id of the validator to add --output-tx-path string (for Subnets, not L1s) file path of the add validator tx --partial-sync set primary network partial sync for new validators (default true) --remaining-balance-owner string P-Chain address that will receive any leftover AVAX from the validator when it is removed from Subnet --rpc string connect to validator manager at the given rpc endpoint --stake-amount uint (PoS only) amount of tokens to stake --staking-period duration how long this validator will be staking --start-time string (for Subnets, not L1s) UTC start time when this validator starts validating, in 'YYYY-MM-DD HH:MM:SS' format --subnet-auth-keys strings (for Subnets, not L1s) control keys that will be used to authenticate add validator tx -t, --testnet fuji operate on testnet (alias to fuji) --wait-for-tx-acceptance (for Subnets, not L1s) just issue the add validator tx, without waiting for its acceptance (default true) --weight uint set the staking weight of the validator to add (default 20) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### changeOwner The blockchain changeOwner changes the owner of the deployed 
Blockchain. **Usage:** ```bash avalanche blockchain changeOwner [subcommand] [flags] ``` **Flags:** ```bash --auth-keys strings control keys that will be used to authenticate transfer blockchain ownership tx --cluster string operate on the given cluster --control-keys strings addresses that may make blockchain changes --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -e, --ewoq use ewoq key [fuji/devnet] -f, --fuji testnet operate on fuji (alias to testnet -h, --help help for changeOwner -k, --key string select the key to use [fuji/devnet] -g, --ledger use ledger instead of key (always true on mainnet, defaults to false on fuji/devnet) --ledger-addrs strings use the given ledger addresses -l, --local operate on a local network -m, --mainnet operate on mainnet --output-tx-path string file path of the transfer blockchain ownership tx -s, --same-control-key use the fee-paying key as control key -t, --testnet fuji operate on testnet (alias to fuji) --threshold uint32 required number of control key signatures to make blockchain changes --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### changeWeight The blockchain changeWeight command changes the weight of a L1 Validator. The L1 has to be a Proof of Authority L1. 
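As an illustration, an invocation might look like the following sketch; the blockchain name and node ID are placeholders, not real deployments:

```bash
# Hypothetical example: set a validator's weight to 30 on Fuji.
avalanche blockchain changeWeight myblockchain \
  --fuji \
  --node-id NodeID-7Xhw2mDxuDS44j42TCB6U5579esbSt3Lg \
  --weight 30
```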
**Usage:** ```bash avalanche blockchain changeWeight [subcommand] [flags] ``` **Flags:** ```bash --cluster string operate on the given cluster --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -e, --ewoq use ewoq key [fuji/devnet only] -f, --fuji testnet operate on fuji (alias to testnet -h, --help help for changeWeight -k, --key string select the key to use [fuji/devnet only] -g, --ledger use ledger instead of key (always true on mainnet, defaults to false on fuji/devnet) --ledger-addrs strings use the given ledger addresses -l, --local operate on a local network -m, --mainnet operate on mainnet --node-endpoint string gather node id/bls from publicly available avalanchego apis on the given endpoint --node-id string node-id of the validator -t, --testnet fuji operate on testnet (alias to fuji) --weight uint set the new staking weight of the validator --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### configure AvalancheGo nodes support several different configuration files. Each network (a Subnet or an L1) has their own config which applies to all blockchains/VMs in the network (see https://build.avax.network/docs/nodes/configure/avalanche-l1-configs) Each blockchain within the network can have its own chain config (see https://build.avax.network/docs/nodes/chain-configs/primary-network/c-chain https://github.com/ava-labs/subnet-evm/blob/master/plugin/evm/config/config.go for subnet-evm options). A chain can also have special requirements for the AvalancheGo node configuration itself (see https://build.avax.network/docs/nodes/configure/configs-flags). This command allows you to set all those files. 
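For example, the config files can be supplied directly via flags rather than through the prompts; the blockchain name and paths below are placeholders:

```bash
# Hypothetical example: point the CLI at locally prepared config files.
avalanche blockchain configure myblockchain \
  --chain-config ./chain.json \
  --subnet-config ./subnet.json \
  --node-config ./node-config.json
```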
**Usage:** ```bash avalanche blockchain configure [subcommand] [flags] ``` **Flags:** ```bash --chain-config string path to the chain configuration -h, --help help for configure --node-config string path to avalanchego node configuration --per-node-chain-config string path to per node chain configuration for local network --subnet-config string path to the subnet configuration --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### create The blockchain create command builds a new genesis file to configure your Blockchain. By default, the command runs an interactive wizard. It walks you through all the steps you need to create your first Blockchain. The tool supports deploying Subnet-EVM, and custom VMs. You can create a custom, user-generated genesis with a custom VM by providing the path to your genesis and VM binaries with the --genesis and --vm flags. By default, running the command with a blockchainName that already exists causes the command to fail. If you'd like to overwrite an existing configuration, pass the -f flag. 
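For instance, a non-interactive invocation might look like this sketch; the name, chain ID, and token symbol are placeholders:

```bash
# Hypothetical example: create a Subnet-EVM blockchain without the wizard.
avalanche blockchain create myblockchain \
  --evm \
  --evm-chain-id 12345 \
  --evm-token MYTOK
```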
**Usage:** ```bash avalanche blockchain create [subcommand] [flags] ``` **Flags:** ```bash --custom use a custom VM template --custom-vm-branch string custom vm branch or commit --custom-vm-build-script string custom vm build-script --custom-vm-path string file path of custom vm to use --custom-vm-repo-url string custom vm repository url --debug enable blockchain debugging (default true) --evm use the Subnet-EVM as the base template --evm-chain-id uint chain ID to use with Subnet-EVM --evm-defaults deprecation notice: use '--production-defaults' --evm-token string token symbol to use with Subnet-EVM --external-gas-token use a gas token from another blockchain -f, --force overwrite the existing configuration if one exists --from-github-repo generate custom VM binary from github repository --genesis string file path of genesis to use -h, --help help for create --icm interoperate with other blockchains using ICM --icm-registry-at-genesis setup ICM registry smart contract on genesis [experimental] --latest use latest Subnet-EVM released version, takes precedence over --vm-version --pre-release use latest Subnet-EVM pre-released version, takes precedence over --vm-version --production-defaults use default production settings for your blockchain --proof-of-authority use proof of authority(PoA) for validator management --proof-of-stake use proof of stake(PoS) for validator management --proxy-contract-owner string EVM address that controls ProxyAdmin for TransparentProxy of ValidatorManager contract --reward-basis-points uint (PoS only) reward basis points for PoS Reward Calculator (default 100) --sovereign set to false if creating non-sovereign blockchain (default true) --teleporter interoperate with other blockchains using ICM --test-defaults use default test settings for your blockchain --validator-manager-owner string EVM address that controls Validator Manager Owner --vm string file path of custom vm to use. 
alias to custom-vm-path --vm-version string version of Subnet-EVM template to use --warp generate a vm with warp support (needed for ICM) (default true) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### delete The blockchain delete command deletes an existing blockchain configuration. **Usage:** ```bash avalanche blockchain delete [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for delete --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### deploy The blockchain deploy command deploys your Blockchain configuration to Local Network, to Fuji Testnet, DevNet or to Mainnet. At the end of the call, the command prints the RPC URL you can use to interact with the L1 / Subnet. When deploying an L1, Avalanche-CLI lets you use your local machine as a bootstrap validator, so you don't need to run separate Avalanche nodes. This is controlled by the --use-local-machine flag (enabled by default on Local Network). If --use-local-machine is set to true: - Avalanche-CLI will call CreateSubnetTx, CreateChainTx, ConvertSubnetToL1Tx, followed by syncing the local machine bootstrap validator to the L1 and initialize Validator Manager Contract on the L1 If using your own Avalanche Nodes as bootstrap validators: - Avalanche-CLI will call CreateSubnetTx, CreateChainTx, ConvertSubnetToL1Tx - You will have to sync your bootstrap validators to the L1 - Next, Initialize Validator Manager contract on the L1 using avalanche contract initValidatorManager [L1_Name] Avalanche-CLI only supports deploying an individual Blockchain once per network. Subsequent attempts to deploy the same Blockchain to the same network (Local Network, Fuji, Mainnet) aren't allowed. 
If you'd like to redeploy a Blockchain locally for testing, you must first call avalanche network clean to reset all deployed chain state. Subsequent local deploys redeploy the chain with fresh state. You can deploy the same Blockchain to multiple networks, so you can take your locally tested Blockchain and deploy it on Fuji or Mainnet. **Usage:** ```bash avalanche blockchain deploy [subcommand] [flags] ``` **Flags:** ```bash --convert-only avoid node track, restart and poa manager setup -e, --ewoq use ewoq key [local/devnet deploy only] -h, --help help for deploy -k, --key string select the key to use [fuji/devnet deploy only] -g, --ledger use ledger instead of key --ledger-addrs strings use the given ledger addresses --mainnet-chain-id uint32 use different ChainID for mainnet deployment --output-tx-path string file path of the blockchain creation tx (for multi-sig signing) -u, --subnet-id string do not create a subnet, deploy the blockchain into the given subnet id --subnet-only command stops after CreateSubnetTx and returns SubnetID Network Flags (Select One): --cluster string operate on the given cluster --devnet operate on a devnet network --endpoint string use the given endpoint for network operations --fuji operate on fuji (alias to `testnet`) --local operate on a local network --mainnet operate on mainnet --testnet operate on testnet (alias to `fuji`) Bootstrap Validators Flags: --balance float64 set the AVAX balance of each bootstrap validator that will be used for continuous fee on P-Chain (setting balance=1 equals to 1 AVAX for each bootstrap validator) --bootstrap-endpoints stringSlice take validator node info from the given endpoints --bootstrap-filepath string JSON file path that provides details about bootstrap validators --change-owner-address string address that will receive change if node is no longer L1 validator --generate-node-id set to true to generate Node IDs for bootstrap validators when none are set up. 
Use these Node IDs to set up your Avalanche Nodes. --num-bootstrap-validators int number of bootstrap validators to set up in sovereign L1 validator) Local Machine Flags (Use Local Machine as Bootstrap Validator): --avalanchego-path string use this avalanchego binary path --avalanchego-version string use this version of avalanchego (ex: v1.17.12) --http-port uintSlice http port for node(s) --partial-sync set primary network partial sync for new validators --staking-cert-key-path stringSlice path to provided staking cert key for node(s) --staking-port uintSlice staking port for node(s) --staking-signer-key-path stringSlice path to provided staking signer key for node(s) --staking-tls-key-path stringSlice path to provided staking TLS key for node(s) --use-local-machine use local machine as a blockchain validator Local Network Flags: --avalanchego-path string use this avalanchego binary path --avalanchego-version string use this version of avalanchego (ex: v1.17.12) --num-nodes uint32 number of nodes to be created on local network deploy Non Subnet-Only-Validators (Non-SOV) Flags: --auth-keys stringSlice control keys that will be used to authenticate chain creation --control-keys stringSlice addresses that may make blockchain changes --same-control-key use the fee-paying key as control key --threshold uint32 required number of control key signatures to make blockchain changes ICM Flags: --cchain-funding-key string key to be used to fund relayer account on cchain --cchain-icm-key string key to be used to pay for ICM deploys on C-Chain --icm-key string key to be used to pay for ICM deploys --icm-version string ICM version to deploy --relay-cchain relay C-Chain as source and destination --relayer-allow-private-ips allow relayer to connec to private ips --relayer-amount float64 automatically fund relayer fee payments with the given amount --relayer-key string key to be used by default both for rewards and to pay fees --relayer-log-level string log level to be used for 
relayer logs --relayer-path string relayer binary to use --relayer-version string relayer version to deploy --skip-icm-deploy Skip automatic ICM deploy --skip-relayer skip relayer deploy --teleporter-messenger-contract-address-path string path to an ICM Messenger contract address file --teleporter-messenger-deployer-address-path string path to an ICM Messenger deployer address file --teleporter-messenger-deployer-tx-path string path to an ICM Messenger deployer tx file --teleporter-registry-bytecode-path string path to an ICM Registry bytecode file Proof Of Stake Flags: --pos-maximum-stake-amount uint64 maximum stake amount --pos-maximum-stake-multiplier uint8 maximum stake multiplier --pos-minimum-delegation-fee uint16 minimum delegation fee --pos-minimum-stake-amount uint64 minimum stake amount --pos-minimum-stake-duration uint64 minimum stake duration (in seconds) --pos-weight-to-value-factor uint64 weight to value factor Signature Aggregator Flags: --aggregator-log-level string log level to use with signature aggregator --aggregator-log-to-stdout use stdout for signature aggregator logs ``` ### describe The blockchain describe command prints the details of a Blockchain configuration to the console. By default, the command prints a summary of the configuration. By providing the --genesis flag, the command instead prints out the raw genesis file. **Usage:** ```bash avalanche blockchain describe [subcommand] [flags] ``` **Flags:** ```bash -g, --genesis Print the genesis to the console directly instead of the summary -h, --help help for describe --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### export The blockchain export command write the details of an existing Blockchain deploy to a file. The command prompts for an output path. You can also provide one with the --output flag. 
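A typical invocation might be the following; the blockchain name and output path are placeholders:

```bash
# Hypothetical example: write deploy details to a JSON file.
avalanche blockchain export myblockchain -o ./myblockchain-export.json
```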
**Usage:** ```bash avalanche blockchain export [subcommand] [flags] ``` **Flags:** ```bash --custom-vm-branch string custom vm branch --custom-vm-build-script string custom vm build-script --custom-vm-repo-url string custom vm repository url -h, --help help for export -o, --output string write the export data to the provided file path --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### import Import blockchain configurations into avalanche-cli. This command suite supports importing from a file created on another computer, or importing from blockchains running public networks (e.g. created manually or with the deprecated subnet-cli) **Usage:** ```bash avalanche blockchain import [subcommand] [flags] ``` **Subcommands:** - [`file`](#avalanche-blockchain-import-file): The blockchain import command will import a blockchain configuration from a file or a git repository. To import from a file, you can optionally provide the path as a command-line argument. Alternatively, running the command without any arguments triggers an interactive wizard. To import from a repository, go through the wizard. By default, an imported Blockchain doesn't overwrite an existing Blockchain with the same name. To allow overwrites, provide the --force flag. - [`public`](#avalanche-blockchain-import-public): The blockchain import public command imports a Blockchain configuration from a running network. By default, an imported Blockchain doesn't overwrite an existing Blockchain with the same name. To allow overwrites, provide the --force flag. 
**Flags:** ```bash -h, --help help for import --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### import file The blockchain import command will import a blockchain configuration from a file or a git repository. To import from a file, you can optionally provide the path as a command-line argument. Alternatively, running the command without any arguments triggers an interactive wizard. To import from a repository, go through the wizard. By default, an imported Blockchain doesn't overwrite an existing Blockchain with the same name. To allow overwrites, provide the --force flag. **Usage:** ```bash avalanche blockchain import file [subcommand] [flags] ``` **Flags:** ```bash --blockchain string the blockchain configuration to import from the provided repo --branch string the repo branch to use if downloading a new repo -f, --force overwrite the existing configuration if one exists -h, --help help for file --repo string the repo to import (ex: ava-labs/avalanche-plugins-core) or url to download the repo from --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### import public The blockchain import public command imports a Blockchain configuration from a running network. By default, an imported Blockchain doesn't overwrite an existing Blockchain with the same name. To allow overwrites, provide the --force flag. 
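As an illustration, a sketch of an invocation follows; the blockchain ID below is a placeholder, not a real chain:

```bash
# Hypothetical example: import a Blockchain configuration from Fuji by blockchain ID.
avalanche blockchain import public \
  --fuji \
  --blockchain-id 2b175hLJhGdj3CzgXENso9CmwMgejaCQXhMFzBsm8hXbH2MF7H
```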
**Usage:** ```bash avalanche blockchain import public [subcommand] [flags] ``` **Flags:** ```bash --blockchain-id string the blockchain ID --cluster string operate on the given cluster --custom use a custom VM template --devnet operate on a devnet network --endpoint string use the given endpoint for network operations --evm import a subnet-evm --force overwrite the existing configuration if one exists -f, --fuji testnet operate on fuji (alias to testnet -h, --help help for public -l, --local operate on a local network -m, --mainnet operate on mainnet --node-url string [optional] URL of an already running validator -t, --testnet fuji operate on testnet (alias to fuji) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### join The blockchain join command configures your validator node to begin validating a new Blockchain. To complete this process, you must have access to the machine running your validator. If the CLI is running on the same machine as your validator, it can generate or update your node's config file automatically. Alternatively, the command can print the necessary instructions to update your node manually. To complete the validation process, the Blockchain's admins must add the NodeID of your validator to the Blockchain's allow list by calling addValidator with your NodeID. After you update your validator's config, you need to restart your validator manually. If you provide the --avalanchego-config flag, this command attempts to edit the config file at that path. This command currently only supports Blockchains deployed on the Fuji Testnet and Mainnet. 
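As an illustration, a join invocation might look like this sketch; the blockchain name and file paths are placeholders for your own node's layout:

```bash
# Hypothetical example: configure a Fuji validator node to track the blockchain.
avalanche blockchain join myblockchain \
  --fuji \
  --avalanchego-config /etc/avalanchego/config.json \
  --plugin-dir /usr/local/lib/avalanchego/plugins
```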
**Usage:** ```bash avalanche blockchain join [subcommand] [flags] ``` **Flags:** ```bash --avalanchego-config string file path of the avalanchego config file --cluster string operate on the given cluster --data-dir string path of avalanchego's data directory --devnet operate on a devnet network --endpoint string use the given endpoint for network operations --force-write if true, skip the prompt to overwrite the config file -f, --fuji testnet operate on fuji (alias to testnet -h, --help help for join -k, --key string select the key to use [fuji only] -g, --ledger use ledger instead of key (always true on mainnet, defaults to false on fuji) --ledger-addrs strings use the given ledger addresses -l, --local operate on a local network -m, --mainnet operate on mainnet --node-id string set the NodeID of the validator to check --plugin-dir string file path of avalanchego's plugin directory --print if true, print the manual config without prompting --stake-amount uint amount of tokens to stake on validator --staking-period duration how long validator validates for after start time --start-time string start time that validator starts validating -t, --testnet fuji operate on testnet (alias to fuji) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### list The blockchain list command prints the names of all created Blockchain configurations. Without any flags, it prints some general, static information about the Blockchain. With the --deployed flag, the command shows additional information including the VMID, BlockchainID and SubnetID.
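For example, to include deployment details in the listing:

```bash
# Print all locally created Blockchain configurations, including the
# VMID, BlockchainID and SubnetID of deployed ones.
avalanche blockchain list --deployed
```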
**Usage:** ```bash avalanche blockchain list [subcommand] [flags] ``` **Flags:** ```bash --deployed show additional deploy information -h, --help help for list --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### publish The blockchain publish command publishes the Blockchain's VM to a repository. **Usage:** ```bash avalanche blockchain publish [subcommand] [flags] ``` **Flags:** ```bash --alias string We publish to a remote repo, but identify the repo locally under a user-provided alias (e.g. myrepo). --force If true, ignores if the blockchain has been published in the past, and attempts a forced publish. -h, --help help for publish --no-repo-path string Do not let the tool manage file publishing, but have it only generate the files and put them in the location given by this flag. --repo-url string The URL of the repo where we are publishing --subnet-file-path string Path to the Blockchain description file. If not given, a prompting sequence will be initiated. --vm-file-path string Path to the VM description file. If not given, a prompting sequence will be initiated. --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### removeValidator The blockchain removeValidator command stops a whitelisted blockchain network validator from validating your deployed Blockchain. To remove the validator from the Subnet's allow list, provide the validator's unique NodeID. You can bypass these prompts by providing the values with flags. 
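As an illustrative sketch (the NodeID below is a placeholder, not a real value):

```bash
# Illustrative only: remove a Fuji validator by NodeID without going
# through the interactive prompts.
avalanche blockchain removeValidator --fuji --node-id NodeID-<...>
```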
**Usage:** ```bash avalanche blockchain removeValidator [subcommand] [flags] ``` **Flags:** ```bash --aggregator-allow-private-peers allow the signature aggregator to connect to peers with private IP (default true) --aggregator-extra-endpoints strings endpoints for extra nodes that are needed in signature aggregation --aggregator-log-level string log level to use with signature aggregator (default "Debug") --aggregator-log-to-stdout use stdout for signature aggregator logs --auth-keys strings (for non-SOV blockchain only) control keys that will be used to authenticate the removeValidator tx --blockchain-genesis-key use genesis allocated key to pay fees for completing the validator's removal (blockchain gas token) --blockchain-key string CLI stored key to use to pay fees for completing the validator's removal (blockchain gas token) --blockchain-private-key string private key to use to pay fees for completing the validator's removal (blockchain gas token) --cluster string operate on the given cluster --devnet operate on a devnet network --endpoint string use the given endpoint for network operations --force force validator removal even if it's not getting rewarded -f, --fuji testnet operate on fuji (alias to testnet -h, --help help for removeValidator -k, --key string select the key to use [fuji deploy only] -g, --ledger use ledger instead of key (always true on mainnet, defaults to false on fuji) --ledger-addrs strings use the given ledger addresses -l, --local operate on a local network -m, --mainnet operate on mainnet --node-endpoint string remove validator that responds to the given endpoint --node-id string node-id of the validator --output-tx-path string (for non-SOV blockchain only) file path of the removeValidator tx --rpc string connect to validator manager at the given rpc endpoint -t, --testnet fuji operate on testnet (alias to fuji) --uptime uint validator's uptime in seconds. 
If not provided, it will be automatically calculated --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### stats The blockchain stats command prints validator statistics for the given Blockchain. **Usage:** ```bash avalanche blockchain stats [subcommand] [flags] ``` **Flags:** ```bash --cluster string operate on the given cluster --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet -h, --help help for stats -l, --local operate on a local network -m, --mainnet operate on mainnet -t, --testnet fuji operate on testnet (alias to fuji) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### upgrade The blockchain upgrade command suite provides a collection of tools for updating your developmental and deployed Blockchains. **Usage:** ```bash avalanche blockchain upgrade [subcommand] [flags] ``` **Subcommands:** - [`apply`](#avalanche-blockchain-upgrade-apply): Apply generated upgrade bytes to running Blockchain nodes to trigger a network upgrade. For public networks (Fuji Testnet or Mainnet), to complete this process, you must have access to the machine running your validator. If the CLI is running on the same machine as your validator, it can manipulate your node's configuration automatically. Alternatively, the command can print the necessary instructions to upgrade your node manually. After you update your validator's configuration, you need to restart your validator manually. If you provide the --avalanchego-chain-config-dir flag, this command attempts to write the upgrade file at that path. 
Refer to https://docs.avax.network/nodes/maintain/chain-config-flags#subnet-chain-configs for related documentation. - [`export`](#avalanche-blockchain-upgrade-export): Export the upgrade bytes file to a location of choice on disk - [`generate`](#avalanche-blockchain-upgrade-generate): The blockchain upgrade generate command builds a new upgrade.json file to customize your Blockchain. It guides the user through the process using an interactive wizard. - [`import`](#avalanche-blockchain-upgrade-import): Import the upgrade bytes file into the local environment - [`print`](#avalanche-blockchain-upgrade-print): Print the upgrade.json file content - [`vm`](#avalanche-blockchain-upgrade-vm): The blockchain upgrade vm command enables the user to upgrade their Blockchain's VM binary. The command can upgrade both local Blockchains and publicly deployed Blockchains on Fuji and Mainnet. The command walks the user through an interactive wizard. The user can skip the wizard by providing command line flags. **Flags:** ```bash -h, --help help for upgrade --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### upgrade apply Apply generated upgrade bytes to running Blockchain nodes to trigger a network upgrade. For public networks (Fuji Testnet or Mainnet), to complete this process, you must have access to the machine running your validator. If the CLI is running on the same machine as your validator, it can manipulate your node's configuration automatically. Alternatively, the command can print the necessary instructions to upgrade your node manually. After you update your validator's configuration, you need to restart your validator manually. If you provide the --avalanchego-chain-config-dir flag, this command attempts to write the upgrade file at that path. 
Refer to https://docs.avax.network/nodes/maintain/chain-config-flags#subnet-chain-configs for related documentation. **Usage:** ```bash avalanche blockchain upgrade apply [subcommand] [flags] ``` **Flags:** ```bash --avalanchego-chain-config-dir string avalanchego's chain config file directory (default "$HOME/.avalanchego/chains") --config create upgrade config for future subnet deployments (same as generate) --force If true, don't prompt for confirmation of timestamps in the past --fuji fuji apply upgrade existing fuji deployment (alias for `testnet`) -h, --help help for apply --local local apply upgrade existing local deployment --mainnet mainnet apply upgrade existing mainnet deployment --print if true, print the manual config without prompting (for public networks only) --testnet testnet apply upgrade existing testnet deployment (alias for `fuji`) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### upgrade export Export the upgrade bytes file to a location of choice on disk **Usage:** ```bash avalanche blockchain upgrade export [subcommand] [flags] ``` **Flags:** ```bash --force If true, overwrite a possibly existing file without prompting -h, --help help for export --upgrade-filepath string Export upgrade bytes file to location of choice on disk --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### upgrade generate The blockchain upgrade generate command builds a new upgrade.json file to customize your Blockchain. It guides the user through the process using an interactive wizard.
**Usage:** ```bash avalanche blockchain upgrade generate [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for generate --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### upgrade import Import the upgrade bytes file into the local environment **Usage:** ```bash avalanche blockchain upgrade import [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for import --upgrade-filepath string Import upgrade bytes file into local environment --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### upgrade print Print the upgrade.json file content **Usage:** ```bash avalanche blockchain upgrade print [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for print --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### upgrade vm The blockchain upgrade vm command enables the user to upgrade their Blockchain's VM binary. The command can upgrade both local Blockchains and publicly deployed Blockchains on Fuji and Mainnet. The command walks the user through an interactive wizard. The user can skip the wizard by providing command line flags. 
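For example, a minimal sketch that skips the wizard on a local deployment:

```bash
# Illustrative only: upgrade a local deployment's VM binary to the
# latest released version, bypassing the interactive wizard.
avalanche blockchain upgrade vm --local --latest
```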
**Usage:** ```bash avalanche blockchain upgrade vm [subcommand] [flags] ``` **Flags:** ```bash --binary string Upgrade to custom binary --config upgrade config for future subnet deployments --fuji fuji upgrade existing fuji deployment (alias for `testnet`) -h, --help help for vm --latest upgrade to latest version --local local upgrade existing local deployment --mainnet mainnet upgrade existing mainnet deployment --plugin-dir string plugin directory to automatically upgrade VM --print print instructions for upgrading --testnet testnet upgrade existing testnet deployment (alias for `fuji`) --version string Upgrade to custom version --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### validators The blockchain validators command lists the validators of a blockchain and provides several statistics about them. **Usage:** ```bash avalanche blockchain validators [subcommand] [flags] ``` **Flags:** ```bash --cluster string operate on the given cluster --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet -h, --help help for validators -l, --local operate on a local network -m, --mainnet operate on mainnet -t, --testnet fuji operate on testnet (alias to fuji) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### vmid The blockchain vmid command prints the virtual machine ID (VMID) for the given Blockchain. 
**Usage:** ```bash avalanche blockchain vmid [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for vmid --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ## avalanche config Customize configuration for Avalanche-CLI **Usage:** ```bash avalanche config [subcommand] [flags] ``` **Subcommands:** - [`authorize-cloud-access`](#avalanche-config-authorize-cloud-access): set preferences to authorize access to cloud resources - [`metrics`](#avalanche-config-metrics): set user metrics collection preferences - [`migrate`](#avalanche-config-migrate): migrate the old ~/.avalanche-cli.json and ~/.avalanche-cli/config to ~/.avalanche-cli/config.json - [`snapshotsAutoSave`](#avalanche-config-snapshotsautosave): set user preference for auto-saving local network snapshots - [`update`](#avalanche-config-update): set user preference for automatic update checks **Flags:** ```bash -h, --help help for config --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### authorize-cloud-access set preferences to authorize access to cloud resources **Usage:** ```bash avalanche config authorize-cloud-access [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for authorize-cloud-access --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### metrics set user metrics collection preferences **Usage:** ```bash avalanche config metrics [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for metrics --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR")
--skip-update-check skip check for new versions ``` ### migrate The migrate command migrates the old ~/.avalanche-cli.json and ~/.avalanche-cli/config to ~/.avalanche-cli/config.json. **Usage:** ```bash avalanche config migrate [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for migrate --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### snapshotsAutoSave set user preference for auto-saving local network snapshots **Usage:** ```bash avalanche config snapshotsAutoSave [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for snapshotsAutoSave --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### update set user preference for automatic update checks **Usage:** ```bash avalanche config update [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for update --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ## avalanche contract The contract command suite provides a collection of tools for deploying and interacting with smart contracts. **Usage:** ```bash avalanche contract [subcommand] [flags] ``` **Subcommands:** - [`deploy`](#avalanche-contract-deploy): The contract command suite provides a collection of tools for deploying smart contracts. - [`initValidatorManager`](#avalanche-contract-initvalidatormanager): Initializes a Proof of Authority (PoA) or Proof of Stake (PoS) Validator Manager contract on a Blockchain and sets up the initial validator set on the Blockchain.
For more info on Validator Manager, please head to https://github.com/ava-labs/icm-contracts/tree/main/contracts/validator-manager **Flags:** ```bash -h, --help help for contract --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### deploy The contract command suite provides a collection of tools for deploying smart contracts. **Usage:** ```bash avalanche contract deploy [subcommand] [flags] ``` **Subcommands:** - [`erc20`](#avalanche-contract-deploy-erc20): Deploy an ERC20 token into a given Network and Blockchain **Flags:** ```bash -h, --help help for deploy --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### deploy erc20 Deploy an ERC20 token into a given Network and Blockchain **Usage:** ```bash avalanche contract deploy erc20 [subcommand] [flags] ``` **Flags:** ```bash --blockchain string deploy the ERC20 contract into the given CLI blockchain --blockchain-id string deploy the ERC20 contract into the given blockchain ID/Alias --c-chain deploy the ERC20 contract into C-Chain --cluster string operate on the given cluster --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet --funded string set the funded address --genesis-key use genesis allocated key as contract deployer -h, --help help for erc20 --key string CLI stored key to use as contract deployer -l, --local operate on a local network -m, --mainnet operate on mainnet --private-key string private key to use as contract deployer --rpc string deploy the contract into the given rpc endpoint --supply uint set the token supply --symbol string set the token symbol -t, --testnet fuji operate on testnet (alias to fuji) --config 
string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### initValidatorManager Initializes a Proof of Authority (PoA) or Proof of Stake (PoS) Validator Manager contract on a Blockchain and sets up the initial validator set on the Blockchain. For more info on Validator Manager, please head to https://github.com/ava-labs/icm-contracts/tree/main/contracts/validator-manager **Usage:** ```bash avalanche contract initValidatorManager [subcommand] [flags] ``` **Flags:** ```bash --aggregator-allow-private-peers allow the signature aggregator to connect to peers with private IP (default true) --aggregator-extra-endpoints strings endpoints for extra nodes that are needed in signature aggregation --aggregator-log-level string log level to use with signature aggregator (default "Debug") --aggregator-log-to-stdout dump signature aggregator logs to stdout --cluster string operate on the given cluster --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet --genesis-key use genesis allocated key as contract deployer -h, --help help for initValidatorManager --key string CLI stored key to use as contract deployer -l, --local operate on a local network -m, --mainnet operate on mainnet --pos-maximum-stake-amount uint (PoS only) maximum stake amount (default 1000) --pos-maximum-stake-multiplier uint8 (PoS only) maximum stake multiplier (default 1) --pos-minimum-delegation-fee uint16 (PoS only) minimum delegation fee (default 1) --pos-minimum-stake-amount uint (PoS only) minimum stake amount (default 1) --pos-minimum-stake-duration uint (PoS only) minimum stake duration (in seconds) (default 100) --pos-reward-calculator-address string (PoS only) initialize the ValidatorManager with reward calculator address --pos-weight-to-value-factor uint (PoS only) weight to value
factor (default 1) --private-key string private key to use as contract deployer --rpc string deploy the contract into the given rpc endpoint -t, --testnet fuji operate on testnet (alias to fuji) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ## avalanche help Help provides help for any command in the application. Simply type avalanche help [path to command] for full details. **Usage:** ```bash avalanche help [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for help --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ## avalanche icm The messenger command suite provides a collection of tools for interacting with ICM messenger contracts. **Usage:** ```bash avalanche icm [subcommand] [flags] ``` **Subcommands:** - [`deploy`](#avalanche-icm-deploy): Deploys ICM Messenger and Registry into a given L1. - [`sendMsg`](#avalanche-icm-sendmsg): Sends an ICM message between two blockchains and waits for its reception. **Flags:** ```bash -h, --help help for icm --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### deploy Deploys ICM Messenger and Registry into a given L1. For Local Networks, it also deploys into C-Chain.
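As an illustrative sketch (the blockchain name below is a placeholder):

```bash
# Illustrative only: deploy the ICM Messenger and Registry to a
# CLI-managed blockchain on a local network.
avalanche icm deploy --local --blockchain myblockchain
```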
**Usage:** ```bash avalanche icm deploy [subcommand] [flags] ``` **Flags:** ```bash --blockchain string deploy ICM into the given CLI blockchain --blockchain-id string deploy ICM into the given blockchain ID/Alias --c-chain deploy ICM into C-Chain --cchain-key string key to be used to pay fees to deploy ICM to C-Chain --cluster string operate on the given cluster --deploy-messenger deploy ICM Messenger (default true) --deploy-registry deploy ICM Registry (default true) --devnet operate on a devnet network --endpoint string use the given endpoint for network operations --force-registry-deploy deploy ICM Registry even if Messenger has already been deployed -f, --fuji testnet operate on fuji (alias to testnet --genesis-key use genesis allocated key to fund ICM deploy -h, --help help for deploy --include-cchain deploy ICM also to C-Chain --key string CLI stored key to use to fund ICM deploy -l, --local operate on a local network -m, --mainnet operate on mainnet --messenger-contract-address-path string path to a messenger contract address file --messenger-deployer-address-path string path to a messenger deployer address file --messenger-deployer-tx-path string path to a messenger deployer tx file --private-key string private key to use to fund ICM deploy --registry-bytecode-path string path to a registry bytecode file --rpc-url string use the given RPC URL to connect to the subnet -t, --testnet fuji operate on testnet (alias to fuji) --version string version to deploy (default "latest") --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### sendMsg Sends an ICM message between two blockchains and waits for its reception.
**Usage:** ```bash avalanche icm sendMsg [subcommand] [flags] ``` **Flags:** ```bash --cluster string operate on the given cluster --dest-rpc string use the given destination blockchain rpc endpoint --destination-address string deliver the message to the given contract destination address --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet --genesis-key use genesis allocated key as message originator and to pay source blockchain fees -h, --help help for sendMsg --hex-encoded given message is hex encoded --key string CLI stored key to use as message originator and to pay source blockchain fees -l, --local operate on a local network -m, --mainnet operate on mainnet --private-key string private key to use as message originator and to pay source blockchain fees --source-rpc string use the given source blockchain rpc endpoint -t, --testnet fuji operate on testnet (alias to fuji) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ## avalanche ictt The ictt command suite provides tools to deploy and manage Interchain Token Transferrers. 
**Usage:** ```bash avalanche ictt [subcommand] [flags] ``` **Subcommands:** - [`deploy`](#avalanche-ictt-deploy): Deploys a Token Transferrer into a given Network and Subnets **Flags:** ```bash -h, --help help for ictt --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### deploy Deploys a Token Transferrer into a given Network and Subnets **Usage:** ```bash avalanche ictt deploy [subcommand] [flags] ``` **Flags:** ```bash --c-chain-home set the Transferrer's Home Chain into C-Chain --c-chain-remote set the Transferrer's Remote Chain into C-Chain --cluster string operate on the given cluster --deploy-erc20-home string deploy a Transferrer Home for the given Chain's ERC20 Token --deploy-native-home deploy a Transferrer Home for the Chain's Native Token --deploy-native-remote deploy a Transferrer Remote for the Chain's Native Token --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet -h, --help help for deploy --home-blockchain string set the Transferrer's Home Chain into the given CLI blockchain --home-genesis-key use genesis allocated key to deploy Transferrer Home --home-key string CLI stored key to use to deploy Transferrer Home --home-private-key string private key to use to deploy Transferrer Home --home-rpc string use the given RPC URL to connect to the home blockchain -l, --local operate on a local network -m, --mainnet operate on mainnet --remote-blockchain string set the Transferrer's Remote Chain into the given CLI blockchain --remote-genesis-key use genesis allocated key to deploy Transferrer Remote --remote-key string CLI stored key to use to deploy Transferrer Remote --remote-private-key string private key to use to deploy Transferrer Remote --remote-rpc string use the given RPC URL to connect to the remote 
blockchain --remote-token-decimals uint8 use the given number of token decimals for the Transferrer Remote [defaults to token home's decimals (18 for a new wrapped native home token)] --remove-minter-admin remove the native minter precompile admin found on remote blockchain genesis -t, --testnet fuji operate on testnet (alias to fuji) --use-home string use the given Transferrer's Home Address --version string tag/branch/commit of Avalanche Interchain Token Transfer (ICTT) to be used (defaults to main branch) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ## avalanche interchain The interchain command suite provides a collection of tools to set up and manage interoperability between blockchains. **Usage:** ```bash avalanche interchain [subcommand] [flags] ``` **Subcommands:** - [`messenger`](#avalanche-interchain-messenger): The messenger command suite provides a collection of tools for interacting with ICM messenger contracts. - [`relayer`](#avalanche-interchain-relayer): The relayer command suite provides a collection of tools for deploying and configuring ICM relayers. - [`tokenTransferrer`](#avalanche-interchain-tokentransferrer): The tokenTransferrer command suite provides tools to deploy and manage Token Transferrers. **Flags:** ```bash -h, --help help for interchain --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### messenger The messenger command suite provides a collection of tools for interacting with ICM messenger contracts. **Usage:** ```bash avalanche interchain messenger [subcommand] [flags] ``` **Subcommands:** - [`deploy`](#avalanche-interchain-messenger-deploy): Deploys ICM Messenger and Registry into a given L1.
- [`sendMsg`](#avalanche-interchain-messenger-sendmsg): Sends an ICM message between two blockchains and waits for its reception. **Flags:** ```bash -h, --help help for messenger --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### messenger deploy Deploys ICM Messenger and Registry into a given L1. For Local Networks, it also deploys into C-Chain. **Usage:** ```bash avalanche interchain messenger deploy [subcommand] [flags] ``` **Flags:** ```bash --blockchain string deploy ICM into the given CLI blockchain --blockchain-id string deploy ICM into the given blockchain ID/Alias --c-chain deploy ICM into C-Chain --cchain-key string key to be used to pay fees to deploy ICM to C-Chain --cluster string operate on the given cluster --deploy-messenger deploy ICM Messenger (default true) --deploy-registry deploy ICM Registry (default true) --devnet operate on a devnet network --endpoint string use the given endpoint for network operations --force-registry-deploy deploy ICM Registry even if Messenger has already been deployed -f, --fuji testnet operate on fuji (alias to testnet --genesis-key use genesis allocated key to fund ICM deploy -h, --help help for deploy --include-cchain deploy ICM also to C-Chain --key string CLI stored key to use to fund ICM deploy -l, --local operate on a local network -m, --mainnet operate on mainnet --messenger-contract-address-path string path to a messenger contract address file --messenger-deployer-address-path string path to a messenger deployer address file --messenger-deployer-tx-path string path to a messenger deployer tx file --private-key string private key to use to fund ICM deploy --registry-bytecode-path string path to a registry bytecode file --rpc-url string use the given RPC URL to connect to the subnet -t, --testnet fuji operate on testnet (alias to fuji) --version string version to deploy (default
"latest") --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### messenger sendMsg Sends an ICM message between two blockchains and waits for its reception. **Usage:** ```bash avalanche interchain messenger sendMsg [subcommand] [flags] ``` **Flags:** ```bash --cluster string operate on the given cluster --dest-rpc string use the given destination blockchain rpc endpoint --destination-address string deliver the message to the given contract destination address --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet) --genesis-key use genesis allocated key as message originator and to pay source blockchain fees -h, --help help for sendMsg --hex-encoded given message is hex encoded --key string CLI stored key to use as message originator and to pay source blockchain fees -l, --local operate on a local network -m, --mainnet operate on mainnet --private-key string private key to use as message originator and to pay source blockchain fees --source-rpc string use the given source blockchain rpc endpoint -t, --testnet fuji operate on testnet (alias to fuji) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### relayer The relayer command suite provides a collection of tools for deploying and configuring ICM relayers. **Usage:** ```bash avalanche interchain relayer [subcommand] [flags] ``` **Subcommands:** - [`deploy`](#avalanche-interchain-relayer-deploy): Deploys an ICM Relayer for the given Network.
- [`logs`](#avalanche-interchain-relayer-logs): Shows pretty formatted AWM relayer logs - [`start`](#avalanche-interchain-relayer-start): Starts AWM relayer on the specified network (Currently only for local network). - [`stop`](#avalanche-interchain-relayer-stop): Stops AWM relayer on the specified network (Currently only for local network, cluster). **Flags:** ```bash -h, --help help for relayer --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### relayer deploy Deploys an ICM Relayer for the given Network. **Usage:** ```bash avalanche interchain relayer deploy [subcommand] [flags] ``` **Flags:** ```bash --allow-private-ips allow relayer to connect to private ips (default true) --amount float automatically fund l1s fee payments with the given amount --bin-path string use the given relayer binary --blockchain-funding-key string key to be used to fund relayer account on all l1s --blockchains strings blockchains to relay as source and destination --cchain relay C-Chain as source and destination --cchain-amount float automatically fund cchain fee payments with the given amount --cchain-funding-key string key to be used to fund relayer account on cchain --cluster string operate on the given cluster --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet) -h, --help help for deploy --key string key to be used by default both for rewards and to pay fees -l, --local operate on a local network --log-level string log level to use for relayer logs -t, --testnet fuji operate on testnet (alias to fuji) --version string version to deploy (default "latest-prerelease") --config string config file (default is $HOME/.avalanche-cli/config.json) --skip-update-check skip check for new versions ``` #### relayer logs Shows pretty formatted AWM
relayer logs **Usage:** ```bash avalanche interchain relayer logs [subcommand] [flags] ``` **Flags:** ```bash --endpoint string use the given endpoint for network operations --first uint output first N log lines -f, --fuji testnet operate on fuji (alias to testnet) -h, --help help for logs --last uint output last N log lines -l, --local operate on a local network --raw raw logs output -t, --testnet fuji operate on testnet (alias to fuji) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### relayer start Starts AWM relayer on the specified network (Currently only for local network). **Usage:** ```bash avalanche interchain relayer start [subcommand] [flags] ``` **Flags:** ```bash --bin-path string use the given relayer binary --cluster string operate on the given cluster --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet) -h, --help help for start -l, --local operate on a local network -t, --testnet fuji operate on testnet (alias to fuji) --version string version to use (default "latest-prerelease") --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### relayer stop Stops AWM relayer on the specified network (Currently only for local network, cluster).
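Used alongside relayer deploy and relayer logs, stopping a local relayer fits into a session like the following sketch (flags are those documented in this section; the `--last 50` value is just an illustration):

```bash
# Deploy an ICM relayer on the local network.
avalanche interchain relayer deploy --local

# Inspect the most recent relayer activity.
avalanche interchain relayer logs --last 50

# Shut the relayer down when finished.
avalanche interchain relayer stop --local
```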
**Usage:** ```bash avalanche interchain relayer stop [subcommand] [flags] ``` **Flags:** ```bash --cluster string operate on the given cluster --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet) -h, --help help for stop -l, --local operate on a local network -t, --testnet fuji operate on testnet (alias to fuji) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### tokenTransferrer The tokenTransferrer command suite provides tools to deploy and manage Token Transferrers. **Usage:** ```bash avalanche interchain tokenTransferrer [subcommand] [flags] ``` **Subcommands:** - [`deploy`](#avalanche-interchain-tokentransferrer-deploy): Deploys a Token Transferrer into a given Network and Subnets. **Flags:** ```bash -h, --help help for tokenTransferrer --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` #### tokenTransferrer deploy Deploys a Token Transferrer into a given Network and Subnets. **Usage:** ```bash avalanche interchain tokenTransferrer deploy [subcommand] [flags] ``` **Flags:** ```bash --c-chain-home set the Transferrer's Home Chain into C-Chain --c-chain-remote set the Transferrer's Remote Chain into C-Chain --cluster string operate on the given cluster --deploy-erc20-home string deploy a Transferrer Home for the given Chain's ERC20 Token --deploy-native-home deploy a Transferrer Home for the Chain's Native Token --deploy-native-remote deploy a Transferrer Remote for the Chain's Native Token --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet) -h, --help help for deploy --home-blockchain string set the Transferrer's Home
Chain into the given CLI blockchain --home-genesis-key use genesis allocated key to deploy Transferrer Home --home-key string CLI stored key to use to deploy Transferrer Home --home-private-key string private key to use to deploy Transferrer Home --home-rpc string use the given RPC URL to connect to the home blockchain -l, --local operate on a local network -m, --mainnet operate on mainnet --remote-blockchain string set the Transferrer's Remote Chain into the given CLI blockchain --remote-genesis-key use genesis allocated key to deploy Transferrer Remote --remote-key string CLI stored key to use to deploy Transferrer Remote --remote-private-key string private key to use to deploy Transferrer Remote --remote-rpc string use the given RPC URL to connect to the remote blockchain --remote-token-decimals uint8 use the given number of token decimals for the Transferrer Remote [defaults to token home's decimals (18 for a new wrapped native home token)] --remove-minter-admin remove the native minter precompile admin found on remote blockchain genesis -t, --testnet fuji operate on testnet (alias to fuji) --use-home string use the given Transferrer's Home Address --version string tag/branch/commit of Avalanche Interchain Token Transfer (ICTT) to be used (defaults to main branch) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ## avalanche key The key command suite provides a collection of tools for creating and managing signing keys. You can use these keys to deploy Subnets to the Fuji Testnet, but these keys are NOT suitable to use in production environments. DO NOT use these keys on Mainnet. To get started, use the key create command. 
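As a quick illustration, a test-key workflow might look like the following sketch (the key name `mytestkey` and the output file name are hypothetical; the flags are those documented in the subcommands below):

```bash
# Create a throwaway test key (NOT safe for Mainnet).
avalanche key create mytestkey

# Show its addresses and balances on Fuji.
avalanche key list --fuji --keys mytestkey

# Export the hex-encoded private key to a file.
avalanche key export mytestkey -o mytestkey.hex
```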
**Usage:** ```bash avalanche key [subcommand] [flags] ``` **Subcommands:** - [`create`](#avalanche-key-create): The key create command generates a new private key to use for creating and controlling test Subnets. Keys generated by this command are NOT cryptographically secure enough to use in production environments. DO NOT use these keys on Mainnet. The command works by generating a secp256k1 key and storing it with the provided keyName. You can use this key in other commands by providing this keyName. If you'd like to import an existing key instead of generating one from scratch, provide the --file flag. - [`delete`](#avalanche-key-delete): The key delete command deletes an existing signing key. To delete a key, provide the keyName. The command prompts for confirmation before deleting the key. To skip the confirmation, provide the --force flag. - [`export`](#avalanche-key-export): The key export command exports a created signing key. You can use an exported key in other applications or import it into another instance of Avalanche-CLI. By default, the tool writes the hex encoded key to stdout. If you provide the --output flag, the command writes the key to a file of your choosing. - [`list`](#avalanche-key-list): The key list command prints information for all stored signing keys or for the ledger addresses associated with certain indices. - [`transfer`](#avalanche-key-transfer): The key transfer command allows you to transfer funds between stored keys or ledger addresses. **Flags:** ```bash -h, --help help for key --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### create The key create command generates a new private key to use for creating and controlling test Subnets. Keys generated by this command are NOT cryptographically secure enough to use in production environments. DO NOT use these keys on Mainnet.
The command works by generating a secp256k1 key and storing it with the provided keyName. You can use this key in other commands by providing this keyName. If you'd like to import an existing key instead of generating one from scratch, provide the --file flag. **Usage:** ```bash avalanche key create [subcommand] [flags] ``` **Flags:** ```bash --file string import the key from an existing key file -f, --force overwrite an existing key with the same name -h, --help help for create --skip-balances do not query public network balances for an imported key --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### delete The key delete command deletes an existing signing key. To delete a key, provide the keyName. The command prompts for confirmation before deleting the key. To skip the confirmation, provide the --force flag. **Usage:** ```bash avalanche key delete [subcommand] [flags] ``` **Flags:** ```bash -f, --force delete the key without confirmation -h, --help help for delete --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### export The key export command exports a created signing key. You can use an exported key in other applications or import it into another instance of Avalanche-CLI. By default, the tool writes the hex encoded key to stdout. If you provide the --output flag, the command writes the key to a file of your choosing.
**Usage:** ```bash avalanche key export [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for export -o, --output string write the key to the provided file path --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### list The key list command prints information for all stored signing keys or for the ledger addresses associated with certain indices. **Usage:** ```bash avalanche key list [subcommand] [flags] ``` **Flags:** ```bash -a, --all-networks list all network addresses --blockchains strings blockchains to show information about (p=p-chain, x=x-chain, c=c-chain, and blockchain names) (default p,x,c) -c, --cchain list C-Chain addresses (default true) --cluster string operate on the given cluster --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet) -h, --help help for list --keys strings list addresses for the given keys -g, --ledger uints list ledger addresses for the given indices (default []) -l, --local operate on a local network -m, --mainnet operate on mainnet --pchain list P-Chain addresses (default true) --subnets strings subnets to show information about (p=p-chain, x=x-chain, c=c-chain, and blockchain names) (default p,x,c) -t, --testnet fuji operate on testnet (alias to fuji) --tokens strings provide balance information for the given token contract addresses (Evm only) (default [Native]) --use-gwei use gwei for EVM balances -n, --use-nano-avax use nano Avax for balances --xchain list X-Chain addresses (default true) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### transfer The key transfer command allows you to transfer funds between stored keys or ledger addresses.
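For example, a P-Chain transfer on Fuji from a stored key could be sketched as follows (the key name, amount, and destination address are placeholders; the flags are those documented below):

```bash
# Hypothetical: send 1 AVAX on the Fuji P-Chain from the stored key "mytestkey".
avalanche key transfer --fuji \
  --p-chain-sender --p-chain-receiver \
  -k mytestkey \
  -o 1 \
  -a P-fuji1...
```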
**Usage:** ```bash avalanche key transfer [subcommand] [flags] ``` **Flags:** ```bash -o, --amount float amount to send or receive (AVAX or TOKEN units) --c-chain-receiver receive at C-Chain --c-chain-sender send from C-Chain --cluster string operate on the given cluster -a, --destination-addr string destination address --destination-key string key associated to a destination address --destination-subnet string subnet where the funds will be sent (token transferrer experimental) --destination-transferrer-address string token transferrer address at the destination subnet (token transferrer experimental) --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet) -h, --help help for transfer -k, --key string key associated to the sender or receiver address -i, --ledger uint32 ledger index associated to the sender or receiver address (default 32768) -l, --local operate on a local network -m, --mainnet operate on mainnet --origin-subnet string subnet where the funds belong (token transferrer experimental) --origin-transferrer-address string token transferrer address at the origin subnet (token transferrer experimental) --p-chain-receiver receive at P-Chain --p-chain-sender send from P-Chain --receiver-blockchain string receive at the given CLI blockchain --receiver-blockchain-id string receive at the given blockchain ID/Alias --sender-blockchain string send from the given CLI blockchain --sender-blockchain-id string send from the given blockchain ID/Alias -t, --testnet fuji operate on testnet (alias to fuji) --x-chain-receiver receive at X-Chain --x-chain-sender send from X-Chain --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ## avalanche network The network command suite provides a collection of tools for managing local Blockchain
deployments. When you deploy a Blockchain locally, it runs on a local, multi-node Avalanche network. The blockchain deploy command starts this network in the background. This command suite allows you to shut down, restart, and clear that network. This network currently supports multiple, concurrently deployed Blockchains. **Usage:** ```bash avalanche network [subcommand] [flags] ``` **Subcommands:** - [`clean`](#avalanche-network-clean): The network clean command shuts down your local, multi-node network. All deployed Subnets shut down and delete their state. You can restart the network by deploying a new Subnet configuration. - [`start`](#avalanche-network-start): The network start command starts a local, multi-node Avalanche network on your machine. By default, the command loads the default snapshot. If you provide the --snapshot-name flag, the network loads that snapshot instead. The command fails if the local network is already running. - [`status`](#avalanche-network-status): The network status command prints whether or not a local Avalanche network is running and some basic stats about the network. - [`stop`](#avalanche-network-stop): The network stop command shuts down your local, multi-node network. All deployed Subnets shut down gracefully and save their state. If you provide the --snapshot-name flag, the network saves its state under this named snapshot. You can reload this snapshot with network start --snapshot-name `snapshotName`. Otherwise, the network saves to the default snapshot, overwriting any existing state. You can reload the default snapshot with network start. **Flags:** ```bash -h, --help help for network --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### clean The network clean command shuts down your local, multi-node network. All deployed Subnets shut down and delete their state.
You can restart the network by deploying a new Subnet configuration. **Usage:** ```bash avalanche network clean [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for clean --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### start The network start command starts a local, multi-node Avalanche network on your machine. By default, the command loads the default snapshot. If you provide the --snapshot-name flag, the network loads that snapshot instead. The command fails if the local network is already running. **Usage:** ```bash avalanche network start [subcommand] [flags] ``` **Flags:** ```bash --avalanchego-path string use this avalanchego binary path --avalanchego-version string use this version of avalanchego (ex: v1.17.12) (default "latest-prerelease") -h, --help help for start --num-nodes uint32 number of nodes to be created on local network (default 2) --relayer-path string use this relayer binary path --relayer-version string use this relayer version (default "latest-prerelease") --snapshot-name string name of snapshot to use to start the network from (default "default") --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### status The network status command prints whether or not a local Avalanche network is running and some basic stats about the network. **Usage:** ```bash avalanche network status [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for status --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### stop The network stop command shuts down your local, multi-node network. 
All deployed Subnets shut down gracefully and save their state. If you provide the --snapshot-name flag, the network saves its state under this named snapshot. You can reload this snapshot with network start --snapshot-name `snapshotName`. Otherwise, the network saves to the default snapshot, overwriting any existing state. You can reload the default snapshot with network start. **Usage:** ```bash avalanche network stop [subcommand] [flags] ``` **Flags:** ```bash --dont-save do not save snapshot, just stop the network -h, --help help for stop --snapshot-name string name of snapshot to use to save network state into (default "default") --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ## avalanche node The node command suite provides a collection of tools for creating and maintaining validators on the Avalanche Network. To get started, use the node create command wizard to walk through the configuration to make your node a primary validator on the Avalanche public network. You can use the rest of the commands to maintain your node and make your node a Subnet Validator. **Usage:** ```bash avalanche node [subcommand] [flags] ``` **Subcommands:** - [`addDashboard`](#avalanche-node-adddashboard): (ALPHA Warning) This command is currently in experimental mode. The node addDashboard command adds a custom dashboard to the Grafana monitoring dashboard for the cluster. - [`create`](#avalanche-node-create): (ALPHA Warning) This command is currently in experimental mode. The node create command sets up a validator on a cloud server of your choice. The validator will be validating the Avalanche Primary Network and Subnet of your choice. By default, the command runs an interactive wizard. It walks you through all the steps you need to set up a validator.
Once this command is completed, you will have to wait for the validator to finish bootstrapping on the primary network before running further commands on it, e.g. validating a Subnet. You can check the bootstrapping status by running avalanche node status. The created node will be part of a group of validators called `clusterName`, and users can call node commands with `clusterName` so that the command will apply to all nodes in the cluster - [`destroy`](#avalanche-node-destroy): (ALPHA Warning) This command is currently in experimental mode. The node destroy command terminates all running nodes in the cloud server and deletes all storage disks. If there is a static IP address attached, it will be released. - [`devnet`](#avalanche-node-devnet): (ALPHA Warning) This command is currently in experimental mode. The node devnet command suite provides a collection of commands related to devnets. You can check the updated status by calling avalanche node status `clusterName` - [`export`](#avalanche-node-export): (ALPHA Warning) This command is currently in experimental mode. The node export command exports cluster configuration and its nodes config to a text file. If no file is specified, the configuration is printed to stdout. Use --include-secrets to include keys in the export. In this case, please keep the file secure, as it contains sensitive information. Exported cluster configuration without secrets can be imported by another user using the node import command. - [`import`](#avalanche-node-import): (ALPHA Warning) This command is currently in experimental mode. The node import command imports cluster configuration and its nodes configuration from a text file created from the node export command. Prior to calling this command, call the node whitelist command to have your SSH public key and IP whitelisted by the cluster owner. This will enable you to use avalanche-cli commands to manage the imported cluster.
Please note that this imported cluster will be considered EXTERNAL by avalanche-cli, so some commands affecting cloud nodes, like node create or node destroy, will not be applicable to it. - [`list`](#avalanche-node-list): (ALPHA Warning) This command is currently in experimental mode. The node list command lists all clusters together with their nodes. - [`loadtest`](#avalanche-node-loadtest): (ALPHA Warning) This command is currently in experimental mode. The node loadtest command suite starts and stops a load test for an existing devnet cluster. - [`local`](#avalanche-node-local): The node local command suite provides a collection of commands related to local nodes. - [`refresh-ips`](#avalanche-node-refresh-ips): (ALPHA Warning) This command is currently in experimental mode. The node refresh-ips command obtains the current IP for all nodes with dynamic IPs in the cluster, and updates the local node information used by CLI commands. - [`resize`](#avalanche-node-resize): (ALPHA Warning) This command is currently in experimental mode. The node resize command can change the amount of CPU, memory and disk space available for the cluster nodes. - [`scp`](#avalanche-node-scp): (ALPHA Warning) This command is currently in experimental mode. The node scp command securely copies files to and from nodes. A remote source or destination can be specified using the following format: [clusterName|nodeID|instanceID|IP]:/path/to/file. Regular expressions are supported for the source files like /tmp/*.txt. File transfers to the nodes are parallelized. If the source or destination is a cluster, the other should be a local file path. If both destinations are remote, they must be nodes for the same cluster and not clusters themselves.
For example: $ avalanche node scp [cluster1|node1]:/tmp/file.txt /tmp/file.txt $ avalanche node scp /tmp/file.txt [cluster1|NodeID-XXXX]:/tmp/file.txt $ avalanche node scp node1:/tmp/file.txt NodeID-XXXX:/tmp/file.txt - [`ssh`](#avalanche-node-ssh): (ALPHA Warning) This command is currently in experimental mode. The node ssh command executes a given command [cmd] using ssh on all nodes in the cluster if ClusterName is given. If no command is given, it just prints the ssh command to be used to connect to each node in the cluster. For a provided NodeID or InstanceID or IP, the command [cmd] will be executed on that node. If no [cmd] is provided for the node, it will open an ssh shell there. - [`status`](#avalanche-node-status): (ALPHA Warning) This command is currently in experimental mode. The node status command gets the bootstrap status of all nodes in a cluster with the Primary Network. If no cluster is given, defaults to node list behaviour. To get the bootstrap status of a node with a Blockchain, use the --blockchain flag - [`sync`](#avalanche-node-sync): (ALPHA Warning) This command is currently in experimental mode. The node sync command enables all nodes in a cluster to be bootstrapped to a Blockchain. You can check the blockchain bootstrap status by calling avalanche node status `clusterName` --blockchain `blockchainName` - [`update`](#avalanche-node-update): (ALPHA Warning) This command is currently in experimental mode. The node update command suite provides a collection of commands for nodes to update their avalanchego or VM config. You can check the status after update by calling avalanche node status - [`upgrade`](#avalanche-node-upgrade): (ALPHA Warning) This command is currently in experimental mode. The node upgrade command suite provides a collection of commands for nodes to update their avalanchego or VM version.
You can check the status after upgrade by calling avalanche node status - [`validate`](#avalanche-node-validate): (ALPHA Warning) This command is currently in experimental mode. The node validate command suite provides a collection of commands for nodes to join the Primary Network and Subnets as validators. If any of the commands is run before the nodes are bootstrapped on the Primary Network, the command will fail. You can check the bootstrap status by calling avalanche node status `clusterName` - [`whitelist`](#avalanche-node-whitelist): (ALPHA Warning) The whitelist command suite provides a collection of tools for granting access to the cluster. The command adds an IP to the cloud security access rules if the --ip param is provided, allowing it to access all nodes in the cluster via ssh or http. It also adds an SSH public key to all nodes in the cluster if the --ssh param is provided. If no params are provided, it detects the current user's IP automatically and whitelists it. **Flags:** ```bash -h, --help help for node --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### addDashboard (ALPHA Warning) This command is currently in experimental mode. The node addDashboard command adds a custom dashboard to the Grafana monitoring dashboard for the cluster. **Usage:** ```bash avalanche node addDashboard [subcommand] [flags] ``` **Flags:** ```bash --add-grafana-dashboard string path to additional grafana dashboard json file -h, --help help for addDashboard --subnet string subnet that the dashboard is intended for (if any) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### create (ALPHA Warning) This command is currently in experimental mode. The node create command sets up a validator on a cloud server of your choice.
The validator will be validating the Avalanche Primary Network and Subnet of your choice. By default, the command runs an interactive wizard. It walks you through all the steps you need to set up a validator. Once this command is completed, you will have to wait for the validator to finish bootstrapping on the primary network before running further commands on it, e.g. validating a Subnet. You can check the bootstrapping status by running avalanche node status. The created node will be part of a group of validators called `clusterName`, and users can call node commands with `clusterName` so that the command will apply to all nodes in the cluster. **Usage:** ```bash avalanche node create [subcommand] [flags] ``` **Flags:** ```bash --add-grafana-dashboard string path to additional grafana dashboard json file --alternative-key-pair-name string key pair name to use if default one generates conflicts --authorize-access authorize CLI to create cloud resources --auto-replace-keypair automatically replaces key pair to access node if previous key pair is not found --avalanchego-version-from-subnet string install latest avalanchego version, that is compatible with the given subnet, on node/s --aws create node/s in AWS cloud --aws-profile string aws profile to use (default "default") --aws-volume-iops int AWS iops (for gp3, io1, and io2 volume types only) (default 3000) --aws-volume-size int AWS volume size in GB (default 1000) --aws-volume-throughput int AWS throughput in MiB/s (for gp3 volume type only) (default 125) --aws-volume-type string AWS volume type (default "gp3") --bootstrap-ids stringArray nodeIDs of bootstrap nodes --bootstrap-ips stringArray IP:port pairs of bootstrap nodes --cluster string operate on the given cluster --custom-avalanchego-version string install given avalanchego version on node/s --devnet operate on a devnet network --enable-monitoring set up Prometheus monitoring for created nodes.
This option creates a separate monitoring cloud instance and incurs additional cost
--endpoint string   use the given endpoint for network operations
-f, --fuji testnet   operate on fuji (alias to testnet)
--gcp   create node/s in GCP cloud
--gcp-credentials string   use given GCP credentials
--gcp-project string   use given GCP project
--genesis string   path to genesis file
--grafana-pkg string   use grafana pkg instead of apt repo(by default), for example https://dl.grafana.com/oss/release/grafana_10.4.1_amd64.deb
-h, --help   help for create
--latest-avalanchego-pre-release-version   install latest avalanchego pre-release version on node/s
--latest-avalanchego-version   install latest avalanchego release version on node/s
-m, --mainnet   operate on mainnet
--node-type string   cloud instance type. Use 'default' to use recommended default instance type
--num-apis ints   number of API nodes(nodes without stake) to create in the new Devnet
--num-validators ints   number of nodes to create per region(s). Use comma to separate multiple numbers for each region in the same order as --region flag
--partial-sync   primary network partial sync (default true)
--public-http-port   allow public access to avalanchego HTTP port
--region strings   create node(s) in given region(s). Use comma to separate multiple regions
--ssh-agent-identity string   use given ssh identity(only for ssh agent). If not set, default will be used
-t, --testnet fuji   operate on testnet (alias to fuji)
--upgrade string   path to upgrade file
--use-ssh-agent   use ssh agent(ex: Yubikey) for ssh auth
--use-static-ip   attach static Public IP on cloud servers (default true)
--config string   config file (default is $HOME/.avalanche-cli/config.json)
--log-level string   log level for the application (default "ERROR")
--skip-update-check   skip check for new versions
```

### destroy

(ALPHA Warning) This command is currently in experimental mode.

The node destroy command terminates all running nodes in the cloud server and deletes all storage disks.
If there is a static IP address attached, it will be released.

**Usage:**

```bash
avalanche node destroy [subcommand] [flags]
```

**Flags:**

```bash
--all   destroy all existing clusters created by Avalanche CLI
--authorize-access   authorize CLI to release cloud resources
-y, --authorize-all   authorize all CLI requests
--authorize-remove   authorize CLI to remove all local files related to cloud nodes
--aws-profile string   aws profile to use (default "default")
-h, --help   help for destroy
--config string   config file (default is $HOME/.avalanche-cli/config.json)
--log-level string   log level for the application (default "ERROR")
--skip-update-check   skip check for new versions
```

### devnet

(ALPHA Warning) This command is currently in experimental mode.

The node devnet command suite provides a collection of commands related to devnets. You can check the updated status by calling avalanche node status `clusterName`

**Usage:**

```bash
avalanche node devnet [subcommand] [flags]
```

**Subcommands:**

- [`deploy`](#avalanche-node-devnet-deploy): (ALPHA Warning) This command is currently in experimental mode. The node devnet deploy command deploys a subnet into a devnet cluster, creating subnet and blockchain txs for it. It saves the deploy info both locally and remotely.
- [`wiz`](#avalanche-node-devnet-wiz): (ALPHA Warning) This command is currently in experimental mode. The node wiz command creates a devnet and deploys, syncs, and validates a subnet into it. It creates the subnet if needed.

**Flags:**

```bash
-h, --help   help for devnet
--config string   config file (default is $HOME/.avalanche-cli/config.json)
--log-level string   log level for the application (default "ERROR")
--skip-update-check   skip check for new versions
```

#### devnet deploy

(ALPHA Warning) This command is currently in experimental mode.

The node devnet deploy command deploys a subnet into a devnet cluster, creating subnet and blockchain txs for it. It saves the deploy info both locally and remotely.
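For instance, a hypothetical invocation (the cluster and subnet names are placeholders; passing them as positional arguments mirrors how other node commands are called with `clusterName`, but consult `avalanche node devnet deploy --help` for the exact signature):

```bash
# Hypothetical: deploy a previously created subnet "mySubnet" into the devnet
# cluster "myCluster", skipping node health/RPC compatibility checks.
# Names and argument order are illustrative, not taken from this reference.
avalanche node devnet deploy myCluster mySubnet --no-checks
```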
**Usage:**

```bash
avalanche node devnet deploy [subcommand] [flags]
```

**Flags:**

```bash
-h, --help   help for deploy
--no-checks   do not check for healthy status or rpc compatibility of nodes against subnet
--subnet-aliases strings   additional subnet aliases to be used for RPC calls in addition to subnet blockchain name
--subnet-only   only create a subnet
--config string   config file (default is $HOME/.avalanche-cli/config.json)
--log-level string   log level for the application (default "ERROR")
--skip-update-check   skip check for new versions
```

#### devnet wiz

(ALPHA Warning) This command is currently in experimental mode.

The node wiz command creates a devnet and deploys, syncs, and validates a subnet into it. It creates the subnet if needed.

**Usage:**

```bash
avalanche node devnet wiz [subcommand] [flags]
```

**Flags:**

```bash
--add-grafana-dashboard string   path to additional grafana dashboard json file
--alternative-key-pair-name string   key pair name to use if default one generates conflicts
--authorize-access   authorize CLI to create cloud resources
--auto-replace-keypair   automatically replaces key pair to access node if previous key pair is not found
--aws   create node/s in AWS cloud
--aws-profile string   aws profile to use (default "default")
--aws-volume-iops int   AWS iops (for gp3, io1, and io2 volume types only) (default 3000)
--aws-volume-size int   AWS volume size in GB (default 1000)
--aws-volume-throughput int   AWS throughput in MiB/s (for gp3 volume type only) (default 125)
--aws-volume-type string   AWS volume type (default "gp3")
--chain-config string   path to the chain configuration for subnet
--custom-avalanchego-version string   install given avalanchego version on node/s
--custom-subnet   use a custom VM as the subnet virtual machine
--custom-vm-branch string   custom vm branch or commit
--custom-vm-build-script string   custom vm build-script
--custom-vm-repo-url string   custom vm repository url
--default-validator-params   use default weight/start/duration params
for subnet validator
--deploy-icm-messenger   deploy Interchain Messenger (default true)
--deploy-icm-registry   deploy Interchain Registry (default true)
--deploy-teleporter-messenger   deploy Interchain Messenger (default true)
--deploy-teleporter-registry   deploy Interchain Registry (default true)
--enable-monitoring   set up Prometheus monitoring for created nodes. Please note that this option creates a separate monitoring instance and incurs additional cost
--evm-chain-id uint   chain ID to use with Subnet-EVM
--evm-defaults   use default production settings with Subnet-EVM
--evm-production-defaults   use default production settings for your blockchain
--evm-subnet   use Subnet-EVM as the subnet virtual machine
--evm-test-defaults   use default test settings for your blockchain
--evm-token string   token name to use with Subnet-EVM
--evm-version string   version of Subnet-EVM to use
--force-subnet-create   overwrite the existing subnet configuration if one exists
--gcp   create node/s in GCP cloud
--gcp-credentials string   use given GCP credentials
--gcp-project string   use given GCP project
--grafana-pkg string   use grafana pkg instead of apt repo(by default), for example https://dl.grafana.com/oss/release/grafana_10.4.1_amd64.deb
-h, --help   help for wiz
--icm   generate an icm-ready vm
--icm-messenger-contract-address-path string   path to an icm messenger contract address file
--icm-messenger-deployer-address-path string   path to an icm messenger deployer address file
--icm-messenger-deployer-tx-path string   path to an icm messenger deployer tx file
--icm-registry-bytecode-path string   path to an icm registry bytecode file
--icm-version string   icm version to deploy (default "latest")
--latest-avalanchego-pre-release-version   install latest avalanchego pre-release version on node/s
--latest-avalanchego-version   install latest avalanchego release version on node/s
--latest-evm-version   use latest Subnet-EVM released version
--latest-pre-released-evm-version   use latest Subnet-EVM pre-released
version
--node-config string   path to avalanchego node configuration for subnet
--node-type string   cloud instance type. Use 'default' to use recommended default instance type
--num-apis ints   number of API nodes(nodes without stake) to create in the new Devnet
--num-validators ints   number of nodes to create per region(s). Use comma to separate multiple numbers for each region in the same order as --region flag
--public-http-port   allow public access to avalanchego HTTP port
--region strings   create node/s in given region(s). Use comma to separate multiple regions
--relayer   run AWM relayer when deploying the vm
--ssh-agent-identity string   use given ssh identity(only for ssh agent). If not set, default will be used.
--subnet-aliases strings   additional subnet aliases to be used for RPC calls in addition to subnet blockchain name
--subnet-config string   path to the subnet configuration for subnet
--subnet-genesis string   file path of the subnet genesis
--teleporter   generate an icm-ready vm
--teleporter-messenger-contract-address-path string   path to an icm messenger contract address file
--teleporter-messenger-deployer-address-path string   path to an icm messenger deployer address file
--teleporter-messenger-deployer-tx-path string   path to an icm messenger deployer tx file
--teleporter-registry-bytecode-path string   path to an icm registry bytecode file
--teleporter-version string   icm version to deploy (default "latest")
--use-ssh-agent   use ssh agent for ssh
--use-static-ip   attach static Public IP on cloud servers (default true)
--validators strings   deploy subnet into given comma separated list of validators. defaults to all cluster nodes
--config string   config file (default is $HOME/.avalanche-cli/config.json)
--log-level string   log level for the application (default "ERROR")
--skip-update-check   skip check for new versions
```

### export

(ALPHA Warning) This command is currently in experimental mode.
The node export command exports cluster configuration and its nodes config to a text file.

If no file is specified, the configuration is printed to stdout.

Use --include-secrets to include keys in the export. In this case, please keep the file secure, as it contains sensitive information.

Exported cluster configuration without secrets can be imported by another user using the node import command.

**Usage:**

```bash
avalanche node export [subcommand] [flags]
```

**Flags:**

```bash
--file string   specify the file to export the cluster configuration to
--force   overwrite the file if it exists
-h, --help   help for export
--include-secrets   include keys in the export
--config string   config file (default is $HOME/.avalanche-cli/config.json)
--log-level string   log level for the application (default "ERROR")
--skip-update-check   skip check for new versions
```

### import

(ALPHA Warning) This command is currently in experimental mode.

The node import command imports cluster configuration and its nodes configuration from a text file created by the node export command.

Prior to calling this command, call the node whitelist command to have your SSH public key and IP whitelisted by the cluster owner. This will enable you to use avalanche-cli commands to manage the imported cluster.

Please note that this imported cluster will be considered EXTERNAL by avalanche-cli, so some commands affecting cloud nodes, like node create or node destroy, will not be applicable to it.

**Usage:**

```bash
avalanche node import [subcommand] [flags]
```

**Flags:**

```bash
--file string   specify the file to import the cluster configuration from
-h, --help   help for import
--config string   config file (default is $HOME/.avalanche-cli/config.json)
--log-level string   log level for the application (default "ERROR")
--skip-update-check   skip check for new versions
```

### list

(ALPHA Warning) This command is currently in experimental mode.

The node list command lists all clusters together with their nodes.
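For example, listing takes no arguments:

```bash
# Print every cluster managed by this CLI installation, with its nodes
avalanche node list
```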
**Usage:**

```bash
avalanche node list [subcommand] [flags]
```

**Flags:**

```bash
-h, --help   help for list
--config string   config file (default is $HOME/.avalanche-cli/config.json)
--log-level string   log level for the application (default "ERROR")
--skip-update-check   skip check for new versions
```

### loadtest

(ALPHA Warning) This command is currently in experimental mode.

The node loadtest command suite starts and stops a load test for an existing devnet cluster.

**Usage:**

```bash
avalanche node loadtest [subcommand] [flags]
```

**Subcommands:**

- [`start`](#avalanche-node-loadtest-start): (ALPHA Warning) This command is currently in experimental mode. The node loadtest command starts load testing for an existing devnet cluster. If the cluster does not have an existing load test host, the command creates a separate cloud server and builds the load test binary based on the provided load test Git Repo URL and load test binary build command. The command will then run the load test binary based on the provided load test run command.
- [`stop`](#avalanche-node-loadtest-stop): (ALPHA Warning) This command is currently in experimental mode. The node loadtest stop command stops load testing for an existing devnet cluster and terminates the separate cloud server created to host the load test.

**Flags:**

```bash
-h, --help   help for loadtest
--config string   config file (default is $HOME/.avalanche-cli/config.json)
--log-level string   log level for the application (default "ERROR")
--skip-update-check   skip check for new versions
```

#### loadtest start

(ALPHA Warning) This command is currently in experimental mode.

The node loadtest command starts load testing for an existing devnet cluster. If the cluster does not have an existing load test host, the command creates a separate cloud server and builds the load test binary based on the provided load test Git Repo URL and load test binary build command.
The command will then run the load test binary based on the provided load test run command.

**Usage:**

```bash
avalanche node loadtest start [subcommand] [flags]
```

**Flags:**

```bash
--authorize-access   authorize CLI to create cloud resources
--aws   create loadtest node in AWS cloud
--aws-profile string   aws profile to use (default "default")
--gcp   create loadtest in GCP cloud
-h, --help   help for start
--load-test-branch string   load test branch or commit
--load-test-build-cmd string   command to build load test binary
--load-test-cmd string   command to run load test
--load-test-repo string   load test repo url to use
--node-type string   cloud instance type for loadtest script
--region string   create load test node in a given region
--ssh-agent-identity string   use given ssh identity(only for ssh agent). If not set, default will be used
--use-ssh-agent   use ssh agent(ex: Yubikey) for ssh auth
--config string   config file (default is $HOME/.avalanche-cli/config.json)
--log-level string   log level for the application (default "ERROR")
--skip-update-check   skip check for new versions
```

#### loadtest stop

(ALPHA Warning) This command is currently in experimental mode.

The node loadtest stop command stops load testing for an existing devnet cluster and terminates the separate cloud server created to host the load test.

**Usage:**

```bash
avalanche node loadtest stop [subcommand] [flags]
```

**Flags:**

```bash
-h, --help   help for stop
--load-test strings   stop specified load test node(s). Use comma to separate multiple load test instance names
--config string   config file (default is $HOME/.avalanche-cli/config.json)
--log-level string   log level for the application (default "ERROR")
--skip-update-check   skip check for new versions
```

### local

The node local command suite provides a collection of commands related to local nodes.

**Usage:**

```bash
avalanche node local [subcommand] [flags]
```

**Subcommands:**

- [`destroy`](#avalanche-node-local-destroy): Cleanup local node.
- [`start`](#avalanche-node-local-start): The node local start command creates Avalanche nodes on the local machine. Once this command is completed, you will have to wait for the Avalanche node to finish bootstrapping on the primary network before running further commands on it, e.g. validating a Subnet. You can check the bootstrapping status by running avalanche node status local.
- [`status`](#avalanche-node-local-status): Get status of local node.
- [`stop`](#avalanche-node-local-stop): Stop local node.
- [`track`](#avalanche-node-local-track): Track specified blockchain with local node
- [`validate`](#avalanche-node-local-validate): Use an Avalanche node set up on the local machine to validate a specified L1 by providing the RPC URL of the L1. This command can only be used to validate Proof of Stake L1s.

**Flags:**

```bash
-h, --help   help for local
--config string   config file (default is $HOME/.avalanche-cli/config.json)
--log-level string   log level for the application (default "ERROR")
--skip-update-check   skip check for new versions
```

#### local destroy

Cleanup local node.

**Usage:**

```bash
avalanche node local destroy [subcommand] [flags]
```

**Flags:**

```bash
-h, --help   help for destroy
--config string   config file (default is $HOME/.avalanche-cli/config.json)
--log-level string   log level for the application (default "ERROR")
--skip-update-check   skip check for new versions
```

#### local start

The node local start command creates Avalanche nodes on the local machine. Once this command is completed, you will have to wait for the Avalanche node to finish bootstrapping on the primary network before running further commands on it, e.g. validating a Subnet.

You can check the bootstrapping status by running avalanche node status local.
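For example, a hypothetical invocation (the node group name `myLocalNodes` is a placeholder; the flags used are documented below):

```bash
# Hypothetical: create two local nodes running the latest AvalancheGo release
avalanche node local start myLocalNodes --num-nodes 2 --latest-avalanchego-version
```

Once the nodes are up, `avalanche node status local` reports bootstrap progress.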
**Usage:**

```bash
avalanche node local start [subcommand] [flags]
```

**Flags:**

```bash
--avalanchego-path string   use this avalanchego binary path
--bootstrap-id stringArray   nodeIDs of bootstrap nodes
--bootstrap-ip stringArray   IP:port pairs of bootstrap nodes
--cluster string   operate on the given cluster
--custom-avalanchego-version string   install given avalanchego version on node/s
--devnet   operate on a devnet network
--endpoint string   use the given endpoint for network operations
-f, --fuji testnet   operate on fuji (alias to testnet)
--genesis string   path to genesis file
-h, --help   help for start
--latest-avalanchego-pre-release-version   install latest avalanchego pre-release version on node/s (default true)
--latest-avalanchego-version   install latest avalanchego release version on node/s
-l, --local   operate on a local network
-m, --mainnet   operate on mainnet
--node-config string   path to common avalanchego config settings for all nodes
--num-nodes uint32   number of Avalanche nodes to create on local machine (default 1)
--partial-sync   primary network partial sync (default true)
--staking-cert-key-path string   path to provided staking cert key for node
--staking-signer-key-path string   path to provided staking signer key for node
--staking-tls-key-path string   path to provided staking tls key for node
-t, --testnet fuji   operate on testnet (alias to fuji)
--upgrade string   path to upgrade file
--config string   config file (default is $HOME/.avalanche-cli/config.json)
--log-level string   log level for the application (default "ERROR")
--skip-update-check   skip check for new versions
```

#### local status

Get status of local node.
**Usage:**

```bash
avalanche node local status [subcommand] [flags]
```

**Flags:**

```bash
--blockchain string   specify the blockchain the node is syncing with
-h, --help   help for status
--l1 string   specify the blockchain the node is syncing with
--config string   config file (default is $HOME/.avalanche-cli/config.json)
--log-level string   log level for the application (default "ERROR")
--skip-update-check   skip check for new versions
```

#### local stop

Stop local node.

**Usage:**

```bash
avalanche node local stop [subcommand] [flags]
```

**Flags:**

```bash
-h, --help   help for stop
--config string   config file (default is $HOME/.avalanche-cli/config.json)
--log-level string   log level for the application (default "ERROR")
--skip-update-check   skip check for new versions
```

#### local track

Track specified blockchain with local node

**Usage:**

```bash
avalanche node local track [subcommand] [flags]
```

**Flags:**

```bash
--avalanchego-path string   use this avalanchego binary path
--custom-avalanchego-version string   install given avalanchego version on node/s
-h, --help   help for track
--latest-avalanchego-pre-release-version   install latest avalanchego pre-release version on node/s (default true)
--latest-avalanchego-version   install latest avalanchego release version on node/s
--config string   config file (default is $HOME/.avalanche-cli/config.json)
--log-level string   log level for the application (default "ERROR")
--skip-update-check   skip check for new versions
```

#### local validate

Use an Avalanche node set up on the local machine to validate a specified L1 by providing the RPC URL of the L1. This command can only be used to validate Proof of Stake L1s.
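A hypothetical invocation (the L1 name, RPC endpoint, and amounts are all placeholders; the flags used are documented below):

```bash
# Hypothetical: register the local node as a validator of a Proof of Stake L1,
# reaching its validator manager at the given RPC endpoint
avalanche node local validate \
  --l1 myL1 \
  --rpc https://myl1.example.com/rpc \
  --stake-amount 100 \
  --balance 1.5
```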
**Usage:**

```bash
avalanche node local validate [subcommand] [flags]
```

**Flags:**

```bash
--aggregator-log-level string   log level to use with signature aggregator (default "Debug")
--aggregator-log-to-stdout   use stdout for signature aggregator logs
--balance float   amount of AVAX to increase validator's balance by
--blockchain string   specify the blockchain the node is syncing with
--delegation-fee uint16   delegation fee (in bips) (default 100)
--disable-owner string   P-Chain address that will be able to disable the validator with a P-Chain transaction
-h, --help   help for validate
--l1 string   specify the blockchain the node is syncing with
--minimum-stake-duration uint   minimum stake duration (in seconds) (default 100)
--remaining-balance-owner string   P-Chain address that will receive any leftover AVAX from the validator when it is removed from the Subnet
--rpc string   connect to validator manager at the given rpc endpoint
--stake-amount uint   amount of tokens to stake
--config string   config file (default is $HOME/.avalanche-cli/config.json)
--log-level string   log level for the application (default "ERROR")
--skip-update-check   skip check for new versions
```

### refresh-ips

(ALPHA Warning) This command is currently in experimental mode.

The node refresh-ips command obtains the current IP for all nodes with dynamic IPs in the cluster, and updates the local node information used by CLI commands.

**Usage:**

```bash
avalanche node refresh-ips [subcommand] [flags]
```

**Flags:**

```bash
--aws-profile string   aws profile to use (default "default")
-h, --help   help for refresh-ips
--config string   config file (default is $HOME/.avalanche-cli/config.json)
--log-level string   log level for the application (default "ERROR")
--skip-update-check   skip check for new versions
```

### resize

(ALPHA Warning) This command is currently in experimental mode.

The node resize command can change the amount of CPU, memory, and disk space available for the cluster nodes.
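For example, a hypothetical invocation (the cluster name is a placeholder; node commands take `clusterName` as described under node create, and the instance type and disk size follow the formats shown in the flags below):

```bash
# Hypothetical: move the cluster's nodes to t3.2xlarge instances
# and grow their disks to 1000Gb
avalanche node resize myCluster --node-type t3.2xlarge --disk-size 1000Gb
```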
**Usage:**

```bash
avalanche node resize [subcommand] [flags]
```

**Flags:**

```bash
--aws-profile string   aws profile to use (default "default")
--disk-size string   Disk size to resize in Gb (e.g. 1000Gb)
-h, --help   help for resize
--node-type string   Node type to resize (e.g. t3.2xlarge)
--config string   config file (default is $HOME/.avalanche-cli/config.json)
--log-level string   log level for the application (default "ERROR")
--skip-update-check   skip check for new versions
```

### scp

(ALPHA Warning) This command is currently in experimental mode.

The node scp command securely copies files to and from nodes. A remote source or destination can be specified using the following format: [clusterName|nodeID|instanceID|IP]:/path/to/file. Regular expressions are supported for the source files, like /tmp/*.txt. File transfers to the nodes are parallelized. If the source or destination is a cluster, the other should be a local file path. If both destinations are remote, they must be nodes in the same cluster and not clusters themselves.

For example:

$ avalanche node scp [cluster1|node1]:/tmp/file.txt /tmp/file.txt
$ avalanche node scp /tmp/file.txt [cluster1|NodeID-XXXX]:/tmp/file.txt
$ avalanche node scp node1:/tmp/file.txt NodeID-XXXX:/tmp/file.txt

**Usage:**

```bash
avalanche node scp [subcommand] [flags]
```

**Flags:**

```bash
--compress   use compression for ssh
-h, --help   help for scp
--recursive   copy directories recursively
--with-loadtest   include loadtest node for scp cluster operations
--with-monitor   include monitoring node for scp cluster operations
--config string   config file (default is $HOME/.avalanche-cli/config.json)
--log-level string   log level for the application (default "ERROR")
--skip-update-check   skip check for new versions
```

### ssh

(ALPHA Warning) This command is currently in experimental mode.

The node ssh command executes a given command [cmd] using ssh on all nodes in the cluster if ClusterName is given.
If no command is given, the command just prints the ssh command to be used to connect to each node in the cluster.

For a provided NodeID, InstanceID, or IP, the command [cmd] will be executed on that node. If no [cmd] is provided for the node, it will open an ssh shell there.

**Usage:**

```bash
avalanche node ssh [subcommand] [flags]
```

**Flags:**

```bash
-h, --help   help for ssh
--parallel   run ssh command on all nodes in parallel
--with-loadtest   include loadtest node for ssh cluster operations
--with-monitor   include monitoring node for ssh cluster operations
--config string   config file (default is $HOME/.avalanche-cli/config.json)
--log-level string   log level for the application (default "ERROR")
--skip-update-check   skip check for new versions
```

### status

(ALPHA Warning) This command is currently in experimental mode.

The node status command gets the bootstrap status of all nodes in a cluster with the Primary Network. If no cluster is given, it defaults to the node list behaviour.

To get the bootstrap status of a node with a Blockchain, use the --blockchain flag

**Usage:**

```bash
avalanche node status [subcommand] [flags]
```

**Flags:**

```bash
--blockchain string   specify the blockchain the node is syncing with
-h, --help   help for status
--subnet string   specify the blockchain the node is syncing with
--config string   config file (default is $HOME/.avalanche-cli/config.json)
--log-level string   log level for the application (default "ERROR")
--skip-update-check   skip check for new versions
```

### sync

(ALPHA Warning) This command is currently in experimental mode.

The node sync command enables all nodes in a cluster to be bootstrapped to a Blockchain.
You can check the blockchain bootstrap status by calling avalanche node status `clusterName` --blockchain `blockchainName`

**Usage:**

```bash
avalanche node sync [subcommand] [flags]
```

**Flags:**

```bash
-h, --help   help for sync
--no-checks   do not check for bootstrapped/healthy status or rpc compatibility of nodes against subnet
--subnet-aliases strings   subnet alias to be used for RPC calls. defaults to subnet blockchain ID
--validators strings   sync subnet into given comma separated list of validators. defaults to all cluster nodes
--config string   config file (default is $HOME/.avalanche-cli/config.json)
--log-level string   log level for the application (default "ERROR")
--skip-update-check   skip check for new versions
```

### update

(ALPHA Warning) This command is currently in experimental mode.

The node update command suite provides a collection of commands for nodes to update their avalanchego or VM config.

You can check the status after update by calling avalanche node status

**Usage:**

```bash
avalanche node update [subcommand] [flags]
```

**Subcommands:**

- [`subnet`](#avalanche-node-update-subnet): (ALPHA Warning) This command is currently in experimental mode. The node update subnet command updates all nodes in a cluster with the latest Subnet configuration and VM for custom VMs. You can check the updated subnet bootstrap status by calling avalanche node status `clusterName` --subnet `subnetName`

**Flags:**

```bash
-h, --help   help for update
--config string   config file (default is $HOME/.avalanche-cli/config.json)
--log-level string   log level for the application (default "ERROR")
--skip-update-check   skip check for new versions
```

#### update subnet

(ALPHA Warning) This command is currently in experimental mode.

The node update subnet command updates all nodes in a cluster with the latest Subnet configuration and VM for custom VMs.
You can check the updated subnet bootstrap status by calling avalanche node status `clusterName` --subnet `subnetName`

**Usage:**

```bash
avalanche node update subnet [subcommand] [flags]
```

**Flags:**

```bash
-h, --help   help for subnet
--config string   config file (default is $HOME/.avalanche-cli/config.json)
--log-level string   log level for the application (default "ERROR")
--skip-update-check   skip check for new versions
```

### upgrade

(ALPHA Warning) This command is currently in experimental mode.

The node upgrade command suite provides a collection of commands for nodes to upgrade their avalanchego or VM version.

You can check the status after upgrade by calling avalanche node status

**Usage:**

```bash
avalanche node upgrade [subcommand] [flags]
```

**Flags:**

```bash
-h, --help   help for upgrade
--config string   config file (default is $HOME/.avalanche-cli/config.json)
--log-level string   log level for the application (default "ERROR")
--skip-update-check   skip check for new versions
```

### validate

(ALPHA Warning) This command is currently in experimental mode.

The node validate command suite provides a collection of commands for nodes to join the Primary Network and Subnets as validators. If any of these commands are run before the nodes are bootstrapped on the Primary Network, the command will fail.

You can check the bootstrap status by calling avalanche node status `clusterName`

**Usage:**

```bash
avalanche node validate [subcommand] [flags]
```

**Subcommands:**

- [`primary`](#avalanche-node-validate-primary): (ALPHA Warning) This command is currently in experimental mode. The node validate primary command enables all nodes in a cluster to be validators of the Primary Network.
- [`subnet`](#avalanche-node-validate-subnet): (ALPHA Warning) This command is currently in experimental mode. The node validate subnet command enables all nodes in a cluster to be validators of a Subnet.
If the command is run before the nodes are Primary Network validators, the command will first make the nodes Primary Network validators before making them Subnet validators. If the command is run before the nodes are bootstrapped on the Primary Network, the command will fail. You can check the bootstrap status by calling avalanche node status `clusterName`. If the command is run before the nodes are synced to the subnet, the command will fail. You can check the subnet sync status by calling avalanche node status `clusterName` --subnet `subnetName`

**Flags:**

```bash
-h, --help   help for validate
--config string   config file (default is $HOME/.avalanche-cli/config.json)
--log-level string   log level for the application (default "ERROR")
--skip-update-check   skip check for new versions
```

#### validate primary

(ALPHA Warning) This command is currently in experimental mode.

The node validate primary command enables all nodes in a cluster to be validators of the Primary Network.

**Usage:**

```bash
avalanche node validate primary [subcommand] [flags]
```

**Flags:**

```bash
-e, --ewoq   use ewoq key [fuji/devnet only]
-h, --help   help for primary
-k, --key string   select the key to use [fuji only]
-g, --ledger   use ledger instead of key (always true on mainnet, defaults to false on fuji/devnet)
--ledger-addrs strings   use the given ledger addresses
--stake-amount uint   how many AVAX to stake in the validator
--staking-period duration   how long validator validates for after start time
--start-time string   UTC start time when this validator starts validating, in 'YYYY-MM-DD HH:MM:SS' format
--config string   config file (default is $HOME/.avalanche-cli/config.json)
--log-level string   log level for the application (default "ERROR")
--skip-update-check   skip check for new versions
```

#### validate subnet

(ALPHA Warning) This command is currently in experimental mode.

The node validate subnet command enables all nodes in a cluster to be validators of a Subnet.
If the command is run before the nodes are Primary Network validators, the command will first make the nodes Primary Network validators before making them Subnet validators. If the command is run before the nodes are bootstrapped on the Primary Network, the command will fail. You can check the bootstrap status by calling avalanche node status `clusterName`. If the command is run before the nodes are synced to the subnet, the command will fail. You can check the subnet sync status by calling avalanche node status `clusterName` --subnet `subnetName` **Usage:** ```bash avalanche node validate subnet [subcommand] [flags] ``` **Flags:** ```bash --default-validator-params use default weight/start/duration params for subnet validator -e, --ewoq use ewoq key [fuji/devnet only] -h, --help help for subnet -k, --key string select the key to use [fuji/devnet only] -g, --ledger use ledger instead of key (always true on mainnet, defaults to false on fuji/devnet) --ledger-addrs strings use the given ledger addresses --no-checks do not check for bootstrapped status or healthy status --no-validation-checks do not check if subnet is already synced or validated (default true) --stake-amount uint how many AVAX to stake in the validator --staking-period duration how long validator validates for after start time --start-time string UTC start time when this validator starts validating, in 'YYYY-MM-DD HH:MM:SS' format --validators strings validate subnet for the given comma separated list of validators. defaults to all cluster nodes --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### whitelist (ALPHA Warning) The whitelist command suite provides a collection of tools for granting access to the cluster. If the --ip param is provided, the command adds the IP to the cloud security access rules, allowing that address to access all nodes in the cluster via SSH or HTTP.
If the --ssh param is provided, the command also adds the SSH public key to all nodes in the cluster. If no params are provided, the command detects the current user's IP automatically and whitelists it. **Usage:** ```bash avalanche node whitelist [subcommand] [flags] ``` **Flags:** ```bash -y, --current-ip whitelist current host ip -h, --help help for whitelist --ip string ip address to whitelist --ssh string ssh public key to whitelist --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ## avalanche primary The primary command suite provides a collection of tools for interacting with the Primary Network. **Usage:** ```bash avalanche primary [subcommand] [flags] ``` **Subcommands:** - [`addValidator`](#avalanche-primary-addvalidator): The primary addValidator command adds a node as a validator in the Primary Network. - [`describe`](#avalanche-primary-describe): The primary describe command prints details of the Primary Network configuration to the console.
**Flags:** ```bash -h, --help help for primary --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### addValidator The primary addValidator command adds a node as a validator in the Primary Network. **Usage:** ```bash avalanche primary addValidator [subcommand] [flags] ``` **Flags:** ```bash --cluster string operate on the given cluster --delegation-fee uint32 set the delegation fee (20 000 is equivalent to 2%) --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet) -h, --help help for addValidator -k, --key string select the key to use [fuji only] -g, --ledger use ledger instead of key (always true on mainnet, defaults to false on fuji) --ledger-addrs strings use the given ledger addresses -m, --mainnet operate on mainnet --nodeID string set the NodeID of the validator to add --proof-of-possession string set the BLS proof of possession of the validator to add --public-key string set the BLS public key of the validator to add --staking-period duration how long this validator will be staking --start-time string UTC start time when this validator starts validating, in 'YYYY-MM-DD HH:MM:SS' format -t, --testnet fuji operate on testnet (alias to fuji) --weight uint set the staking weight of the validator to add --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### describe The primary describe command prints details of the Primary Network configuration to the console.
**Usage:** ```bash avalanche primary describe [subcommand] [flags] ``` **Flags:** ```bash --cluster string operate on the given cluster -h, --help help for describe -l, --local operate on a local network --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ## avalanche transaction The transaction command suite provides all of the utilities required to sign multisig transactions. **Usage:** ```bash avalanche transaction [subcommand] [flags] ``` **Subcommands:** - [`commit`](#avalanche-transaction-commit): The transaction commit command commits a transaction by submitting it to the P-Chain. - [`sign`](#avalanche-transaction-sign): The transaction sign command signs a multisig transaction. **Flags:** ```bash -h, --help help for transaction --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### commit The transaction commit command commits a transaction by submitting it to the P-Chain. **Usage:** ```bash avalanche transaction commit [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for commit --input-tx-filepath string Path to the transaction signed by all signatories --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### sign The transaction sign command signs a multisig transaction. 
**Usage:** ```bash avalanche transaction sign [subcommand] [flags] ``` **Flags:** ```bash -h, --help help for sign --input-tx-filepath string Path to the transaction file for signing -k, --key string select the key to use [fuji only] -g, --ledger use ledger instead of key (always true on mainnet, defaults to false on fuji) --ledger-addrs strings use the given ledger addresses --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ## avalanche update Check if an update is available, and prompt the user to install it. **Usage:** ```bash avalanche update [subcommand] [flags] ``` **Flags:** ```bash -c, --confirm Assume yes for installation -h, --help help for update -v, --version version for update --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ## avalanche validator The validator command suite provides a collection of tools for managing validator balances on the P-Chain. A validator's balance is used to pay the continuous fee to the P-Chain.
When this balance reaches 0, the validator will be considered inactive and will no longer participate in validating the L1. **Usage:** ```bash avalanche validator [subcommand] [flags] ``` **Subcommands:** - [`getBalance`](#avalanche-validator-getbalance): This command gets the remaining validator P-Chain balance that is available to pay the P-Chain continuous fee - [`increaseBalance`](#avalanche-validator-increasebalance): This command increases the validator P-Chain balance - [`list`](#avalanche-validator-list): This command gets a list of the validators of the L1 **Flags:** ```bash -h, --help help for validator --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### getBalance This command gets the remaining validator P-Chain balance that is available to pay the P-Chain continuous fee. **Usage:** ```bash avalanche validator getBalance [subcommand] [flags] ``` **Flags:** ```bash --cluster string operate on the given cluster --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet) -h, --help help for getBalance --l1 string name of L1 -l, --local operate on a local network -m, --mainnet operate on mainnet --node-id string node ID of the validator -t, --testnet fuji operate on testnet (alias to fuji) --validation-id string validation ID of the validator --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### increaseBalance This command increases the validator P-Chain balance. **Usage:** ```bash avalanche validator increaseBalance [subcommand] [flags] ``` **Flags:** ```bash --balance float amount of AVAX to increase validator's balance by --cluster string operate on the given cluster --devnet operate on a
devnet network --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet) -h, --help help for increaseBalance -k, --key string select the key to use [fuji/devnet deploy only] --l1 string name of L1 (to increase balance of bootstrap validators only) -l, --local operate on a local network -m, --mainnet operate on mainnet --node-id string node ID of the validator -t, --testnet fuji operate on testnet (alias to fuji) --validation-id string validation ID of the validator --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` ### list This command gets a list of the validators of the L1. **Usage:** ```bash avalanche validator list [subcommand] [flags] ``` **Flags:** ```bash --cluster string operate on the given cluster --devnet operate on a devnet network --endpoint string use the given endpoint for network operations -f, --fuji testnet operate on fuji (alias to testnet) -h, --help help for list -l, --local operate on a local network -m, --mainnet operate on mainnet -t, --testnet fuji operate on testnet (alias to fuji) --config string config file (default is $HOME/.avalanche-cli/config.json) --log-level string log level for the application (default "ERROR") --skip-update-check skip check for new versions ``` # Create Avalanche L1 (/docs/tooling/avalanche-cli/create-avalanche-l1) --- title: Create Avalanche L1 description: This page demonstrates how to create an Avalanche L1 using Avalanche-CLI. --- This tutorial walks you through the process of using Avalanche-CLI to create an Avalanche L1, deploy it to a local network, and connect to it with the Core wallet. The first step of learning Avalanche L1 development is learning to use [Avalanche-CLI](https://github.com/ava-labs/avalanche-cli).
Installation ------------------------------------------------------- The fastest way to install the latest Avalanche-CLI binary is by running the install script: ```bash curl -sSfL https://raw.githubusercontent.com/ava-labs/avalanche-cli/main/scripts/install.sh | sh -s ``` The binary installs inside the `~/bin` directory. If the directory doesn't exist, it will be created. You can run all of the commands in this tutorial by calling `~/bin/avalanche`. You can also add the command to your system path by running: ```bash export PATH=~/bin:$PATH ``` To make this change permanent, add this line to your shell’s initialization file (e.g., `~/.bashrc` or `~/.zshrc`). For example: ```bash echo 'export PATH=~/bin:$PATH' >> ~/.bashrc source ~/.bashrc ``` Once you add it to your path, you should be able to call the program anywhere with just: `avalanche` For more detailed installation instructions, see [Avalanche-CLI Installation](/docs/tooling/avalanche-cli). Create Your Avalanche L1 Configuration ----------------------------------------------------------------------------------------------- This tutorial teaches you how to create an Ethereum Virtual Machine (EVM) based Avalanche L1. To do so, you use Subnet-EVM, Avalanche's L1 fork of the EVM. It supports airdrops, custom fee tokens, configurable gas parameters, and multiple stateful precompiles. To learn more, take a look at [Subnet-EVM](https://github.com/ava-labs/subnet-evm). The goal of your first command is to create a Subnet-EVM configuration. The `avalanche-cli` command suite provides a collection of tools for developing and deploying Avalanche L1s. The Creation Wizard walks you through the process of creating your Avalanche L1. To get started, first pick a name for your Avalanche L1. This tutorial uses `myblockchain`, but feel free to substitute that with any name you like.
Once you've picked your name, run: ```bash avalanche blockchain create myblockchain ``` The following sections walk through each question in the wizard. ### Choose Your VM ```bash ? Which Virtual Machine would you like to use?: ▸ Subnet-EVM Custom VM Explain the difference ``` Select `Subnet-EVM`. ### Choose Validator Manager ```text ? Which validator management type would you like to use in your blockchain?: ▸ Proof Of Authority Proof Of Stake Explain the difference ``` Select `Proof Of Authority`. ```text ? Which address do you want to enable as controller of ValidatorManager contract?: ▸ Get address from an existing stored key (created from avalanche key create or avalanche key import) Custom ``` Select `Get address from an existing stored key`. ```text ? Which stored key should be used enable as controller of ValidatorManager contract?: ▸ ewoq cli-awm-relayer cli-teleporter-deployer ``` Select `ewoq`. This key is used to manage (add/remove) the validator set. Do not use the EWOQ key in a testnet or production setup; its private key is publicly exposed. To learn more about different validator management types, see [PoA vs PoS](/docs/avalanche-l1s/validator-manager/contract). ### Choose Blockchain Configuration ```text ? Do you want to use default values for the Blockchain configuration?: ▸ I want to use defaults for a test environment I want to use defaults for a production environment I don't want to use default values Explain the difference ``` Select `I want to use defaults for a test environment`. This will automatically set up the configuration for a test environment, including an airdrop to the EWOQ key and Avalanche ICM. ### Enter Your Avalanche L1's ChainID ```text ✗ Chain ID: ``` Choose a positive integer for your EVM-style ChainID. In production environments, this ChainID needs to be unique and not shared with any other chain. You can visit [chainlist](https://chainlist.org/) to verify that your selection is unique.
Because this is a development Avalanche L1, feel free to pick any number. Stay away from well-known ChainIDs such as 1 (Ethereum) or 43114 (Avalanche C-Chain) as those may cause issues with other tools. ### Token Symbol ```text ✗ Token Symbol: ``` Enter a string to name your Avalanche L1's native token. The token symbol doesn't necessarily need to be unique. Example token symbols are AVAX, JOE, and BTC. ### Wrapping Up If all worked successfully, the command prints: ```bash ✓ Successfully created blockchain configuration ``` To view the Genesis configuration, use the following command: ```bash avalanche blockchain describe myblockchain --genesis ``` You've successfully created your first Avalanche L1 configuration. Now it's time to deploy it. # Installation (/docs/tooling/avalanche-cli/get-avalanche-cli) --- title: Installation description: Instructions for installing and setting up the Avalanche-CLI. --- ## Compatibility Avalanche-CLI runs on Linux and Mac. Windows is currently not supported. ## Instructions To download a binary for the latest release, run: ```bash curl -sSfL https://raw.githubusercontent.com/ava-labs/avalanche-cli/main/scripts/install.sh | sh -s ``` The script installs the binary inside the `~/bin` directory. If the directory doesn't exist, it will be created. ## Adding Avalanche-CLI to Your PATH To call the `avalanche` binary from anywhere, you'll need to add it to your system path. If you installed the binary into the default location, you can run the following snippet to add it to your path. To add it to your path permanently, add an export command to your shell initialization script. If you run `bash`, use `.bashrc`. If you run `zsh`, use `.zshrc`. For example: ```bash echo 'export PATH=~/bin:$PATH' >> ~/.bashrc ``` ## Checking Your Installation You can test your installation by running `avalanche --version`. The tool should print the running version.
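The version check above can also be scripted. Here is a generic sketch (the helper name is made up; substitute `avalanche` for whichever binary you installed):

```python
import shutil

def is_installed(binary: str) -> bool:
    """Return True if the named binary is reachable on the current PATH."""
    return shutil.which(binary) is not None

# After adding ~/bin to PATH, you would expect is_installed("avalanche") to be True.
```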
## Updating To update your installation, you need to delete your current binary and download the latest version using the preceding steps. ## Building from Source The source code is available in this [GitHub repository](https://github.com/ava-labs/avalanche-cli). After you've cloned the repository, check out the tag you'd like to run. You can compile the code by running `./scripts/build.sh` from the top level directory. The build script names the binary `./bin/avalanche`. # Avalanche-CLI Overview (Deprecated) (/docs/tooling/avalanche-cli) --- title: Avalanche-CLI Overview (Deprecated) description: Build, deploy, and manage Avalanche L1s with the Avalanche-CLI (deprecated) --- > **Deprecated:** Avalanche-CLI is no longer actively maintained. For P-Chain operations (staking, transfers, subnets, L1 validators), use [Platform CLI](/docs/tooling/platform-cli) instead. For other functionality (ICM, node setup, L1 management), use the [Builder Console](/console). The Avalanche-CLI is a command-line tool that streamlines the process of building, deploying, and managing Avalanche L1 blockchains (formerly known as Subnets). ## Key Features - **Create & Deploy L1s**: Quickly create and deploy new Avalanche L1 blockchains to local, testnet, or mainnet environments - **VM Management**: Deploy L1s with Subnet-EVM or custom Virtual Machines - **Node Operations**: Run and manage validator nodes across different cloud providers - **Cross-Chain Messaging**: Set up Teleporter for cross-chain communication - **Transaction Management**: Handle native token transfers and P-Chain operations ## Getting Started To get started with Avalanche-CLI: 1. [Install Avalanche-CLI](/docs/tooling/avalanche-cli/get-avalanche-cli) on your system 2. Review the [CLI Commands Reference](/docs/tooling/avalanche-cli/cli-commands) for available commands 3.
Follow the guide to [Create an Avalanche L1](/docs/tooling/avalanche-cli/create-avalanche-l1) ## Quick Links Get Avalanche-CLI installed on your system Learn how to create your first Avalanche L1 Deploy L1s to various environments Complete reference for all CLI commands ## Common Use Cases ### Local Development Deploy and test your L1 locally before moving to testnet or mainnet. ### Production Deployment Deploy L1s to Fuji testnet for testing, then to mainnet for production use. ### Validator Management Add and remove validators, manage staking, and monitor node health. ### Cross-Chain Integration Enable cross-chain messaging between your L1 and other chains using Teleporter. ## Support - [GitHub Repository](https://github.com/ava-labs/avalanche-cli) - [Discord Community](https://chat.avalabs.org/) - [Documentation](https://docs.avax.network/) # X-Chain API (/docs/rpcs/x-chain/api) --- title: "X-Chain API" description: "This page is an overview of the X-Chain API associated with AvalancheGo." edit_url: https://github.com/ava-labs/avalanchego/edit/master/vms/avm/service.md --- The [X-Chain](https://build.avax.network/docs/primary-network#x-chain), Avalanche's native platform for creating and trading assets, is an instance of the Avalanche Virtual Machine (AVM). This API allows clients to create and trade assets on the X-Chain and other instances of the AVM. ## Format This API uses the `json 2.0` RPC format. For more information on making JSON RPC calls, see [here](https://build.avax.network/docs/rpcs/other/guides/issuing-api-calls). ## Endpoints `/ext/bc/X` to interact with the X-Chain. `/ext/bc/blockchainID` to interact with other AVM instances, where `blockchainID` is the ID of a blockchain running the AVM. ## Methods ### `avm.getAllBalances` Deprecated as of [**v1.9.12**](https://github.com/ava-labs/avalanchego/releases/tag/v1.9.12). Get the balances of all assets controlled by a given address. 
**Signature:** ```sh avm.getAllBalances({address:string}) -> { balances: []{ asset: string, balance: int } } ``` **Example Call:** ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" : 1, "method" :"avm.getAllBalances", "params" :{ "address":"X-avax1c79e0dd0susp7dc8udq34jgk2yvve7hapvdyht" } }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/X ``` **Example Response:** ```json { "jsonrpc": "2.0", "result": { "balances": [ { "asset": "AVAX", "balance": "102" }, { "asset": "2sdnziCz37Jov3QSNMXcFRGFJ1tgauaj6L7qfk7yUcRPfQMC79", "balance": "10000" } ] }, "id": 1 } ``` ### `avm.getAssetDescription` Get information about an asset. **Signature:** ```sh avm.getAssetDescription({assetID: string}) -> { assetId: string, name: string, symbol: string, denomination: int } ``` - `assetID` is the id of the asset for which the information is requested. - `name` is the asset’s human-readable, not necessarily unique name. - `symbol` is the asset’s symbol. - `denomination` determines how balances of this asset are displayed by user interfaces. If denomination is 0, 100 units of this asset are displayed as 100. If denomination is 1, 100 units of this asset are displayed as 10.0. If denomination is 2, 100 units of this asset are displayed as 1.00, etc. The AssetID for AVAX differs depending on the network you are on. Mainnet: FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z Testnet: U8iRqJoiJm8xZHAacmvYyZVwqQx6uDNtQeP3CQ6fcgQk3JqnK For finding the `assetID` of other assets, this info might be useful. Also, `avm.getUTXOs` returns the `assetID` in its output.
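The denomination rule above can be sketched in a few lines (an illustrative helper, not part of any Avalanche SDK; the function name is made up):

```python
def format_units(raw_units: int, denomination: int) -> str:
    """Render a raw integer balance according to the asset's denomination."""
    if denomination == 0:
        return str(raw_units)
    # Pad so there is at least one digit left of the decimal point.
    s = str(raw_units).rjust(denomination + 1, "0")
    return s[:-denomination] + "." + s[-denomination:]

# AVAX has denomination 9, so one whole AVAX is 10^9 of its smallest unit (nAVAX).
```

For example, `format_units(100, 1)` yields `10.0` and `format_units(100, 2)` yields `1.00`, matching the rule above.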
**Example Call:** ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"avm.getAssetDescription", "params" :{ "assetID" :"FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z" } }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/X ``` **Example Response:** ```json { "jsonrpc": "2.0", "result": { "assetID": "FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z", "name": "Avalanche", "symbol": "AVAX", "denomination": "9" }, "id": 1 } ``` ### `avm.getBalance` Deprecated as of [**v1.9.12**](https://github.com/ava-labs/avalanchego/releases/tag/v1.9.12). Get the balance of an asset controlled by a given address. **Signature:** ```sh avm.getBalance({ address: string, assetID: string }) -> {balance: int} ``` - `address` owner of the asset - `assetID` id of the asset for which the balance is requested **Example Call:** ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" : 1, "method" :"avm.getBalance", "params" :{ "address":"X-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5", "assetID": "2pYGetDWyKdHxpFxh2LHeoLNCH6H5vxxCxHQtFnnFaYxLsqtHC" } }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/X ``` **Example Response:** ```json { "jsonrpc": "2.0", "id": 1, "result": { "balance": "299999999999900", "utxoIDs": [ { "txID": "WPQdyLNqHfiEKp4zcCpayRHYDVYuh1hqs9c1RqgZXS4VPgdvo", "outputIndex": 1 } ] } } ``` ### `avm.getBlock` Returns the block with the given id. **Signature:** ```sh avm.getBlock({ blockID: string encoding: string // optional }) -> { block: string, encoding: string } ``` **Request:** - `blockID` is the block ID. It should be in cb58 format. - `encoding` is the encoding format to use. Can be either `hex` or `json`. Defaults to `hex`. **Response:** - `block` is the transaction encoded to `encoding`. - `encoding` is the `encoding`.
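Every example call on this page shares the same `json 2.0` envelope. A minimal sketch of building one programmatically (the helper name is hypothetical; actually POSTing it to a node at `127.0.0.1:9650/ext/bc/X` is left out):

```python
import json

def make_rpc_body(method: str, params: dict, request_id: int = 1) -> str:
    """Build the JSON-RPC 2.0 request body that curl sends via --data."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": method,
        "params": params,
    })

# The resulting string is what the curl examples pass to --data:
body = make_rpc_body("avm.getBlock", {"blockID": "<blockID>", "encoding": "hex"})
```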
#### Hex Example **Example Call:** ```sh curl -X POST --data '{ "jsonrpc": "2.0", "method": "avm.getBlock", "params": { "blockID": "tXJ4xwmR8soHE6DzRNMQPtiwQvuYsHn6eLLBzo2moDqBquqy6", "encoding": "hex" }, "id": 1 }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/X ``` **Example Response:** ```json { "jsonrpc": "2.0", "result": { "block": "0x00000000002000000000641ad33ede17f652512193721df87994f783ec806bb5640c39ee73676caffcc3215e0651000000000049a80a000000010000000e0000000100000000000000000000000000000000000000000000000000000000000000000000000121e67317cbc4be2aeb00677ad6462778a8f52274b9d605df2591b23027a87dff000000070000002e1a2a3910000000000000000000000001000000015cf998275803a7277926912defdf177b2e97b0b400000001e0d825c5069a7336671dd27eaa5c7851d2cf449e7e1cdc469c5c9e5a953955950000000021e67317cbc4be2aeb00677ad6462778a8f52274b9d605df2591b23027a87dff000000050000008908223b680000000100000000000000005e45d02fcc9e585544008f1df7ae5c94bf7f0f2600000000641ad3b600000000642d48b60000005aedf802580000000121e67317cbc4be2aeb00677ad6462778a8f52274b9d605df2591b23027a87dff000000070000005aedf80258000000000000000000000001000000015cf998275803a7277926912defdf177b2e97b0b40000000b000000000000000000000001000000012892441ba9a160bcdc596dcd2cc3ad83c3493589000000010000000900000001adf2237a5fe2dfd906265e8e14274aa7a7b2ee60c66213110598ba34fb4824d74f7760321c0c8fb1e8d3c5e86909248e48a7ae02e641da5559351693a8a1939800286d4fa2", "encoding": "hex" }, "id": 1 } ``` ### `avm.getBlockByHeight` Returns block at the given height. **Signature:** ```sh avm.getBlockByHeight({ height: string encoding: string // optional }) -> { block: string, encoding: string } ``` **Request:** - `blockHeight` is the block height. It should be in `string` format. - `encoding` is the encoding format to use. Can be either `hex` or `json`. Defaults to `hex`. **Response:** - `block` is the transaction encoded to `encoding`. - `encoding` is the `encoding`. 
#### Hex Example **Example Call:** ```sh curl -X POST --data '{ "jsonrpc": "2.0", "method": "avm.getBlockByHeight", "params": { "height": "275686313486", "encoding": "hex" }, "id": 1 }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/X ``` **Example Response:** ```json { "jsonrpc": "2.0", "result": { "block": "0x00000000002000000000642f6739d4efcdd07e4d4919a7fc2020b8a0f081dd64c262aaace5a6dad22be0b55fec0700000000004db9e100000001000000110000000100000000000000000000000000000000000000000000000000000000000000000000000121e67317cbc4be2aeb00677ad6462778a8f52274b9d605df2591b23027a87dff000000070000005c6ece390000000000000000000000000100000001930ab7bf5018bfc6f9435c8b15ba2fe1e619c0230000000000000000ed5f38341e436e5d46e2bb00b45d62ae97d1b050c64bc634ae10626739e35c4b00000001c6dda861341665c3b555b46227fb5e56dc0a870c5482809349f04b00348af2a80000000021e67317cbc4be2aeb00677ad6462778a8f52274b9d605df2591b23027a87dff000000050000005c6edd7b40000000010000000000000001000000090000000178688f4d5055bd8733801f9b52793da885bef424c90526c18e4dd97f7514bf6f0c3d2a0e9a5ea8b761bc41902eb4902c34ef034c4d18c3db7c83c64ffeadd93600731676de", "encoding": "hex" }, "id": 1 } ``` ### `avm.getHeight` Returns the height of the last accepted block. **Signature:** ```sh avm.getHeight() -> { height: uint64, } ``` **Example Call:** ```sh curl -X POST --data '{ "jsonrpc": "2.0", "method": "avm.getHeight", "params": {}, "id": 1 }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/X ``` **Example Response:** ```json { "jsonrpc": "2.0", "result": { "height": "5094088" }, "id": 1 } ``` ### `avm.getTx` Returns the specified transaction. The `encoding` parameter sets the format of the returned transaction. Can be either `"hex"` or `"json"`. Defaults to `"hex"`. 
**Signature:** ```sh avm.getTx({ txID: string, encoding: string, //optional }) -> { tx: string, encoding: string, } ``` **Example Call:** ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"avm.getTx", "params" :{ "txID":"2oJCbb8pfdxEHAf9A8CdN4Afj9VSR3xzyzNkf8tDv7aM1sfNFL", "encoding": "json" } }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/X ``` **Example Response:** ```json { "jsonrpc": "2.0", "result": { "tx": { "unsignedTx": { "networkID": 1, "blockchainID": "2oYMBNV4eNHyqk2fjjV5nVQLDbtmNJzq5s3qs3Lo6ftnC6FByM", "outputs": [], "inputs": [ { "txID": "2jbZUvi6nHy3Pgmk8xcMpSg5cW6epkPqdKkHSCweb4eRXtq4k9", "outputIndex": 1, "assetID": "FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z", "fxID": "spdxUxVJQbX85MGxMHbKw1sHxMnSqJ3QBzDyDYEP3h6TLuxqQ", "input": { "amount": 2570382395, "signatureIndices": [0] } } ], "memo": "0x", "destinationChain": "11111111111111111111111111111111LpoYY", "exportedOutputs": [ { "assetID": "FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z", "fxID": "spdxUxVJQbX85MGxMHbKw1sHxMnSqJ3QBzDyDYEP3h6TLuxqQ", "output": { "addresses": ["X-avax1tnuesf6cqwnjw7fxjyk7lhch0vhf0v95wj5jvy"], "amount": 2569382395, "locktime": 0, "threshold": 1 } } ] }, "credentials": [ { "fxID": "spdxUxVJQbX85MGxMHbKw1sHxMnSqJ3QBzDyDYEP3h6TLuxqQ", "credential": { "signatures": [ "0x46ebcbcfbee3ece1fd15015204045cf3cb77f42c48d0201fc150341f91f086f177cfca8894ca9b4a0c55d6950218e4ea8c01d5c4aefb85cd7264b47bd57d224400" ] } } ], "id": "2oJCbb8pfdxEHAf9A8CdN4Afj9VSR3xzyzNkf8tDv7aM1sfNFL" }, "encoding": "json" }, "id": 1 } ``` Where: - `credentials` is a list of this transaction's credentials. Each credential proves that this transaction's creator is allowed to consume one of this transaction's inputs. Each credential is a list of signatures. - `unsignedTx` is the non-signature portion of the transaction. - `networkID` is the ID of the network this transaction happened on. (Avalanche Mainnet is `1`.) 
- `blockchainID` is the ID of the blockchain this transaction happened on. (Avalanche Mainnet X-Chain is `2oYMBNV4eNHyqk2fjjV5nVQLDbtmNJzq5s3qs3Lo6ftnC6FByM`.) - Each element of `outputs` is an output (UTXO) of this transaction that is not being exported to another chain. - Each element of `inputs` is an input of this transaction which has not been imported from another chain. - Import Transactions have additional fields `sourceChain` and `importedInputs`, which specify the blockchain ID that assets are being imported from, and the inputs that are being imported. - Export Transactions have additional fields `destinationChain` and `exportedOutputs`, which specify the blockchain ID that assets are being exported to, and the UTXOs that are being exported. An output contains: - `assetID`: The ID of the asset being transferred. (The Mainnet AVAX ID is `FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z`.) - `fxID`: The ID of the FX this output uses. - `output`: The FX-specific contents of this output. Most outputs use the secp256k1 FX and look like this: ```json { "assetID": "FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z", "fxID": "spdxUxVJQbX85MGxMHbKw1sHxMnSqJ3QBzDyDYEP3h6TLuxqQ", "output": { "addresses": ["X-avax126rd3w35xwkmj8670zvf7y5r8k36qa9z9803wm"], "amount": 1530084210, "locktime": 0, "threshold": 1 } } ``` The above output can be consumed after Unix time `locktime` by a transaction that has signatures from `threshold` of the addresses in `addresses`. ### `avm.getTxFee` Get the fees of the network. **Signature**: ``` avm.getTxFee() -> { txFee: uint64, createAssetTxFee: uint64, } ``` - `txFee` is the default fee for making transactions. - `createAssetTxFee` is the fee for creating a new asset. All fees are denominated in nAVAX.
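Since 1 AVAX equals 10^9 nAVAX, the fee values returned by `avm.getTxFee` can be converted with a one-line helper (illustrative only, not part of the API):

```python
NAVAX_PER_AVAX = 1_000_000_000  # 1 AVAX = 10^9 nAVAX

def navax_to_avax(navax: int) -> float:
    """Convert a fee denominated in nAVAX into AVAX."""
    return navax / NAVAX_PER_AVAX
```

For example, a `txFee` of 1000000 nAVAX corresponds to 0.001 AVAX.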
**Example Call**: ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" : 1, "method" :"avm.getTxFee" }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/X ``` **Example Response**: ```json { "jsonrpc": "2.0", "result": { "txFee": "1000000", "createAssetTxFee": "10000000" } } ``` ### `avm.getTxStatus` Deprecated as of **v1.10.0**. Get the status of a transaction sent to the network. **Signature:** ```sh avm.getTxStatus({txID: string}) -> {status: string} ``` `status` is one of: - `Accepted`: The transaction is (or will be) accepted by every node - `Processing`: The transaction is being voted on by this node - `Rejected`: The transaction will never be accepted by any node in the network - `Unknown`: The transaction hasn’t been seen by this node **Example Call:** ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"avm.getTxStatus", "params" :{ "txID":"2QouvFWUbjuySRxeX5xMbNCuAaKWfbk5FeEa2JmoF85RKLk2dD" } }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/X ``` **Example Response:** ```json { "jsonrpc": "2.0", "id": 1, "result": { "status": "Accepted" } } ``` ### `avm.getUTXOs` Gets the UTXOs that reference a given address. If `sourceChain` is specified, then it will retrieve the atomic UTXOs exported from that chain to the X-Chain. **Signature:** ```sh avm.getUTXOs({ addresses: []string, limit: int, //optional startIndex: { //optional address: string, utxo: string }, sourceChain: string, //optional encoding: string //optional }) -> { numFetched: int, utxos: []string, endIndex: { address: string, utxo: string }, sourceChain: string, //optional encoding: string } ``` - `utxos` is a list of UTXOs such that each UTXO references at least one address in `addresses`. - At most `limit` UTXOs are returned. If `limit` is omitted or greater than 1024, it is set to 1024. - This method supports pagination. `endIndex` denotes the last UTXO returned. To get the next set of UTXOs, use the value of `endIndex` as `startIndex` in the next call.
- If `startIndex` is omitted, will fetch all UTXOs up to `limit`. - When using pagination (when `startIndex` is provided), UTXOs are not guaranteed to be unique across multiple calls. That is, a UTXO may appear in the result of the first call, and then again in the second call. - When using pagination, consistency is not guaranteed across multiple calls. That is, the UTXO set of the addresses may have changed between calls. - `encoding` sets the format for the returned UTXOs. Can only be `hex` when a value is provided. #### **Example** Suppose we want all UTXOs that reference at least one of `X-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5` and `X-avax1d09qn852zcy03sfc9hay2llmn9hsgnw4tp3dv6`. ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"avm.getUTXOs", "params" :{ "addresses":["X-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5", "X-avax1d09qn852zcy03sfc9hay2llmn9hsgnw4tp3dv6"], "limit":5, "encoding": "hex" } }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/X ``` This gives response: ```json { "jsonrpc": "2.0", "result": { "numFetched": "5", "utxos": [ "0x0000a195046108a85e60f7a864bb567745a37f50c6af282103e47cc62f036cee404700000000345aa98e8a990f4101e2268fab4c4e1f731c8dfbcffa3a77978686e6390d624f000000070000000000000001000000000000000000000001000000018ba98dabaebcd83056799841cfbc567d8b10f216c1f01765", "0x0000ae8b1b94444eed8de9a81b1222f00f1b4133330add23d8ac288bffa98b85271100000000345aa98e8a990f4101e2268fab4c4e1f731c8dfbcffa3a77978686e6390d624f000000070000000000000001000000000000000000000001000000018ba98dabaebcd83056799841cfbc567d8b10f216473d042a", "0x0000731ce04b1feefa9f4291d869adc30a33463f315491e164d89be7d6d2d7890cfc00000000345aa98e8a990f4101e2268fab4c4e1f731c8dfbcffa3a77978686e6390d624f000000070000000000000001000000000000000000000001000000018ba98dabaebcd83056799841cfbc567d8b10f21600dd3047", 
"0x0000b462030cc4734f24c0bc224cf0d16ee452ea6b67615517caffead123ab4fbf1500000000345aa98e8a990f4101e2268fab4c4e1f731c8dfbcffa3a77978686e6390d624f000000070000000000000001000000000000000000000001000000018ba98dabaebcd83056799841cfbc567d8b10f216c71b387e", "0x000054f6826c39bc957c0c6d44b70f961a994898999179cc32d21eb09c1908d7167b00000000345aa98e8a990f4101e2268fab4c4e1f731c8dfbcffa3a77978686e6390d624f000000070000000000000001000000000000000000000001000000018ba98dabaebcd83056799841cfbc567d8b10f2166290e79d" ], "endIndex": { "address": "X-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5", "utxo": "kbUThAUfmBXUmRgTpgD6r3nLj7rJUGho6xyht5nouNNypH45j" }, "encoding": "hex" }, "id": 1 } ``` Since `numFetched` is the same as `limit`, we can tell that there may be more UTXOs that were not fetched. We call the method again, this time with `startIndex`: ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :2, "method" :"avm.getUTXOs", "params" :{ "addresses":["X-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5"], "limit":5, "startIndex": { "address": "X-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5", "utxo": "kbUThAUfmBXUmRgTpgD6r3nLj7rJUGho6xyht5nouNNypH45j" }, "encoding": "hex" } }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/X ``` This gives response: ```json { "jsonrpc": "2.0", "result": { "numFetched": "4", "utxos": [ "0x000020e182dd51ee4dcd31909fddd75bb3438d9431f8e4efce86a88a684f5c7fa09300000000345aa98e8a990f4101e2268fab4c4e1f731c8dfbcffa3a77978686e6390d624f000000070000000000000001000000000000000000000001000000018ba98dabaebcd83056799841cfbc567d8b10f21662861d59", "0x0000a71ba36c475c18eb65dc90f6e85c4fd4a462d51c5de3ac2cbddf47db4d99284e00000000345aa98e8a990f4101e2268fab4c4e1f731c8dfbcffa3a77978686e6390d624f000000070000000000000001000000000000000000000001000000018ba98dabaebcd83056799841cfbc567d8b10f21665f6f83f", 
"0x0000925424f61cb13e0fbdecc66e1270de68de9667b85baa3fdc84741d048daa69fa00000000345aa98e8a990f4101e2268fab4c4e1f731c8dfbcffa3a77978686e6390d624f000000070000000000000001000000000000000000000001000000018ba98dabaebcd83056799841cfbc567d8b10f216afecf76a", "0x000082f30327514f819da6009fad92b5dba24d27db01e29ad7541aa8e6b6b554615c00000000345aa98e8a990f4101e2268fab4c4e1f731c8dfbcffa3a77978686e6390d624f000000070000000000000001000000000000000000000001000000018ba98dabaebcd83056799841cfbc567d8b10f216779c2d59" ], "endIndex": { "address": "X-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5", "utxo": "21jG2RfqyHUUgkTLe2tUp6ETGLriSDTW3th8JXFbPRNiSZ11jK" }, "encoding": "hex" }, "id": 1 } ``` Since `numFetched` is less than `limit`, we know that we are done fetching UTXOs and don’t need to call this method again. Suppose we want to fetch the UTXOs exported from the P Chain to the X Chain in order to build an ImportTx. Then we need to call GetUTXOs with the `sourceChain` argument in order to retrieve the atomic UTXOs: ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"avm.getUTXOs", "params" :{ "addresses":["X-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5", "X-avax1d09qn852zcy03sfc9hay2llmn9hsgnw4tp3dv6"], "limit":5, "sourceChain": "P", "encoding": "hex" } }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/X ``` This gives response: ```json { "jsonrpc": "2.0", "result": { "numFetched": "1", "utxos": [ "0x00001f989ffaf18a18a59bdfbf209342aa61c6a62a67e8639d02bb3c8ddab315c6fa0000000039c33a499ce4c33a3b09cdd2cfa01ae70dbf2d18b2d7d168524440e55d550088000000070011c304cd7eb5c0000000000000000000000001000000013cb7d3842e8cee6a0ebd09f1fe884f6861e1b29c83497819" ], "endIndex": { "address": "X-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5", "utxo": "2Sz2XwRYqUHwPeiKoRnZ6ht88YqzAF1SQjMYZQQaB5wBFkAqST" }, "encoding": "hex" }, "id": 1 } ``` ### `avm.issueTx` Send a signed transaction to the network. `encoding` specifies the format of the signed transaction. 
Can only be `hex` when a value is provided. **Signature:** ```sh avm.issueTx({ tx: string, encoding: string, //optional }) -> { txID: string } ``` **Example Call:** ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" : 1, "method" :"avm.issueTx", "params" :{ "tx":"0x00000009de31b4d8b22991d51aa6aa1fc733f23a851a8c9400000000000186a0000000005f041280000000005f9ca900000030390000000000000001fceda8f90fcb5d30614b99d79fc4baa29307762668f16eb0259a57c2d3b78c875c86ec2045792d4df2d926c40f829196e0bb97ee697af71f5b0a966dabff749634c8b729855e937715b0e44303fd1014daedc752006011b730", "encoding": "hex" } }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/X ``` **Example Response:** ```json { "jsonrpc": "2.0", "id": 1, "result": { "txID": "NUPLwbt2hsYxpQg4H2o451hmTWQ4JZx2zMzM4SinwtHgAdX1JLPHXvWSXEnpecStLj" } } ``` ### `wallet.issueTx` Send a signed transaction to the network and assume the TX will be accepted. `encoding` specifies the format of the signed transaction. Can only be `hex` when a value is provided. This call is made to the wallet API endpoint: `/ext/bc/X/wallet` Endpoint deprecated as of [**v1.9.12**](https://github.com/ava-labs/avalanchego/releases/tag/v1.9.12). 
**Signature:** ```sh wallet.issueTx({ tx: string, encoding: string, //optional }) -> { txID: string } ``` **Example Call:** ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" : 1, "method" :"wallet.issueTx", "params" :{ "tx":"0x00000009de31b4d8b22991d51aa6aa1fc733f23a851a8c9400000000000186a0000000005f041280000000005f9ca900000030390000000000000001fceda8f90fcb5d30614b99d79fc4baa29307762668f16eb0259a57c2d3b78c875c86ec2045792d4df2d926c40f829196e0bb97ee697af71f5b0a966dabff749634c8b729855e937715b0e44303fd1014daedc752006011b730", "encoding": "hex" } }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/X/wallet ``` **Example Response:** ```json { "jsonrpc": "2.0", "id": 1, "result": { "txID": "NUPLwbt2hsYxpQg4H2o451hmTWQ4JZx2zMzM4SinwtHgAdX1JLPHXvWSXEnpecStLj" } } ``` # AvalancheGo X-Chain RPC (/docs/rpcs/x-chain) --- title: "AvalancheGo X-Chain RPC" description: "This page is an overview of the X-Chain RPC associated with AvalancheGo." edit_url: https://github.com/ava-labs/avalanchego/edit/master/vms/avm/service.md --- The [X-Chain](https://build.avax.network/docs/quick-start/primary-network#x-chain), Avalanche's native platform for creating and trading assets, is an instance of the Avalanche Virtual Machine (AVM). This API allows clients to create and trade assets on the X-Chain and other instances of the AVM. ## Format This API uses the `json 2.0` RPC format. For more information on making JSON RPC calls, see [here](https://build.avax.network/docs/api-reference/guides/issuing-api-calls). ## Endpoints `/ext/bc/X` to interact with the X-Chain. `/ext/bc/blockchainID` to interact with other AVM instances, where `blockchainID` is the ID of a blockchain running the AVM. ## Methods ### `avm.getAllBalances` Deprecated as of [**v1.9.12**](https://github.com/ava-labs/avalanchego/releases/tag/v1.9.12). Get the balances of all assets controlled by a given address. 
**Signature:**

```sh
avm.getAllBalances({address:string}) -> {
    balances: []{
        asset: string,
        balance: int
    }
}
```

**Example Call:**

```sh
curl -X POST --data '{
    "jsonrpc":"2.0",
    "id"     : 1,
    "method" :"avm.getAllBalances",
    "params" :{
        "address":"X-avax1c79e0dd0susp7dc8udq34jgk2yvve7hapvdyht"
    }
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/X
```

**Example Response:**

```json
{
  "jsonrpc": "2.0",
  "result": {
    "balances": [
      {
        "asset": "AVAX",
        "balance": "102"
      },
      {
        "asset": "2sdnziCz37Jov3QSNMXcFRGFJ1tgauaj6L7qfk7yUcRPfQMC79",
        "balance": "10000"
      }
    ]
  },
  "id": 1
}
```

### `avm.getAssetDescription`

Get information about an asset.

**Signature:**

```sh
avm.getAssetDescription({assetID: string}) -> {
    assetID: string,
    name: string,
    symbol: string,
    denomination: int
}
```

- `assetID` is the ID of the asset for which the information is requested.
- `name` is the asset's human-readable, not necessarily unique name.
- `symbol` is the asset's symbol.
- `denomination` determines how balances of this asset are displayed by user interfaces. If the denomination is 0, 100 units of this asset are displayed as 100. If the denomination is 1, 100 units of this asset are displayed as 10.0. If the denomination is 2, 100 units of this asset are displayed as 1.00, and so on.

The AssetID for AVAX differs depending on the network you are on.

Mainnet: `FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z`

Testnet: `U8iRqJoiJm8xZHAacmvYyZVwqQx6uDNtQeP3CQ6fcgQk3JqnK`

For finding the `assetID` of other assets, this [info] might be useful. Also, `avm.getUTXOs` returns the `assetID` in its output.
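The denomination rule above can be sketched as a small client-side formatter. This is an illustration only; `format_units` is a hypothetical helper, not part of the API:

```python
def format_units(raw: int, denomination: int) -> str:
    """Render a raw integer balance per the asset's denomination:
    100 raw units display as "100" (denom 0), "10.0" (denom 1), "1.00" (denom 2)."""
    if denomination == 0:
        return str(raw)
    # Pad with leading zeros so small amounts render as "0.05", not ".05".
    digits = str(raw).rjust(denomination + 1, "0")
    return f"{digits[:-denomination]}.{digits[-denomination:]}"

print(format_units(100, 2))         # 1.00
print(format_units(1530084210, 9))  # 1.530084210 (a denomination-9 AVAX amount)
```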
**Example Call:**

```sh
curl -X POST --data '{
    "jsonrpc":"2.0",
    "id"     :1,
    "method" :"avm.getAssetDescription",
    "params" :{
        "assetID" :"FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z"
    }
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/X
```

**Example Response:**

```json
{
  "jsonrpc": "2.0",
  "result": {
    "assetID": "FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z",
    "name": "Avalanche",
    "symbol": "AVAX",
    "denomination": "9"
  },
  "id": 1
}
```

### `avm.getBalance`

Deprecated as of [**v1.9.12**](https://github.com/ava-labs/avalanchego/releases/tag/v1.9.12).

Get the balance of an asset controlled by a given address.

**Signature:**

```sh
avm.getBalance({
    address: string,
    assetID: string
}) -> {balance: int}
```

- `address` is the owner of the asset
- `assetID` is the ID of the asset for which the balance is requested

**Example Call:**

```sh
curl -X POST --data '{
    "jsonrpc":"2.0",
    "id"     : 1,
    "method" :"avm.getBalance",
    "params" :{
        "address":"X-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5",
        "assetID": "2pYGetDWyKdHxpFxh2LHeoLNCH6H5vxxCxHQtFnnFaYxLsqtHC"
    }
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/X
```

**Example Response:**

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "balance": "299999999999900",
    "utxoIDs": [
      {
        "txID": "WPQdyLNqHfiEKp4zcCpayRHYDVYuh1hqs9c1RqgZXS4VPgdvo",
        "outputIndex": 1
      }
    ]
  }
}
```

### `avm.getBlock`

Returns the block with the given ID.

**Signature:**

```sh
avm.getBlock({
    blockID: string
    encoding: string // optional
}) -> {
    block: string,
    encoding: string
}
```

**Request:**

- `blockID` is the block ID. It should be in cb58 format.
- `encoding` is the encoding format to use. Can be either `hex` or `json`. Defaults to `hex`.

**Response:**

- `block` is the block encoded in the requested `encoding`.
- `encoding` is the encoding format used.
#### Hex Example

**Example Call:**

```sh
curl -X POST --data '{
    "jsonrpc": "2.0",
    "method": "avm.getBlock",
    "params": {
        "blockID": "tXJ4xwmR8soHE6DzRNMQPtiwQvuYsHn6eLLBzo2moDqBquqy6",
        "encoding": "hex"
    },
    "id": 1
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/X
```

**Example Response:**

```json
{
  "jsonrpc": "2.0",
  "result": {
    "block": "0x00000000002000000000641ad33ede17f652512193721df87994f783ec806bb5640c39ee73676caffcc3215e0651000000000049a80a000000010000000e0000000100000000000000000000000000000000000000000000000000000000000000000000000121e67317cbc4be2aeb00677ad6462778a8f52274b9d605df2591b23027a87dff000000070000002e1a2a3910000000000000000000000001000000015cf998275803a7277926912defdf177b2e97b0b400000001e0d825c5069a7336671dd27eaa5c7851d2cf449e7e1cdc469c5c9e5a953955950000000021e67317cbc4be2aeb00677ad6462778a8f52274b9d605df2591b23027a87dff000000050000008908223b680000000100000000000000005e45d02fcc9e585544008f1df7ae5c94bf7f0f2600000000641ad3b600000000642d48b60000005aedf802580000000121e67317cbc4be2aeb00677ad6462778a8f52274b9d605df2591b23027a87dff000000070000005aedf80258000000000000000000000001000000015cf998275803a7277926912defdf177b2e97b0b40000000b000000000000000000000001000000012892441ba9a160bcdc596dcd2cc3ad83c3493589000000010000000900000001adf2237a5fe2dfd906265e8e14274aa7a7b2ee60c66213110598ba34fb4824d74f7760321c0c8fb1e8d3c5e86909248e48a7ae02e641da5559351693a8a1939800286d4fa2",
    "encoding": "hex"
  },
  "id": 1
}
```

### `avm.getBlockByHeight`

Returns the block at the given height.

**Signature:**

```sh
avm.getBlockByHeight({
    height: string
    encoding: string // optional
}) -> {
    block: string,
    encoding: string
}
```

**Request:**

- `height` is the block height. It should be in `string` format.
- `encoding` is the encoding format to use. Can be either `hex` or `json`. Defaults to `hex`.

**Response:**

- `block` is the block encoded in the requested `encoding`.
- `encoding` is the encoding format used.
#### Hex Example **Example Call:** ```sh curl -X POST --data '{ "jsonrpc": "2.0", "method": "avm.getBlockByHeight", "params": { "height": "275686313486", "encoding": "hex" }, "id": 1 }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/X ``` **Example Response:** ```json { "jsonrpc": "2.0", "result": { "block": "0x00000000002000000000642f6739d4efcdd07e4d4919a7fc2020b8a0f081dd64c262aaace5a6dad22be0b55fec0700000000004db9e100000001000000110000000100000000000000000000000000000000000000000000000000000000000000000000000121e67317cbc4be2aeb00677ad6462778a8f52274b9d605df2591b23027a87dff000000070000005c6ece390000000000000000000000000100000001930ab7bf5018bfc6f9435c8b15ba2fe1e619c0230000000000000000ed5f38341e436e5d46e2bb00b45d62ae97d1b050c64bc634ae10626739e35c4b00000001c6dda861341665c3b555b46227fb5e56dc0a870c5482809349f04b00348af2a80000000021e67317cbc4be2aeb00677ad6462778a8f52274b9d605df2591b23027a87dff000000050000005c6edd7b40000000010000000000000001000000090000000178688f4d5055bd8733801f9b52793da885bef424c90526c18e4dd97f7514bf6f0c3d2a0e9a5ea8b761bc41902eb4902c34ef034c4d18c3db7c83c64ffeadd93600731676de", "encoding": "hex" }, "id": 1 } ``` ### `avm.getHeight` Returns the height of the last accepted block. **Signature:** ```sh avm.getHeight() -> { height: uint64, } ``` **Example Call:** ```sh curl -X POST --data '{ "jsonrpc": "2.0", "method": "avm.getHeight", "params": {}, "id": 1 }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/X ``` **Example Response:** ```json { "jsonrpc": "2.0", "result": { "height": "5094088" }, "id": 1 } ``` ### `avm.getTx` Returns the specified transaction. The `encoding` parameter sets the format of the returned transaction. Can be either `"hex"` or `"json"`. Defaults to `"hex"`. 
**Signature:** ```sh avm.getTx({ txID: string, encoding: string, //optional }) -> { tx: string, encoding: string, } ``` **Example Call:** ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"avm.getTx", "params" :{ "txID":"2oJCbb8pfdxEHAf9A8CdN4Afj9VSR3xzyzNkf8tDv7aM1sfNFL", "encoding": "json" } }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/X ``` **Example Response:** ```json { "jsonrpc": "2.0", "result": { "tx": { "unsignedTx": { "networkID": 1, "blockchainID": "2oYMBNV4eNHyqk2fjjV5nVQLDbtmNJzq5s3qs3Lo6ftnC6FByM", "outputs": [], "inputs": [ { "txID": "2jbZUvi6nHy3Pgmk8xcMpSg5cW6epkPqdKkHSCweb4eRXtq4k9", "outputIndex": 1, "assetID": "FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z", "fxID": "spdxUxVJQbX85MGxMHbKw1sHxMnSqJ3QBzDyDYEP3h6TLuxqQ", "input": { "amount": 2570382395, "signatureIndices": [0] } } ], "memo": "0x", "destinationChain": "11111111111111111111111111111111LpoYY", "exportedOutputs": [ { "assetID": "FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z", "fxID": "spdxUxVJQbX85MGxMHbKw1sHxMnSqJ3QBzDyDYEP3h6TLuxqQ", "output": { "addresses": ["X-avax1tnuesf6cqwnjw7fxjyk7lhch0vhf0v95wj5jvy"], "amount": 2569382395, "locktime": 0, "threshold": 1 } } ] }, "credentials": [ { "fxID": "spdxUxVJQbX85MGxMHbKw1sHxMnSqJ3QBzDyDYEP3h6TLuxqQ", "credential": { "signatures": [ "0x46ebcbcfbee3ece1fd15015204045cf3cb77f42c48d0201fc150341f91f086f177cfca8894ca9b4a0c55d6950218e4ea8c01d5c4aefb85cd7264b47bd57d224400" ] } } ], "id": "2oJCbb8pfdxEHAf9A8CdN4Afj9VSR3xzyzNkf8tDv7aM1sfNFL" }, "encoding": "json" }, "id": 1 } ``` Where: - `credentials` is a list of this transaction's credentials. Each credential proves that this transaction's creator is allowed to consume one of this transaction's inputs. Each credential is a list of signatures. - `unsignedTx` is the non-signature portion of the transaction. - `networkID` is the ID of the network this transaction happened on. (Avalanche Mainnet is `1`.) 
- `blockchainID` is the ID of the blockchain this transaction happened on. (Avalanche Mainnet X-Chain is `2oYMBNV4eNHyqk2fjjV5nVQLDbtmNJzq5s3qs3Lo6ftnC6FByM`.)
- Each element of `outputs` is an output (UTXO) of this transaction that is not being exported to another chain.
- Each element of `inputs` is an input of this transaction which has not been imported from another chain.
- Import Transactions have additional fields `sourceChain` and `importedInputs`, which specify the blockchain ID that assets are being imported from, and the inputs that are being imported.
- Export Transactions have additional fields `destinationChain` and `exportedOutputs`, which specify the blockchain ID that assets are being exported to, and the UTXOs that are being exported.

An output contains:

- `assetID`: The ID of the asset being transferred. (The Mainnet AVAX ID is `FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z`.)
- `fxID`: The ID of the FX this output uses.
- `output`: The FX-specific contents of this output.

Most outputs use the secp256k1 FX and look like this:

```json
{
  "assetID": "FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z",
  "fxID": "spdxUxVJQbX85MGxMHbKw1sHxMnSqJ3QBzDyDYEP3h6TLuxqQ",
  "output": {
    "addresses": ["X-avax126rd3w35xwkmj8670zvf7y5r8k36qa9z9803wm"],
    "amount": 1530084210,
    "locktime": 0,
    "threshold": 1
  }
}
```

The above output can be consumed after Unix time `locktime` by a transaction that has signatures from `threshold` of the addresses in `addresses`.

### `avm.getTxFee`

Get the fees of the network.

**Signature**:

```
avm.getTxFee() -> {
  txFee: uint64,
  createAssetTxFee: uint64,
}
```

- `txFee` is the default fee for making transactions.
- `createAssetTxFee` is the fee for creating a new asset.

All fees are denominated in nAVAX.
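In the UTXO model the fee is implicit: it is the value burned, i.e. the total of a transaction's inputs minus the total of its outputs. A minimal sketch using the amounts from the `avm.getTx` example above (`fee_paid` is a hypothetical helper, not part of any SDK):

```python
def fee_paid(input_amounts: list, output_amounts: list) -> int:
    """The implicit fee: total input value minus total output value (burned)."""
    return sum(input_amounts) - sum(output_amounts)

TX_FEE = 1_000_000  # nAVAX, the default txFee from avm.getTxFee

# Amounts from the avm.getTx example: one 2570382395 nAVAX input,
# one 2569382395 nAVAX exported output.
burned = fee_paid([2570382395], [2569382395])
print(burned)            # 1000000
print(burned >= TX_FEE)  # True: the transaction paid exactly the default fee
```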
**Example Call**:

```sh
curl -X POST --data '{
    "jsonrpc":"2.0",
    "id"     : 1,
    "method" :"avm.getTxFee"
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/X
```

**Example Response**:

```json
{
  "jsonrpc": "2.0",
  "result": {
    "txFee": "1000000",
    "createAssetTxFee": "10000000"
  }
}
```

### `avm.getTxStatus`

Deprecated as of **v1.10.0**.

Get the status of a transaction sent to the network.

**Signature:**

```sh
avm.getTxStatus({txID: string}) -> {status: string}
```

`status` is one of:

- `Accepted`: The transaction is (or will be) accepted by every node
- `Processing`: The transaction is being voted on by this node
- `Rejected`: The transaction will never be accepted by any node in the network
- `Unknown`: The transaction hasn't been seen by this node

**Example Call:**

```sh
curl -X POST --data '{
    "jsonrpc":"2.0",
    "id"     :1,
    "method" :"avm.getTxStatus",
    "params" :{
        "txID":"2QouvFWUbjuySRxeX5xMbNCuAaKWfbk5FeEa2JmoF85RKLk2dD"
    }
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/X
```

**Example Response:**

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "status": "Accepted"
  }
}
```

### `avm.getUTXOs`

Gets the UTXOs that reference a given address. If `sourceChain` is specified, then it will retrieve the atomic UTXOs exported from that chain to the X-Chain.

**Signature:**

```sh
avm.getUTXOs({
    addresses: []string,
    limit: int, //optional
    startIndex: { //optional
        address: string,
        utxo: string
    },
    sourceChain: string, //optional
    encoding: string //optional
}) -> {
    numFetched: string,
    utxos: []string,
    endIndex: {
        address: string,
        utxo: string
    },
    sourceChain: string, //optional
    encoding: string
}
```

- `utxos` is a list of UTXOs such that each UTXO references at least one address in `addresses`.
- At most `limit` UTXOs are returned. If `limit` is omitted or greater than 1024, it is set to 1024.
- This method supports pagination. `endIndex` denotes the last UTXO returned. To get the next set of UTXOs, use the value of `endIndex` as `startIndex` in the next call.
- If `startIndex` is omitted, will fetch all UTXOs up to `limit`. - When using pagination (when `startIndex` is provided), UTXOs are not guaranteed to be unique across multiple calls. That is, a UTXO may appear in the result of the first call, and then again in the second call. - When using pagination, consistency is not guaranteed across multiple calls. That is, the UTXO set of the addresses may have changed between calls. - `encoding` sets the format for the returned UTXOs. Can only be `hex` when a value is provided. #### **Example** Suppose we want all UTXOs that reference at least one of `X-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5` and `X-avax1d09qn852zcy03sfc9hay2llmn9hsgnw4tp3dv6`. ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"avm.getUTXOs", "params" :{ "addresses":["X-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5", "X-avax1d09qn852zcy03sfc9hay2llmn9hsgnw4tp3dv6"], "limit":5, "encoding": "hex" } }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/X ``` This gives response: ```json { "jsonrpc": "2.0", "result": { "numFetched": "5", "utxos": [ "0x0000a195046108a85e60f7a864bb567745a37f50c6af282103e47cc62f036cee404700000000345aa98e8a990f4101e2268fab4c4e1f731c8dfbcffa3a77978686e6390d624f000000070000000000000001000000000000000000000001000000018ba98dabaebcd83056799841cfbc567d8b10f216c1f01765", "0x0000ae8b1b94444eed8de9a81b1222f00f1b4133330add23d8ac288bffa98b85271100000000345aa98e8a990f4101e2268fab4c4e1f731c8dfbcffa3a77978686e6390d624f000000070000000000000001000000000000000000000001000000018ba98dabaebcd83056799841cfbc567d8b10f216473d042a", "0x0000731ce04b1feefa9f4291d869adc30a33463f315491e164d89be7d6d2d7890cfc00000000345aa98e8a990f4101e2268fab4c4e1f731c8dfbcffa3a77978686e6390d624f000000070000000000000001000000000000000000000001000000018ba98dabaebcd83056799841cfbc567d8b10f21600dd3047", 
"0x0000b462030cc4734f24c0bc224cf0d16ee452ea6b67615517caffead123ab4fbf1500000000345aa98e8a990f4101e2268fab4c4e1f731c8dfbcffa3a77978686e6390d624f000000070000000000000001000000000000000000000001000000018ba98dabaebcd83056799841cfbc567d8b10f216c71b387e", "0x000054f6826c39bc957c0c6d44b70f961a994898999179cc32d21eb09c1908d7167b00000000345aa98e8a990f4101e2268fab4c4e1f731c8dfbcffa3a77978686e6390d624f000000070000000000000001000000000000000000000001000000018ba98dabaebcd83056799841cfbc567d8b10f2166290e79d" ], "endIndex": { "address": "X-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5", "utxo": "kbUThAUfmBXUmRgTpgD6r3nLj7rJUGho6xyht5nouNNypH45j" }, "encoding": "hex" }, "id": 1 } ``` Since `numFetched` is the same as `limit`, we can tell that there may be more UTXOs that were not fetched. We call the method again, this time with `startIndex`: ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :2, "method" :"avm.getUTXOs", "params" :{ "addresses":["X-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5"], "limit":5, "startIndex": { "address": "X-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5", "utxo": "kbUThAUfmBXUmRgTpgD6r3nLj7rJUGho6xyht5nouNNypH45j" }, "encoding": "hex" } }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/X ``` This gives response: ```json { "jsonrpc": "2.0", "result": { "numFetched": "4", "utxos": [ "0x000020e182dd51ee4dcd31909fddd75bb3438d9431f8e4efce86a88a684f5c7fa09300000000345aa98e8a990f4101e2268fab4c4e1f731c8dfbcffa3a77978686e6390d624f000000070000000000000001000000000000000000000001000000018ba98dabaebcd83056799841cfbc567d8b10f21662861d59", "0x0000a71ba36c475c18eb65dc90f6e85c4fd4a462d51c5de3ac2cbddf47db4d99284e00000000345aa98e8a990f4101e2268fab4c4e1f731c8dfbcffa3a77978686e6390d624f000000070000000000000001000000000000000000000001000000018ba98dabaebcd83056799841cfbc567d8b10f21665f6f83f", 
"0x0000925424f61cb13e0fbdecc66e1270de68de9667b85baa3fdc84741d048daa69fa00000000345aa98e8a990f4101e2268fab4c4e1f731c8dfbcffa3a77978686e6390d624f000000070000000000000001000000000000000000000001000000018ba98dabaebcd83056799841cfbc567d8b10f216afecf76a", "0x000082f30327514f819da6009fad92b5dba24d27db01e29ad7541aa8e6b6b554615c00000000345aa98e8a990f4101e2268fab4c4e1f731c8dfbcffa3a77978686e6390d624f000000070000000000000001000000000000000000000001000000018ba98dabaebcd83056799841cfbc567d8b10f216779c2d59" ], "endIndex": { "address": "X-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5", "utxo": "21jG2RfqyHUUgkTLe2tUp6ETGLriSDTW3th8JXFbPRNiSZ11jK" }, "encoding": "hex" }, "id": 1 } ``` Since `numFetched` is less than `limit`, we know that we are done fetching UTXOs and don’t need to call this method again. Suppose we want to fetch the UTXOs exported from the P Chain to the X Chain in order to build an ImportTx. Then we need to call GetUTXOs with the `sourceChain` argument in order to retrieve the atomic UTXOs: ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"avm.getUTXOs", "params" :{ "addresses":["X-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5", "X-avax1d09qn852zcy03sfc9hay2llmn9hsgnw4tp3dv6"], "limit":5, "sourceChain": "P", "encoding": "hex" } }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/X ``` This gives response: ```json { "jsonrpc": "2.0", "result": { "numFetched": "1", "utxos": [ "0x00001f989ffaf18a18a59bdfbf209342aa61c6a62a67e8639d02bb3c8ddab315c6fa0000000039c33a499ce4c33a3b09cdd2cfa01ae70dbf2d18b2d7d168524440e55d550088000000070011c304cd7eb5c0000000000000000000000001000000013cb7d3842e8cee6a0ebd09f1fe884f6861e1b29c83497819" ], "endIndex": { "address": "X-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5", "utxo": "2Sz2XwRYqUHwPeiKoRnZ6ht88YqzAF1SQjMYZQQaB5wBFkAqST" }, "encoding": "hex" }, "id": 1 } ``` ### `avm.issueTx` Send a signed transaction to the network. `encoding` specifies the format of the signed transaction. 
Can only be `hex` when a value is provided. **Signature:** ```sh avm.issueTx({ tx: string, encoding: string, //optional }) -> { txID: string } ``` **Example Call:** ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" : 1, "method" :"avm.issueTx", "params" :{ "tx":"0x00000009de31b4d8b22991d51aa6aa1fc733f23a851a8c9400000000000186a0000000005f041280000000005f9ca900000030390000000000000001fceda8f90fcb5d30614b99d79fc4baa29307762668f16eb0259a57c2d3b78c875c86ec2045792d4df2d926c40f829196e0bb97ee697af71f5b0a966dabff749634c8b729855e937715b0e44303fd1014daedc752006011b730", "encoding": "hex" } }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/X ``` **Example Response:** ```json { "jsonrpc": "2.0", "id": 1, "result": { "txID": "NUPLwbt2hsYxpQg4H2o451hmTWQ4JZx2zMzM4SinwtHgAdX1JLPHXvWSXEnpecStLj" } } ``` ### `wallet.issueTx` Send a signed transaction to the network and assume the TX will be accepted. `encoding` specifies the format of the signed transaction. Can only be `hex` when a value is provided. This call is made to the wallet API endpoint: `/ext/bc/X/wallet` Endpoint deprecated as of [**v1.9.12**](https://github.com/ava-labs/avalanchego/releases/tag/v1.9.12). 
**Signature:** ```sh wallet.issueTx({ tx: string, encoding: string, //optional }) -> { txID: string } ``` **Example Call:** ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" : 1, "method" :"wallet.issueTx", "params" :{ "tx":"0x00000009de31b4d8b22991d51aa6aa1fc733f23a851a8c9400000000000186a0000000005f041280000000005f9ca900000030390000000000000001fceda8f90fcb5d30614b99d79fc4baa29307762668f16eb0259a57c2d3b78c875c86ec2045792d4df2d926c40f829196e0bb97ee697af71f5b0a966dabff749634c8b729855e937715b0e44303fd1014daedc752006011b730", "encoding": "hex" } }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/X/wallet ``` **Example Response:** ```json { "jsonrpc": "2.0", "id": 1, "result": { "txID": "NUPLwbt2hsYxpQg4H2o451hmTWQ4JZx2zMzM4SinwtHgAdX1JLPHXvWSXEnpecStLj" } } ``` # X-Chain RPC (/docs/rpcs/x-chain/rpc) --- title: "X-Chain RPC" description: "This page is an overview of the X-Chain RPC associated with AvalancheGo." edit_url: https://github.com/ava-labs/avalanchego/edit/master/vms/avm/service.md --- The [X-Chain](https://build.avax.network/docs/primary-network#x-chain), Avalanche's native platform for creating and trading assets, is an instance of the Avalanche Virtual Machine (AVM). This API allows clients to create and trade assets on the X-Chain and other instances of the AVM. ## Format This API uses the `json 2.0` RPC format. For more information on making JSON RPC calls, see [here](https://build.avax.network/docs/rpcs/other/guides/issuing-api-calls). ## Endpoints `/ext/bc/X` to interact with the X-Chain. `/ext/bc/blockchainID` to interact with other AVM instances, where `blockchainID` is the ID of a blockchain running the AVM. ## Methods ### `avm.getAllBalances` Deprecated as of [**v1.9.12**](https://github.com/ava-labs/avalanchego/releases/tag/v1.9.12). Get the balances of all assets controlled by a given address. 
**Signature:**

```sh
avm.getAllBalances({address:string}) -> {
    balances: []{
        asset: string,
        balance: int
    }
}
```

**Example Call:**

```sh
curl -X POST --data '{
    "jsonrpc":"2.0",
    "id"     : 1,
    "method" :"avm.getAllBalances",
    "params" :{
        "address":"X-avax1c79e0dd0susp7dc8udq34jgk2yvve7hapvdyht"
    }
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/X
```

**Example Response:**

```json
{
  "jsonrpc": "2.0",
  "result": {
    "balances": [
      {
        "asset": "AVAX",
        "balance": "102"
      },
      {
        "asset": "2sdnziCz37Jov3QSNMXcFRGFJ1tgauaj6L7qfk7yUcRPfQMC79",
        "balance": "10000"
      }
    ]
  },
  "id": 1
}
```

### `avm.getAssetDescription`

Get information about an asset.

**Signature:**

```sh
avm.getAssetDescription({assetID: string}) -> {
    assetID: string,
    name: string,
    symbol: string,
    denomination: int
}
```

- `assetID` is the ID of the asset for which the information is requested.
- `name` is the asset's human-readable, not necessarily unique name.
- `symbol` is the asset's symbol.
- `denomination` determines how balances of this asset are displayed by user interfaces. If the denomination is 0, 100 units of this asset are displayed as 100. If the denomination is 1, 100 units of this asset are displayed as 10.0. If the denomination is 2, 100 units of this asset are displayed as 1.00, and so on.

The AssetID for AVAX differs depending on the network you are on.

Mainnet: `FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z`

Testnet: `U8iRqJoiJm8xZHAacmvYyZVwqQx6uDNtQeP3CQ6fcgQk3JqnK`

For finding the `assetID` of other assets, this [info] might be useful. Also, `avm.getUTXOs` returns the `assetID` in its output.
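Going the other direction, a client that accepts human-readable amounts must scale them into raw integer units using the asset's `denomination`. A minimal sketch (`to_raw_units` is a hypothetical helper; rejecting over-precise input is an assumption about reasonable client behavior, not an API rule):

```python
from decimal import Decimal

def to_raw_units(display: str, denomination: int) -> int:
    """Convert a human-readable amount (e.g. "1.53" at denomination 9)
    into the raw integer units the API works with."""
    scaled = Decimal(display) * (Decimal(10) ** denomination)
    if scaled != scaled.to_integral_value():
        # e.g. "0.001" at denomination 1 cannot be represented exactly
        raise ValueError("amount has more precision than the asset's denomination")
    return int(scaled)

print(to_raw_units("1.53", 9))  # 1530000000
print(to_raw_units("10.0", 1))  # 100
```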
**Example Call:**

```sh
curl -X POST --data '{
    "jsonrpc":"2.0",
    "id"     :1,
    "method" :"avm.getAssetDescription",
    "params" :{
        "assetID" :"FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z"
    }
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/X
```

**Example Response:**

```json
{
  "jsonrpc": "2.0",
  "result": {
    "assetID": "FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z",
    "name": "Avalanche",
    "symbol": "AVAX",
    "denomination": "9"
  },
  "id": 1
}
```

### `avm.getBalance`

Deprecated as of [**v1.9.12**](https://github.com/ava-labs/avalanchego/releases/tag/v1.9.12).

Get the balance of an asset controlled by a given address.

**Signature:**

```sh
avm.getBalance({
    address: string,
    assetID: string
}) -> {balance: int}
```

- `address` is the owner of the asset
- `assetID` is the ID of the asset for which the balance is requested

**Example Call:**

```sh
curl -X POST --data '{
    "jsonrpc":"2.0",
    "id"     : 1,
    "method" :"avm.getBalance",
    "params" :{
        "address":"X-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5",
        "assetID": "2pYGetDWyKdHxpFxh2LHeoLNCH6H5vxxCxHQtFnnFaYxLsqtHC"
    }
}' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/X
```

**Example Response:**

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "balance": "299999999999900",
    "utxoIDs": [
      {
        "txID": "WPQdyLNqHfiEKp4zcCpayRHYDVYuh1hqs9c1RqgZXS4VPgdvo",
        "outputIndex": 1
      }
    ]
  }
}
```

### `avm.getBlock`

Returns the block with the given ID.

**Signature:**

```sh
avm.getBlock({
    blockID: string
    encoding: string // optional
}) -> {
    block: string,
    encoding: string
}
```

**Request:**

- `blockID` is the block ID. It should be in cb58 format.
- `encoding` is the encoding format to use. Can be either `hex` or `json`. Defaults to `hex`.

**Response:**

- `block` is the block encoded in the requested `encoding`.
- `encoding` is the encoding format used.
#### Hex Example **Example Call:** ```sh curl -X POST --data '{ "jsonrpc": "2.0", "method": "avm.getBlock", "params": { "blockID": "tXJ4xwmR8soHE6DzRNMQPtiwQvuYsHn6eLLBzo2moDqBquqy6", "encoding": "hex" }, "id": 1 }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/X ``` **Example Response:** ```json { "jsonrpc": "2.0", "result": { "block": "0x00000000002000000000641ad33ede17f652512193721df87994f783ec806bb5640c39ee73676caffcc3215e0651000000000049a80a000000010000000e0000000100000000000000000000000000000000000000000000000000000000000000000000000121e67317cbc4be2aeb00677ad6462778a8f52274b9d605df2591b23027a87dff000000070000002e1a2a3910000000000000000000000001000000015cf998275803a7277926912defdf177b2e97b0b400000001e0d825c5069a7336671dd27eaa5c7851d2cf449e7e1cdc469c5c9e5a953955950000000021e67317cbc4be2aeb00677ad6462778a8f52274b9d605df2591b23027a87dff000000050000008908223b680000000100000000000000005e45d02fcc9e585544008f1df7ae5c94bf7f0f2600000000641ad3b600000000642d48b60000005aedf802580000000121e67317cbc4be2aeb00677ad6462778a8f52274b9d605df2591b23027a87dff000000070000005aedf80258000000000000000000000001000000015cf998275803a7277926912defdf177b2e97b0b40000000b000000000000000000000001000000012892441ba9a160bcdc596dcd2cc3ad83c3493589000000010000000900000001adf2237a5fe2dfd906265e8e14274aa7a7b2ee60c66213110598ba34fb4824d74f7760321c0c8fb1e8d3c5e86909248e48a7ae02e641da5559351693a8a1939800286d4fa2", "encoding": "hex" }, "id": 1 } ``` ### `avm.getBlockByHeight` Returns the block at the given height. **Signature:** ```sh avm.getBlockByHeight({ height: string encoding: string // optional }) -> { block: string, encoding: string } ``` **Request:** - `height` is the block height. It should be in `string` format. - `encoding` is the encoding format to use. Can be either `hex` or `json`. Defaults to `hex`. **Response:** - `block` is the block, encoded to `encoding`. - `encoding` is the encoding format used.
#### Hex Example **Example Call:** ```sh curl -X POST --data '{ "jsonrpc": "2.0", "method": "avm.getBlockByHeight", "params": { "height": "275686313486", "encoding": "hex" }, "id": 1 }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/X ``` **Example Response:** ```json { "jsonrpc": "2.0", "result": { "block": "0x00000000002000000000642f6739d4efcdd07e4d4919a7fc2020b8a0f081dd64c262aaace5a6dad22be0b55fec0700000000004db9e100000001000000110000000100000000000000000000000000000000000000000000000000000000000000000000000121e67317cbc4be2aeb00677ad6462778a8f52274b9d605df2591b23027a87dff000000070000005c6ece390000000000000000000000000100000001930ab7bf5018bfc6f9435c8b15ba2fe1e619c0230000000000000000ed5f38341e436e5d46e2bb00b45d62ae97d1b050c64bc634ae10626739e35c4b00000001c6dda861341665c3b555b46227fb5e56dc0a870c5482809349f04b00348af2a80000000021e67317cbc4be2aeb00677ad6462778a8f52274b9d605df2591b23027a87dff000000050000005c6edd7b40000000010000000000000001000000090000000178688f4d5055bd8733801f9b52793da885bef424c90526c18e4dd97f7514bf6f0c3d2a0e9a5ea8b761bc41902eb4902c34ef034c4d18c3db7c83c64ffeadd93600731676de", "encoding": "hex" }, "id": 1 } ``` ### `avm.getHeight` Returns the height of the last accepted block. **Signature:** ```sh avm.getHeight() -> { height: uint64, } ``` **Example Call:** ```sh curl -X POST --data '{ "jsonrpc": "2.0", "method": "avm.getHeight", "params": {}, "id": 1 }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/X ``` **Example Response:** ```json { "jsonrpc": "2.0", "result": { "height": "5094088" }, "id": 1 } ``` ### `avm.getTx` Returns the specified transaction. The `encoding` parameter sets the format of the returned transaction. Can be either `"hex"` or `"json"`. Defaults to `"hex"`. 
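Every curl call on this page sends the same JSON-RPC 2.0 envelope: `jsonrpc`, `id`, `method`, and `params`. A small Python sketch that builds that request body (the `build_rpc_payload` helper is illustrative, not part of the API):

```python
import json

def build_rpc_payload(method, params=None, req_id=1):
    """Build the JSON-RPC 2.0 request body used by every method on this page."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req_id,
        "method": method,
        "params": params or {},
    })

body = build_rpc_payload("avm.getTx", {
    "txID": "2oJCbb8pfdxEHAf9A8CdN4Afj9VSR3xzyzNkf8tDv7aM1sfNFL",
    "encoding": "json",
})
# POST `body` to http://127.0.0.1:9650/ext/bc/X with
# Content-Type: application/json to issue the call.
```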
**Signature:** ```sh avm.getTx({ txID: string, encoding: string, //optional }) -> { tx: string, encoding: string, } ``` **Example Call:** ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"avm.getTx", "params" :{ "txID":"2oJCbb8pfdxEHAf9A8CdN4Afj9VSR3xzyzNkf8tDv7aM1sfNFL", "encoding": "json" } }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/X ``` **Example Response:** ```json { "jsonrpc": "2.0", "result": { "tx": { "unsignedTx": { "networkID": 1, "blockchainID": "2oYMBNV4eNHyqk2fjjV5nVQLDbtmNJzq5s3qs3Lo6ftnC6FByM", "outputs": [], "inputs": [ { "txID": "2jbZUvi6nHy3Pgmk8xcMpSg5cW6epkPqdKkHSCweb4eRXtq4k9", "outputIndex": 1, "assetID": "FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z", "fxID": "spdxUxVJQbX85MGxMHbKw1sHxMnSqJ3QBzDyDYEP3h6TLuxqQ", "input": { "amount": 2570382395, "signatureIndices": [0] } } ], "memo": "0x", "destinationChain": "11111111111111111111111111111111LpoYY", "exportedOutputs": [ { "assetID": "FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z", "fxID": "spdxUxVJQbX85MGxMHbKw1sHxMnSqJ3QBzDyDYEP3h6TLuxqQ", "output": { "addresses": ["X-avax1tnuesf6cqwnjw7fxjyk7lhch0vhf0v95wj5jvy"], "amount": 2569382395, "locktime": 0, "threshold": 1 } } ] }, "credentials": [ { "fxID": "spdxUxVJQbX85MGxMHbKw1sHxMnSqJ3QBzDyDYEP3h6TLuxqQ", "credential": { "signatures": [ "0x46ebcbcfbee3ece1fd15015204045cf3cb77f42c48d0201fc150341f91f086f177cfca8894ca9b4a0c55d6950218e4ea8c01d5c4aefb85cd7264b47bd57d224400" ] } } ], "id": "2oJCbb8pfdxEHAf9A8CdN4Afj9VSR3xzyzNkf8tDv7aM1sfNFL" }, "encoding": "json" }, "id": 1 } ``` Where: - `credentials` is a list of this transaction's credentials. Each credential proves that this transaction's creator is allowed to consume one of this transaction's inputs. Each credential is a list of signatures. - `unsignedTx` is the non-signature portion of the transaction. - `networkID` is the ID of the network this transaction happened on. (Avalanche Mainnet is `1`.) 
- `blockchainID` is the ID of the blockchain this transaction happened on. (Avalanche Mainnet X-Chain is `2oYMBNV4eNHyqk2fjjV5nVQLDbtmNJzq5s3qs3Lo6ftnC6FByM`.) - Each element of `outputs` is an output (UTXO) of this transaction that is not being exported to another chain. - Each element of `inputs` is an input of this transaction which has not been imported from another chain. - Import Transactions have additional fields `sourceChain` and `importedInputs`, which specify the blockchain ID that assets are being imported from, and the inputs that are being imported. - Export Transactions have additional fields `destinationChain` and `exportedOutputs`, which specify the blockchain ID that assets are being exported to, and the UTXOs that are being exported. An output contains: - `assetID`: The ID of the asset being transferred. (The Mainnet AVAX ID is `FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z`.) - `fxID`: The ID of the FX this output uses. - `output`: The FX-specific contents of this output. Most outputs use the secp256k1 FX and look like this: ```json { "assetID": "FvwEAhmxKfeiG8SnEvq42hc6whRyY3EFYAvebMqDNDGCgxN5Z", "fxID": "spdxUxVJQbX85MGxMHbKw1sHxMnSqJ3QBzDyDYEP3h6TLuxqQ", "output": { "addresses": ["X-avax126rd3w35xwkmj8670zvf7y5r8k36qa9z9803wm"], "amount": 1530084210, "locktime": 0, "threshold": 1 } } ``` The above output can be consumed after Unix time `locktime` by a transaction that has signatures from `threshold` of the addresses in `addresses`. ### `avm.getTxFee` Get the fees of the network. **Signature**: ```sh avm.getTxFee() -> { txFee: uint64, createAssetTxFee: uint64, } ``` - `txFee` is the default fee for making transactions. - `createAssetTxFee` is the fee for creating a new asset. All fees are denominated in nAVAX.
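Since 1 AVAX is 10^9 nAVAX (AVAX has denomination 9), converting the returned fees to AVAX is a single division; a quick sketch:

```python
NAVAX_PER_AVAX = 1_000_000_000  # 1 AVAX = 10^9 nAVAX

def navax_to_avax(navax: int) -> float:
    """Convert a fee amount from nAVAX (API units) to AVAX."""
    return navax / NAVAX_PER_AVAX

print(navax_to_avax(1_000_000))    # the default txFee in AVAX
print(navax_to_avax(10_000_000))   # the createAssetTxFee in AVAX
```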
**Example Call**: ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" : 1, "method" :"avm.getTxFee" }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/X ``` **Example Response**: ```json { "jsonrpc": "2.0", "result": { "txFee": "1000000", "createAssetTxFee": "10000000" } } ``` ### `avm.getTxStatus` Deprecated as of **v1.10.0**. Get the status of a transaction sent to the network. **Signature:** ```sh avm.getTxStatus({txID: string}) -> {status: string} ``` `status` is one of: - `Accepted`: The transaction is (or will be) accepted by every node - `Processing`: The transaction is being voted on by this node - `Rejected`: The transaction will never be accepted by any node in the network - `Unknown`: The transaction hasn’t been seen by this node **Example Call:** ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"avm.getTxStatus", "params" :{ "txID":"2QouvFWUbjuySRxeX5xMbNCuAaKWfbk5FeEa2JmoF85RKLk2dD" } }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/X ``` **Example Response:** ```json { "jsonrpc": "2.0", "id": 1, "result": { "status": "Accepted" } } ``` ### `avm.getUTXOs` Gets the UTXOs that reference a given address. If `sourceChain` is specified, then it will retrieve the atomic UTXOs exported from that chain to the X-Chain. **Signature:** ```sh avm.getUTXOs({ addresses: []string, limit: int, //optional startIndex: { //optional address: string, utxo: string }, sourceChain: string, //optional encoding: string //optional }) -> { numFetched: int, utxos: []string, endIndex: { address: string, utxo: string }, sourceChain: string, //optional encoding: string } ``` - `utxos` is a list of UTXOs such that each UTXO references at least one address in `addresses`. - At most `limit` UTXOs are returned. If `limit` is omitted or greater than 1024, it is set to 1024. - This method supports pagination. `endIndex` denotes the last UTXO returned. To get the next set of UTXOs, use the value of `endIndex` as `startIndex` in the next call.
- If `startIndex` is omitted, will fetch all UTXOs up to `limit`. - When using pagination (when `startIndex` is provided), UTXOs are not guaranteed to be unique across multiple calls. That is, a UTXO may appear in the result of the first call, and then again in the second call. - When using pagination, consistency is not guaranteed across multiple calls. That is, the UTXO set of the addresses may have changed between calls. - `encoding` sets the format for the returned UTXOs. Can only be `hex` when a value is provided. #### **Example** Suppose we want all UTXOs that reference at least one of `X-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5` and `X-avax1d09qn852zcy03sfc9hay2llmn9hsgnw4tp3dv6`. ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"avm.getUTXOs", "params" :{ "addresses":["X-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5", "X-avax1d09qn852zcy03sfc9hay2llmn9hsgnw4tp3dv6"], "limit":5, "encoding": "hex" } }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/X ``` This gives response: ```json { "jsonrpc": "2.0", "result": { "numFetched": "5", "utxos": [ "0x0000a195046108a85e60f7a864bb567745a37f50c6af282103e47cc62f036cee404700000000345aa98e8a990f4101e2268fab4c4e1f731c8dfbcffa3a77978686e6390d624f000000070000000000000001000000000000000000000001000000018ba98dabaebcd83056799841cfbc567d8b10f216c1f01765", "0x0000ae8b1b94444eed8de9a81b1222f00f1b4133330add23d8ac288bffa98b85271100000000345aa98e8a990f4101e2268fab4c4e1f731c8dfbcffa3a77978686e6390d624f000000070000000000000001000000000000000000000001000000018ba98dabaebcd83056799841cfbc567d8b10f216473d042a", "0x0000731ce04b1feefa9f4291d869adc30a33463f315491e164d89be7d6d2d7890cfc00000000345aa98e8a990f4101e2268fab4c4e1f731c8dfbcffa3a77978686e6390d624f000000070000000000000001000000000000000000000001000000018ba98dabaebcd83056799841cfbc567d8b10f21600dd3047", 
"0x0000b462030cc4734f24c0bc224cf0d16ee452ea6b67615517caffead123ab4fbf1500000000345aa98e8a990f4101e2268fab4c4e1f731c8dfbcffa3a77978686e6390d624f000000070000000000000001000000000000000000000001000000018ba98dabaebcd83056799841cfbc567d8b10f216c71b387e", "0x000054f6826c39bc957c0c6d44b70f961a994898999179cc32d21eb09c1908d7167b00000000345aa98e8a990f4101e2268fab4c4e1f731c8dfbcffa3a77978686e6390d624f000000070000000000000001000000000000000000000001000000018ba98dabaebcd83056799841cfbc567d8b10f2166290e79d" ], "endIndex": { "address": "X-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5", "utxo": "kbUThAUfmBXUmRgTpgD6r3nLj7rJUGho6xyht5nouNNypH45j" }, "encoding": "hex" }, "id": 1 } ``` Since `numFetched` is the same as `limit`, we can tell that there may be more UTXOs that were not fetched. We call the method again, this time with `startIndex`: ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :2, "method" :"avm.getUTXOs", "params" :{ "addresses":["X-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5"], "limit":5, "startIndex": { "address": "X-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5", "utxo": "kbUThAUfmBXUmRgTpgD6r3nLj7rJUGho6xyht5nouNNypH45j" }, "encoding": "hex" } }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/X ``` This gives response: ```json { "jsonrpc": "2.0", "result": { "numFetched": "4", "utxos": [ "0x000020e182dd51ee4dcd31909fddd75bb3438d9431f8e4efce86a88a684f5c7fa09300000000345aa98e8a990f4101e2268fab4c4e1f731c8dfbcffa3a77978686e6390d624f000000070000000000000001000000000000000000000001000000018ba98dabaebcd83056799841cfbc567d8b10f21662861d59", "0x0000a71ba36c475c18eb65dc90f6e85c4fd4a462d51c5de3ac2cbddf47db4d99284e00000000345aa98e8a990f4101e2268fab4c4e1f731c8dfbcffa3a77978686e6390d624f000000070000000000000001000000000000000000000001000000018ba98dabaebcd83056799841cfbc567d8b10f21665f6f83f", 
"0x0000925424f61cb13e0fbdecc66e1270de68de9667b85baa3fdc84741d048daa69fa00000000345aa98e8a990f4101e2268fab4c4e1f731c8dfbcffa3a77978686e6390d624f000000070000000000000001000000000000000000000001000000018ba98dabaebcd83056799841cfbc567d8b10f216afecf76a", "0x000082f30327514f819da6009fad92b5dba24d27db01e29ad7541aa8e6b6b554615c00000000345aa98e8a990f4101e2268fab4c4e1f731c8dfbcffa3a77978686e6390d624f000000070000000000000001000000000000000000000001000000018ba98dabaebcd83056799841cfbc567d8b10f216779c2d59" ], "endIndex": { "address": "X-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5", "utxo": "21jG2RfqyHUUgkTLe2tUp6ETGLriSDTW3th8JXFbPRNiSZ11jK" }, "encoding": "hex" }, "id": 1 } ``` Since `numFetched` is less than `limit`, we know that we are done fetching UTXOs and don’t need to call this method again. Suppose we want to fetch the UTXOs exported from the P Chain to the X Chain in order to build an ImportTx. Then we need to call GetUTXOs with the `sourceChain` argument in order to retrieve the atomic UTXOs: ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" :1, "method" :"avm.getUTXOs", "params" :{ "addresses":["X-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5", "X-avax1d09qn852zcy03sfc9hay2llmn9hsgnw4tp3dv6"], "limit":5, "sourceChain": "P", "encoding": "hex" } }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/X ``` This gives response: ```json { "jsonrpc": "2.0", "result": { "numFetched": "1", "utxos": [ "0x00001f989ffaf18a18a59bdfbf209342aa61c6a62a67e8639d02bb3c8ddab315c6fa0000000039c33a499ce4c33a3b09cdd2cfa01ae70dbf2d18b2d7d168524440e55d550088000000070011c304cd7eb5c0000000000000000000000001000000013cb7d3842e8cee6a0ebd09f1fe884f6861e1b29c83497819" ], "endIndex": { "address": "X-avax18jma8ppw3nhx5r4ap8clazz0dps7rv5ukulre5", "utxo": "2Sz2XwRYqUHwPeiKoRnZ6ht88YqzAF1SQjMYZQQaB5wBFkAqST" }, "encoding": "hex" }, "id": 1 } ``` ### `avm.issueTx` Send a signed transaction to the network. `encoding` specifies the format of the signed transaction. 
Can only be `hex` when a value is provided. **Signature:** ```sh avm.issueTx({ tx: string, encoding: string, //optional }) -> { txID: string } ``` **Example Call:** ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" : 1, "method" :"avm.issueTx", "params" :{ "tx":"0x00000009de31b4d8b22991d51aa6aa1fc733f23a851a8c9400000000000186a0000000005f041280000000005f9ca900000030390000000000000001fceda8f90fcb5d30614b99d79fc4baa29307762668f16eb0259a57c2d3b78c875c86ec2045792d4df2d926c40f829196e0bb97ee697af71f5b0a966dabff749634c8b729855e937715b0e44303fd1014daedc752006011b730", "encoding": "hex" } }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/X ``` **Example Response:** ```json { "jsonrpc": "2.0", "id": 1, "result": { "txID": "NUPLwbt2hsYxpQg4H2o451hmTWQ4JZx2zMzM4SinwtHgAdX1JLPHXvWSXEnpecStLj" } } ``` ### `wallet.issueTx` Send a signed transaction to the network and assume the TX will be accepted. `encoding` specifies the format of the signed transaction. Can only be `hex` when a value is provided. This call is made to the wallet API endpoint: `/ext/bc/X/wallet` Endpoint deprecated as of [**v1.9.12**](https://github.com/ava-labs/avalanchego/releases/tag/v1.9.12). 
**Signature:** ```sh wallet.issueTx({ tx: string, encoding: string, //optional }) -> { txID: string } ``` **Example Call:** ```sh curl -X POST --data '{ "jsonrpc":"2.0", "id" : 1, "method" :"wallet.issueTx", "params" :{ "tx":"0x00000009de31b4d8b22991d51aa6aa1fc733f23a851a8c9400000000000186a0000000005f041280000000005f9ca900000030390000000000000001fceda8f90fcb5d30614b99d79fc4baa29307762668f16eb0259a57c2d3b78c875c86ec2045792d4df2d926c40f829196e0bb97ee697af71f5b0a966dabff749634c8b729855e937715b0e44303fd1014daedc752006011b730", "encoding": "hex" } }' -H 'content-type:application/json;' 127.0.0.1:9650/ext/bc/X/wallet ``` **Example Response:** ```json { "jsonrpc": "2.0", "id": 1, "result": { "txID": "NUPLwbt2hsYxpQg4H2o451hmTWQ4JZx2zMzM4SinwtHgAdX1JLPHXvWSXEnpecStLj" } } ``` # Transaction Format (/docs/rpcs/x-chain/txn-format) --- title: Transaction Format --- This file is meant to be the single source of truth for how we serialize transactions in the Avalanche Virtual Machine (AVM). This document uses the [primitive serialization](/docs/rpcs/other/standards/serialization-primitives) format for packing and [secp256k1](/docs/rpcs/other/standards/cryptographic-primitives#secp256k1-addresses) for cryptographic user identification. ## Codec ID Some data is prepended with a codec ID (uint16) that denotes how the data should be deserialized. Right now, the only valid codec ID is 0 (`0x00 0x00`). ## Transferable Output Transferable outputs wrap an output with an asset ID. ### What Transferable Output Contains A transferable output contains an `AssetID` and an [`Output`](/docs/rpcs/x-chain/txn-format#outputs). - **`AssetID`** is a 32-byte array that defines which asset this output references. - **`Output`** is an output, as defined [below](/docs/rpcs/x-chain/txn-format#outputs).
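As the field list above shows, packing a transferable output is simply the 32-byte asset ID followed by the serialized output it wraps; a minimal Python sketch (the function name is ours for illustration):

```python
def pack_transferable_output(asset_id: bytes, output: bytes) -> bytes:
    """A transferable output is the 32-byte asset ID followed
    immediately by the serialized output it wraps."""
    assert len(asset_id) == 32, "asset IDs are 32-byte arrays"
    return asset_id + output

# The asset ID and output bytes from this section's worked example.
asset_id = bytes(range(32))  # 0x000102...1f
output = bytes.fromhex(
    "000000070000000000003039000000000000d431000000010000000251025c61"
    "fbcfc078f69334f834be6dd26d55a955c3344128e060128ede3523a24a461c8943ab0859"
)
packed = pack_transferable_output(asset_id, output)
```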
Outputs have four possible types: [`SECP256K1TransferOutput`](/docs/rpcs/x-chain/txn-format#secp256k1-transfer-output), [`SECP256K1MintOutput`](/docs/rpcs/x-chain/txn-format#secp256k1-mint-output), [`NFTTransferOutput`](/docs/rpcs/x-chain/txn-format#nft-transfer-output) and [`NFTMintOutput`](/docs/rpcs/x-chain/txn-format#nft-mint-output). ### Gantt Transferable Output Specification ```text +----------+----------+-------------------------+ | asset_id : [32]byte | 32 bytes | +----------+----------+-------------------------+ | output : Output | size(output) bytes | +----------+----------+-------------------------+ | 32 + size(output) bytes | +-------------------------+ ``` ### Proto Transferable Output Specification ```text message TransferableOutput { bytes asset_id = 1; // 32 bytes Output output = 2; // size(output) } ``` ### Transferable Output Example Let's make a transferable output: - `AssetID`: `0x000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f` - `Output`: `"Example SECP256K1 Transfer Output from below"` ```text [ AssetID <- 0x000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f Output <- 0x000000070000000000003039000000000000d431000000010000000251025c61fbcfc078f69334f834be6dd26d55a955c3344128e060128ede3523a24a461c8943ab0859, ] = [ // assetID: 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f, // output: 0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x30, 0x39, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xd4, 0x31, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x02, 0x51, 0x02, 0x5c, 0x61, 0xfb, 0xcf, 0xc0, 0x78, 0xf6, 0x93, 0x34, 0xf8, 0x34, 0xbe, 0x6d, 0xd2, 0x6d, 0x55, 0xa9, 0x55, 0xc3, 0x34, 0x41, 0x28, 0xe0, 0x60, 0x12, 0x8e, 0xde, 0x35, 0x23, 0xa2, 0x4a, 0x46, 0x1c, 0x89, 0x43, 0xab, 0x08, 0x59, ] ``` ## Transferable Input Transferable inputs describe a specific UTXO with a provided 
transfer input. ### What Transferable Input Contains A transferable input contains a `TxID`, `UTXOIndex` `AssetID` and an `Input`. - **`TxID`** is a 32-byte array that defines which transaction this input is consuming an output from. Transaction IDs are calculated by taking sha256 of the bytes of the signed transaction. - **`UTXOIndex`** is an int that defines which UTXO this input is consuming in the specified transaction. - **`AssetID`** is a 32-byte array that defines which asset this input references. - **`Input`** is an input, as defined below. This can currently only be a [SECP256K1 transfer input](/docs/rpcs/x-chain/txn-format#secp256k1-transfer-input) ### Gantt Transferable Input Specification ```text +------------+----------+------------------------+ | tx_id : [32]byte | 32 bytes | +------------+----------+------------------------+ | utxo_index : int | 04 bytes | +------------+----------+------------------------+ | asset_id : [32]byte | 32 bytes | +------------+----------+------------------------+ | input : Input | size(input) bytes | +------------+----------+------------------------+ | 68 + size(input) bytes | +------------------------+ ``` ### Proto Transferable Input Specification ```text message TransferableInput { bytes tx_id = 1; // 32 bytes uint32 utxo_index = 2; // 04 bytes bytes asset_id = 3; // 32 bytes Input input = 4; // size(input) } ``` ### Transferable Input Example Let's make a transferable input: - `TxID`: `0xf1e1d1c1b1a191817161514131211101f0e0d0c0b0a090807060504030201000` - `UTXOIndex`: `5` - `AssetID`: `0x000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f` - `Input`: `"Example SECP256K1 Transfer Input from below"` ```text [ TxID <- 0xf1e1d1c1b1a191817161514131211101f0e0d0c0b0a090807060504030201000 UTXOIndex <- 0x00000005 AssetID <- 0x000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f Input <- 0x0000000500000000075bcd15000000020000000700000003 ] = [ // txID: 0xf1, 0xe1, 0xd1, 0xc1, 0xb1, 0xa1, 0x91, 0x81, 
0x71, 0x61, 0x51, 0x41, 0x31, 0x21, 0x11, 0x01, 0xf0, 0xe0, 0xd0, 0xc0, 0xb0, 0xa0, 0x90, 0x80, 0x70, 0x60, 0x50, 0x40, 0x30, 0x20, 0x10, 0x00, // utxoIndex: 0x00, 0x00, 0x00, 0x05, // assetID: 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f, // input: 0x00, 0x00, 0x00, 0x05, 0x00, 0x00, 0x00, 0x00, 0x07, 0x5b, 0xcd, 0x15, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x07 ] ``` ## Transferable Op Transferable operations describe a set of UTXOs with a provided transfer operation. Only one Asset ID is able to be referenced per operation. ### What Transferable Op Contains A transferable operation contains an `AssetID`, `UTXOIDs`, and a `TransferOp`. - **`AssetID`** is a 32-byte array that defines which asset this operation changes. - **`UTXOIDs`** is an array of TxID-OutputIndex tuples. This array must be sorted in lexicographical order. - **`TransferOp`** is a [transferable operation object](/docs/rpcs/x-chain/txn-format#operations). 
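The `UTXOIDs` sort order can be sketched by comparing the serialized form of each (txID, index) pair bytewise; the exact comparison key here is our assumption for illustration, shown in Python:

```python
import struct

def utxo_id_sort_key(tx_id: bytes, utxo_index: int) -> bytes:
    # Serialize the UTXOID as 32-byte txID + 4-byte big-endian index,
    # then rely on Python's bytewise (lexicographic) bytes comparison.
    return tx_id + struct.pack(">I", utxo_index)

utxo_ids = [
    (bytes.fromhex("f1" * 32), 5),
    (bytes.fromhex("00" * 32), 7),
    (bytes.fromhex("00" * 32), 2),
]
utxo_ids.sort(key=lambda p: utxo_id_sort_key(*p))
# Pairs sharing a txID order by index; otherwise the txID bytes decide.
```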
### Gantt Transferable Op Specification ```text +-------------+------------+------------------------------+ | asset_id : [32]byte | 32 bytes | +-------------+------------+------------------------------+ | utxo_ids : []UTXOID | 4 + 36 * len(utxo_ids) bytes | +-------------+------------+------------------------------+ | transfer_op : TransferOp | size(transfer_op) bytes | +-------------+------------+------------------------------+ | 36 + 36 * len(utxo_ids) | | + size(transfer_op) bytes | +------------------------------+ ``` ### Proto Transferable Op Specification ```text message UTXOID { bytes tx_id = 1; // 32 bytes uint32 utxo_index = 2; // 04 bytes } message TransferableOp { bytes asset_id = 1; // 32 bytes repeated UTXOID utxo_ids = 2; // 4 + 36 * len(utxo_ids) bytes TransferOp transfer_op = 3; // size(transfer_op) } ``` ### Transferable Op Example Let's make a transferable operation: - `AssetID`: `0x000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f` - `UTXOIDs`: - `UTXOID`: - `TxID`: `0xf1e1d1c1b1a191817161514131211101f0e0d0c0b0a090807060504030201000` - `UTXOIndex`: `5` - `Op`: `"Example Transfer Op from below"` ```text [ AssetID <- 0x000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f UTXOIDs <- [ { TxID:0xf1e1d1c1b1a191817161514131211101f0e0d0c0b0a090807060504030201000 UTXOIndex:5 } ] Op <- 0x0000000d0000000200000003000000070000303900000003431100000000010000000251025c61fbcfc078f69334f834be6dd26d55a955c3344128e060128ede3523a24a461c8943ab0859 ] = [ // assetID: 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f, // number of utxoIDs: 0x00, 0x00, 0x00, 0x01, // txID: 0xf1, 0xe1, 0xd1, 0xc1, 0xb1, 0xa1, 0x91, 0x81, 0x71, 0x61, 0x51, 0x41, 0x31, 0x21, 0x11, 0x01, 0xf0, 0xe0, 0xd0, 0xc0, 0xb0, 0xa0, 0x90, 0x80, 0x70, 0x60, 0x50, 0x40, 0x30, 0x20, 0x10, 0x00, // utxoIndex: 0x00, 0x00, 0x00, 0x05, // op: 
0x00, 0x00, 0x00, 0x0d, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x30, 0x39, 0x00, 0x00, 0x00, 0x03, 0x43, 0x11, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x02, 0x51, 0x02, 0x5c, 0x61, 0xfb, 0xcf, 0xc0, 0x78, 0xf6, 0x93, 0x34, 0xf8, 0x34, 0xbe, 0x6d, 0xd2, 0x6d, 0x55, 0xa9, 0x55, 0xc3, 0x34, 0x41, 0x28, 0xe0, 0x60, 0x12, 0x8e, 0xde, 0x35, 0x23, 0xa2, 0x4a, 0x46, 0x1c, 0x89, 0x43, 0xab, 0x08, 0x59, ] ``` ## Outputs Outputs have four possible types: [`SECP256K1TransferOutput`](/docs/rpcs/x-chain/txn-format#secp256k1-transfer-output), [`SECP256K1MintOutput`](/docs/rpcs/x-chain/txn-format#secp256k1-mint-output), [`NFTTransferOutput`](/docs/rpcs/x-chain/txn-format#nft-transfer-output) and [`NFTMintOutput`](/docs/rpcs/x-chain/txn-format#nft-mint-output). ## SECP256K1 Mint Output A [secp256k1](/docs/rpcs/other/standards/cryptographic-primitives#secp256k1-addresses) mint output is an output that is owned by a collection of addresses. ### What SECP256K1 Mint Output Contains A secp256k1 Mint output contains a `TypeID`, `Locktime`, `Threshold`, and `Addresses`. - **`TypeID`** is the ID for this output type. It is `0x00000006`. - **`Locktime`** is a long that contains the Unix timestamp that this output can be spent after. The Unix timestamp is specific to the second. - **`Threshold`** is an int that names the number of unique signatures required to spend the output. Must be less than or equal to the length of **`Addresses`**. If **`Addresses`** is empty, must be 0. - **`Addresses`** is a list of unique addresses that correspond to the private keys that can be used to spend this output. Addresses must be sorted lexicographically. 
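The field layout above can be sketched as a byte-packing routine in Python (the function name is ours for illustration); it reproduces the byte string built in this section's worked example:

```python
import struct

def pack_secp256k1_mint_output(locktime, threshold, addresses):
    out = struct.pack(">I", 0x00000006)        # typeID
    out += struct.pack(">Q", locktime)         # locktime (long)
    out += struct.pack(">I", threshold)        # threshold
    out += struct.pack(">I", len(addresses))   # number of addresses
    for addr in sorted(addresses):             # must be sorted lexicographically
        assert len(addr) == 20
        out += addr
    return out

packed = pack_secp256k1_mint_output(
    locktime=54321,
    threshold=1,
    addresses=[
        bytes.fromhex("51025c61fbcfc078f69334f834be6dd26d55a955"),
        bytes.fromhex("c3344128e060128ede3523a24a461c8943ab0859"),
    ],
)
```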
### Gantt SECP256K1 Mint Output Specification ```text +-----------+------------+--------------------------------+ | type_id : int | 4 bytes | +-----------+------------+--------------------------------+ | locktime : long | 8 bytes | +-----------+------------+--------------------------------+ | threshold : int | 4 bytes | +-----------+------------+--------------------------------+ | addresses : [][20]byte | 4 + 20 * len(addresses) bytes | +-----------+------------+--------------------------------+ | 20 + 20 * len(addresses) bytes | +--------------------------------+ ``` ### Proto SECP256K1 Mint Output Specification ```text message SECP256K1MintOutput { uint32 typeID = 1; // 04 bytes uint64 locktime = 2; // 08 bytes uint32 threshold = 3; // 04 bytes repeated bytes addresses = 4; // 04 bytes + 20 bytes * len(addresses) } ``` ### SECP256K1 Mint Output Example Let's make a SECP256K1 mint output with: - **`TypeID`**: `6` - **`Locktime`**: `54321` - **`Threshold`**: `1` - **`Addresses`**: - `0x51025c61fbcfc078f69334f834be6dd26d55a955` - `0xc3344128e060128ede3523a24a461c8943ab0859` ```text [ TypeID <- 0x00000006 Locktime <- 0x000000000000d431 Threshold <- 0x00000001 Addresses <- [ 0x51025c61fbcfc078f69334f834be6dd26d55a955, 0xc3344128e060128ede3523a24a461c8943ab0859, ] ] = [ // typeID: 0x00, 0x00, 0x00, 0x06, // locktime: 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xd4, 0x31, // threshold: 0x00, 0x00, 0x00, 0x01, // number of addresses: 0x00, 0x00, 0x00, 0x02, // addrs[0]: 0x51, 0x02, 0x5c, 0x61, 0xfb, 0xcf, 0xc0, 0x78, 0xf6, 0x93, 0x34, 0xf8, 0x34, 0xbe, 0x6d, 0xd2, 0x6d, 0x55, 0xa9, 0x55, // addrs[1]: 0xc3, 0x34, 0x41, 0x28, 0xe0, 0x60, 0x12, 0x8e, 0xde, 0x35, 0x23, 0xa2, 0x4a, 0x46, 0x1c, 0x89, 0x43, 0xab, 0x08, 0x59, ] ``` ## SECP256K1 Transfer Output A [secp256k1](/docs/rpcs/other/standards/cryptographic-primitives#secp256k1-addresses) transfer output allows for sending a quantity of an asset to a collection of addresses after a specified Unix time. 
### What SECP256K1 Transfer Output Contains A secp256k1 transfer output contains a `TypeID`, `Amount`, `Locktime`, `Threshold`, and `Addresses`. - **`TypeID`** is the ID for this output type. It is `0x00000007`. - **`Amount`** is a long that specifies the quantity of the asset that this output owns. Must be positive. - **`Locktime`** is a long that contains the Unix timestamp that this output can be spent after. The Unix timestamp is specific to the second. - **`Threshold`** is an int that names the number of unique signatures required to spend the output. Must be less than or equal to the length of **`Addresses`**. If **`Addresses`** is empty, must be 0. - **`Addresses`** is a list of unique addresses that correspond to the private keys that can be used to spend this output. Addresses must be sorted lexicographically. ### Gantt SECP256K1 Transfer Output Specification ```text +-----------+------------+--------------------------------+ | type_id : int | 4 bytes | +-----------+------------+--------------------------------+ | amount : long | 8 bytes | +-----------+------------+--------------------------------+ | locktime : long | 8 bytes | +-----------+------------+--------------------------------+ | threshold : int | 4 bytes | +-----------+------------+--------------------------------+ | addresses : [][20]byte | 4 + 20 * len(addresses) bytes | +-----------+------------+--------------------------------+ | 28 + 20 * len(addresses) bytes | +--------------------------------+ ``` ### Proto SECP256K1 Transfer Output Specification ```text message SECP256K1TransferOutput { uint32 typeID = 1; // 04 bytes uint64 amount = 2; // 08 bytes uint64 locktime = 3; // 08 bytes uint32 threshold = 4; // 04 bytes repeated bytes addresses = 5; // 04 bytes + 20 bytes * len(addresses) } ``` ### SECP256K1 Transfer Output Example Let's make a secp256k1 transfer output with: - **`TypeID`**: `7` - **`Amount`**: `12345` - **`Locktime`**: `54321` - **`Threshold`**: `1` - **`Addresses`**: - 
`0x51025c61fbcfc078f69334f834be6dd26d55a955` - `0xc3344128e060128ede3523a24a461c8943ab0859` ```text [ TypeID <- 0x00000007 Amount <- 0x0000000000003039 Locktime <- 0x000000000000d431 Threshold <- 0x00000001 Addresses <- [ 0x51025c61fbcfc078f69334f834be6dd26d55a955, 0xc3344128e060128ede3523a24a461c8943ab0859, ] ] = [ // typeID: 0x00, 0x00, 0x00, 0x07, // amount: 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x30, 0x39, // locktime: 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xd4, 0x31, // threshold: 0x00, 0x00, 0x00, 0x01, // number of addresses: 0x00, 0x00, 0x00, 0x02, // addrs[0]: 0x51, 0x02, 0x5c, 0x61, 0xfb, 0xcf, 0xc0, 0x78, 0xf6, 0x93, 0x34, 0xf8, 0x34, 0xbe, 0x6d, 0xd2, 0x6d, 0x55, 0xa9, 0x55, // addrs[1]: 0xc3, 0x34, 0x41, 0x28, 0xe0, 0x60, 0x12, 0x8e, 0xde, 0x35, 0x23, 0xa2, 0x4a, 0x46, 0x1c, 0x89, 0x43, 0xab, 0x08, 0x59, ] ``` ## NFT Mint Output An NFT mint output is an NFT that is owned by a collection of addresses. ### What NFT Mint Output Contains An NFT Mint output contains a `TypeID`, `GroupID`, `Locktime`, `Threshold`, and `Addresses`. - **`TypeID`** is the ID for this output type. It is `0x0000000a`. - **`GroupID`** is an int that specifies the group this NFT is issued to. - **`Locktime`** is a long that contains the Unix timestamp that this output can be spent after. The Unix timestamp is specific to the second. - **`Threshold`** is an int that names the number of unique signatures required to spend the output. Must be less than or equal to the length of **`Addresses`**. If **`Addresses`** is empty, must be 0. - **`Addresses`** is a list of unique addresses that correspond to the private keys that can be used to spend this output. Addresses must be sorted lexicographically. 
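The NFT mint output fields above pack the same way as the secp256k1 outputs, with the `GroupID` following the type ID; a Python sketch (function name is ours) that reproduces this section's example bytes:

```python
import struct

def pack_nft_mint_output(group_id, locktime, threshold, addresses):
    out = struct.pack(">I", 0x0000000A)        # typeID
    out += struct.pack(">I", group_id)         # groupID
    out += struct.pack(">Q", locktime)         # locktime (long)
    out += struct.pack(">I", threshold)        # threshold
    out += struct.pack(">I", len(addresses))   # number of addresses
    for addr in sorted(addresses):             # must be sorted lexicographically
        assert len(addr) == 20
        out += addr
    return out

packed = pack_nft_mint_output(
    group_id=12345, locktime=54321, threshold=1,
    addresses=[
        bytes.fromhex("51025c61fbcfc078f69334f834be6dd26d55a955"),
        bytes.fromhex("c3344128e060128ede3523a24a461c8943ab0859"),
    ],
)
```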
### Gantt NFT Mint Output Specification ```text +-----------+------------+--------------------------------+ | type_id : int | 4 bytes | +-----------+------------+--------------------------------+ | group_id : int | 4 bytes | +-----------+------------+--------------------------------+ | locktime : long | 8 bytes | +-----------+------------+--------------------------------+ | threshold : int | 4 bytes | +-----------+------------+--------------------------------+ | addresses : [][20]byte | 4 + 20 * len(addresses) bytes | +-----------+------------+--------------------------------+ | 24 + 20 * len(addresses) bytes | +--------------------------------+ ``` ### Proto NFT Mint Output Specification ```text message NFTMintOutput { uint32 typeID = 1; // 04 bytes uint32 group_id = 2; // 04 bytes uint64 locktime = 3; // 08 bytes uint32 threshold = 4; // 04 bytes repeated bytes addresses = 5; // 04 bytes + 20 bytes * len(addresses) } ``` ### NFT Mint Output Example Let's make an NFT mint output with: - **`TypeID`**: `10` - **`GroupID`**: `12345` - **`Locktime`**: `54321` - **`Threshold`**: `1` - **`Addresses`**: - `0x51025c61fbcfc078f69334f834be6dd26d55a955` - `0xc3344128e060128ede3523a24a461c8943ab0859` ```text [ TypeID <- 0x0000000a GroupID <- 0x00003039 Locktime <- 0x000000000000d431 Threshold <- 0x00000001 Addresses <- [ 0x51025c61fbcfc078f69334f834be6dd26d55a955, 0xc3344128e060128ede3523a24a461c8943ab0859, ] ] = [ // TypeID 0x00, 0x00, 0x00, 0x0a, // groupID: 0x00, 0x00, 0x30, 0x39, // locktime: 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xd4, 0x31, // threshold: 0x00, 0x00, 0x00, 0x01, // number of addresses: 0x00, 0x00, 0x00, 0x02, // addrs[0]: 0x51, 0x02, 0x5c, 0x61, 0xfb, 0xcf, 0xc0, 0x78, 0xf6, 0x93, 0x34, 0xf8, 0x34, 0xbe, 0x6d, 0xd2, 0x6d, 0x55, 0xa9, 0x55, // addrs[1]: 0xc3, 0x34, 0x41, 0x28, 0xe0, 0x60, 0x12, 0x8e, 0xde, 0x35, 0x23, 0xa2, 0x4a, 0x46, 0x1c, 0x89, 0x43, 0xab, 0x08, 0x59, ] ``` ## NFT Transfer Output An NFT transfer output is an NFT that is owned by a 
collection of addresses. ### What NFT Transfer Output Contains An NFT transfer output contains a `TypeID`, `GroupID`, `Payload`, `Locktime`, `Threshold`, and `Addresses`. - **`TypeID`** is the ID for this output type. It is `0x0000000b`. - **`GroupID`** is an int that specifies the group this NFT was issued with. - **`Payload`** is an arbitrary string of bytes no longer than 1024 bytes. - **`Locktime`** is a long that contains the Unix timestamp that this output can be spent after. The Unix timestamp is specific to the second. - **`Threshold`** is an int that names the number of unique signatures required to spend the output. Must be less than or equal to the length of **`Addresses`**. If **`Addresses`** is empty, must be 0. - **`Addresses`** is a list of unique addresses that correspond to the private keys that can be used to spend this output. Addresses must be sorted lexicographically. ### Gantt NFT Transfer Output Specification ```text +-----------+------------+-------------------------------+ | type_id : int | 4 bytes | +-----------+------------+-------------------------------+ | group_id : int | 4 bytes | +-----------+------------+-------------------------------+ | payload : []byte | 4 + len(payload) bytes | +-----------+------------+-------------------------------+ | locktime : long | 8 bytes | +-----------+------------+-------------------------------+ | threshold : int | 4 bytes | +-----------+------------+-------------------------------+ | addresses : [][20]byte | 4 + 20 * len(addresses) bytes | +-----------+------------+-------------------------------+ | 28 + len(payload) | | + 20 * len(addresses) bytes | +-------------------------------+ ``` ### Proto NFT Transfer Output Specification ```text message NFTTransferOutput { uint32 typeID = 1; // 04 bytes uint32 group_id = 2; // 04 bytes bytes payload = 3; // 04 bytes + len(payload) uint64 locktime = 4; // 08 bytes uint32 threshold = 5; // 04 bytes repeated bytes addresses = 6; // 04 bytes + 20 bytes *
len(addresses) } ``` ### NFT Transfer Output Example Let's make an NFT transfer output with: - **`TypeID`**: `11` - **`GroupID`**: `12345` - **`Payload`**: `NFT Payload` - **`Locktime`**: `54321` - **`Threshold`**: `1` - **`Addresses`**: - `0x51025c61fbcfc078f69334f834be6dd26d55a955` - `0xc3344128e060128ede3523a24a461c8943ab0859` ```text [ TypeID <- 0x0000000b GroupID <- 0x00003039 Payload <- 0x4e4654205061796c6f6164 Locktime <- 0x000000000000d431 Threshold <- 0x00000001 Addresses <- [ 0x51025c61fbcfc078f69334f834be6dd26d55a955, 0xc3344128e060128ede3523a24a461c8943ab0859, ] ] = [ // TypeID: 0x00, 0x00, 0x00, 0x0b, // groupID: 0x00, 0x00, 0x30, 0x39, // length of payload: 0x00, 0x00, 0x00, 0x0b, // payload: 0x4e, 0x46, 0x54, 0x20, 0x50, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64, // locktime: 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xd4, 0x31, // threshold: 0x00, 0x00, 0x00, 0x01, // number of addresses: 0x00, 0x00, 0x00, 0x02, // addrs[0]: 0x51, 0x02, 0x5c, 0x61, 0xfb, 0xcf, 0xc0, 0x78, 0xf6, 0x93, 0x34, 0xf8, 0x34, 0xbe, 0x6d, 0xd2, 0x6d, 0x55, 0xa9, 0x55, // addrs[1]: 0xc3, 0x34, 0x41, 0x28, 0xe0, 0x60, 0x12, 0x8e, 0xde, 0x35, 0x23, 0xa2, 0x4a, 0x46, 0x1c, 0x89, 0x43, 0xab, 0x08, 0x59, ] ``` ## Inputs Inputs have one possible type: `SECP256K1TransferInput`. ## SECP256K1 Transfer Input A [secp256k1](/docs/rpcs/other/standards/cryptographic-primitives#secp256k1-addresses) transfer input allows for spending an unspent secp256k1 transfer output. ### What SECP256K1 Transfer Input Contains A secp256k1 transfer input contains a `TypeID`, `Amount`, and `AddressIndices`. - **`TypeID`** is the ID for this input type. It is `0x00000005`. - **`Amount`** is a long that specifies the quantity that this input should be consuming from the UTXO. Must be positive. Must be equal to the amount specified in the UTXO. - **`AddressIndices`** is a list of unique ints that define the private keys that are being used to spend the UTXO. Each UTXO has an array of addresses that can spend the UTXO.
Each int represents the index in this address array that will sign this transaction. The array must be sorted low to high. ### Gantt SECP256K1 Transfer Input Specification ```text +-------------------------+-------------------------------------+ | type_id : int | 4 bytes | +-----------------+-------+-------------------------------------+ | amount : long | 8 bytes | +-----------------+-------+-------------------------------------+ | address_indices : []int | 4 + 4 * len(address_indices) bytes | +-----------------+-------+-------------------------------------+ | 16 + 4 * len(address_indices) bytes | +-------------------------------------+ ``` ### Proto SECP256K1 Transfer Input Specification ```text message SECP256K1TransferInput { uint32 typeID = 1; // 04 bytes uint64 amount = 2; // 08 bytes repeated uint32 address_indices = 3; // 04 bytes + 04 bytes * len(address_indices) } ``` ### SECP256K1 Transfer Input Example Let's make a payment input with: - **`TypeId`**: `5` - **`Amount`**: `123456789` - **`AddressIndices`**: \[`3`,`7`\] ```text [ TypeID <- 0x00000005 Amount <- 123456789 = 0x00000000075bcd15, AddressIndices <- [0x00000003, 0x00000007] ] = [ // type id: 0x00, 0x00, 0x00, 0x05, // amount: 0x00, 0x00, 0x00, 0x00, 0x07, 0x5b, 0xcd, 0x15, // length: 0x00, 0x00, 0x00, 0x02, // sig[0] 0x00, 0x00, 0x00, 0x03, // sig[1] 0x00, 0x00, 0x00, 0x07, ] ``` ## Operations Operations have three possible types: `SECP256K1MintOperation`, `NFTMintOp`, and `NFTTransferOp`. ## SECP256K1 Mint Operation A [secp256k1](/docs/rpcs/other/standards/cryptographic-primitives#secp256k1-addresses) mint operation consumes a SECP256K1 mint output, creates a new mint output and sends a transfer output to a new set of owners. ### What SECP256K1 Mint Operation Contains A secp256k1 Mint operation contains a `TypeID`, `AddressIndices`, `MintOutput`, and `TransferOutput`. - **`TypeID`** is the ID for this output type. It is `0x00000008`. 
- **`AddressIndices`** is a list of unique ints that define the private keys that are being used to spend the [UTXO](/docs/rpcs/x-chain/txn-format#utxo). Each UTXO has an array of addresses that can spend the UTXO. Each int represents the index in this address array that will sign this transaction. The array must be sorted low to high. - **`MintOutput`** is a [SECP256K1 Mint output](/docs/rpcs/x-chain/txn-format#secp256k1-mint-output). - **`TransferOutput`** is a [SECP256K1 Transfer output](/docs/rpcs/x-chain/txn-format#secp256k1-transfer-output). ### Gantt SECP256K1 Mint Operation Specification ```text +----------------------------------+------------------------------------+ | type_id : int | 4 bytes | +----------------------------------+------------------------------------+ | address_indices : []int | 4 + 4 * len(address_indices) bytes | +----------------------------------+------------------------------------+ | mint_output : MintOutput | size(mint_output) bytes | +----------------------------------+------------------------------------+ | transfer_output : TransferOutput | size(transfer_output) bytes | +----------------------------------+------------------------------------+ | 8 + 4 * len(address_indices) | | + size(mint_output) | | + size(transfer_output) bytes | +------------------------------------+ ``` ### Proto SECP256K1 Mint Operation Specification ```text message SECP256K1MintOperation { uint32 typeID = 1; // 4 bytes repeated uint32 address_indices = 2; // 04 bytes + 04 bytes * len(address_indices) MintOutput mint_output = 3; // size(mint_output) TransferOutput transfer_output = 4; // size(transfer_output) } ``` ### SECP256K1 Mint Operation Example Let's make a [secp256k1](/docs/rpcs/other/standards/cryptographic-primitives#secp256k1-addresses) mint operation with: - **`TypeID`**: `8` - **`AddressIndices`**: - `0x00000003` - `0x00000007` - **`MintOutput`**: `"Example SECP256K1 Mint Output from above"` - **`TransferOutput`**: `"Example SECP256K1 Transfer
Output from above"` ```text [ TypeID <- 0x00000008 AddressIndices <- [0x00000003, 0x00000007] MintOutput <- 0x00000006000000000000d431000000010000000251025c61fbcfc078f69334f834be6dd26d55a955c3344128e060128ede3523a24a461c8943ab0859 TransferOutput <- 0x000000070000000000003039000000000000d431000000010000000251025c61fbcfc078f69334f834be6dd26d55a955c3344128e060128ede3523a24a461c8943ab0859 ] = [ // typeID: 0x00, 0x00, 0x00, 0x08, // number of address_indices: 0x00, 0x00, 0x00, 0x02, // address_indices[0]: 0x00, 0x00, 0x00, 0x03, // address_indices[1]: 0x00, 0x00, 0x00, 0x07, // mint output: 0x00, 0x00, 0x00, 0x06, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xd4, 0x31, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x02, 0x51, 0x02, 0x5c, 0x61, 0xfb, 0xcf, 0xc0, 0x78, 0xf6, 0x93, 0x34, 0xf8, 0x34, 0xbe, 0x6d, 0xd2, 0x6d, 0x55, 0xa9, 0x55, 0xc3, 0x34, 0x41, 0x28, 0xe0, 0x60, 0x12, 0x8e, 0xde, 0x35, 0x23, 0xa2, 0x4a, 0x46, 0x1c, 0x89, 0x43, 0xab, 0x08, 0x59, // transfer output: 0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x30, 0x39, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xd4, 0x31, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x02, 0x51, 0x02, 0x5c, 0x61, 0xfb, 0xcf, 0xc0, 0x78, 0xf6, 0x93, 0x34, 0xf8, 0x34, 0xbe, 0x6d, 0xd2, 0x6d, 0x55, 0xa9, 0x55, 0xc3, 0x34, 0x41, 0x28, 0xe0, 0x60, 0x12, 0x8e, 0xde, 0x35, 0x23, 0xa2, 0x4a, 0x46, 0x1c, 0x89, 0x43, 0xab, 0x08, 0x59, ] ``` ## NFT Mint Op An NFT mint operation consumes an NFT mint output and sends an unspent output to a new set of owners. ### What NFT Mint Op Contains An NFT mint operation contains a `TypeID`, `AddressIndices`, `GroupID`, `Payload`, and `Output` of addresses. - **`TypeID`** is the ID for this operation type. It is `0x0000000c`. - **`AddressIndices`** is a list of unique ints that define the private keys that are being used to spend the UTXO. Each UTXO has an array of addresses that can spend the UTXO. Each int represents the index in this address array that will sign this transaction.
The array must be sorted low to high. - **`GroupID`** is an int that specifies the group this NFT is issued to. - **`Payload`** is an arbitrary string of bytes no longer than 1024 bytes. - **`Output`** is not a `TransferableOutput`, but rather is a lock time, threshold, and an array of unique addresses that correspond to the private keys that can be used to spend this output. Addresses must be sorted lexicographically. ### Gantt NFT Mint Op Specification ```text +------------------------------+------------------------------------+ | type_id : int | 4 bytes | +-----------------+------------+------------------------------------+ | address_indices : []int | 4 + 4 * len(address_indices) bytes | +-----------------+------------+------------------------------------+ | group_id : int | 4 bytes | +-----------------+------------+------------------------------------+ | payload : []byte | 4 + len(payload) bytes | +-----------------+------------+------------------------------------+ | outputs : []Output | 4 + size(outputs) bytes | +-----------------+------------+------------------------------------+ | 20 + | | 4 * len(address_indices) + | | len(payload) + | | size(outputs) bytes | +------------------------------------+ ``` ### Proto NFT Mint Op Specification ```text message NFTMintOp { uint32 typeID = 1; // 04 bytes repeated uint32 address_indices = 2; // 04 bytes + 04 bytes * len(address_indices) uint32 group_id = 3; // 04 bytes bytes payload = 4; // 04 bytes + len(payload) repeated bytes outputs = 5; // 04 bytes + size(outputs) } ``` ### NFT Mint Op Example Let's make an NFT mint operation with: - **`TypeId`**: `12` - **`AddressIndices`**: - `0x00000003` - `0x00000007` - **`GroupID`**: `12345` - **`Payload`**: `0x431100` - **`Locktime`**: `54321` - **`Threshold`**: `1` - **`Addresses`**: - `0xc3344128e060128ede3523a24a461c8943ab0859` ```text [ TypeID <- 0x0000000c AddressIndices <- [ 0x00000003, 0x00000007, ] GroupID <- 0x00003039 Payload <- 0x431100 Locktime <- 
0x000000000000d431 Threshold <- 0x00000001 Addresses <- [ 0xc3344128e060128ede3523a24a461c8943ab0859 ] ] = [ // Type ID 0x00, 0x00, 0x00, 0x0c, // number of address indices: 0x00, 0x00, 0x00, 0x02, // address index 0: 0x00, 0x00, 0x00, 0x03, // address index 1: 0x00, 0x00, 0x00, 0x07, // groupID: 0x00, 0x00, 0x30, 0x39, // length of payload: 0x00, 0x00, 0x00, 0x03, // payload: 0x43, 0x11, 0x00, // number of outputs: 0x00, 0x00, 0x00, 0x01, // outputs[0] // locktime: 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xd4, 0x31, // threshold: 0x00, 0x00, 0x00, 0x01, // number of addresses: 0x00, 0x00, 0x00, 0x01, // addrs[0]: 0xc3, 0x34, 0x41, 0x28, 0xe0, 0x60, 0x12, 0x8e, 0xde, 0x35, 0x23, 0xa2, 0x4a, 0x46, 0x1c, 0x89, 0x43, 0xab, 0x08, 0x59, ] ``` ## NFT Transfer Op An NFT transfer operation sends an unspent NFT transfer output to a new set of owners. ### What NFT Transfer Op Contains An NFT transfer operation contains a `TypeID`, `AddressIndices` and an untyped `NFTTransferOutput`. - **`TypeID`** is the ID for this output type. It is `0x0000000d`. - **`AddressIndices`** is a list of unique ints that define the private keys that are being used to spend the UTXO. Each UTXO has an array of addresses that can spend the UTXO. Each int represents the index in this address array that will sign this transaction. The array must be sorted low to high. - **`NFTTransferOutput`** is the output of this operation and must be an [NFT Transfer Output](/docs/rpcs/x-chain/txn-format#nft-transfer-output). This output doesn't have the **`TypeId`**, because the type is known by the context of being in this operation. 
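The NFT mint op example above can be cross-checked with a short packing sketch. This Python snippet is an illustration, not the canonical AvalancheGo codec; the helper name `pack_nft_mint_op` is invented, and it hard-codes the single-output shape of the example:

```python
import struct

def pack_nft_mint_op(address_indices, group_id, payload, locktime, threshold, addresses):
    # TypeID 0x0000000c, then a length-prefixed list of sorted address indices,
    # the group ID, a length-prefixed payload, and one output holding the
    # locktime, threshold, and sorted 20-byte addresses. All ints are big-endian.
    b = struct.pack(">I", 0x0000000C)
    b += struct.pack(">I", len(address_indices))
    for idx in sorted(address_indices):  # indices must be sorted low to high
        b += struct.pack(">I", idx)
    b += struct.pack(">I", group_id)
    b += struct.pack(">I", len(payload)) + payload
    b += struct.pack(">I", 1)  # one output, as in the example above
    b += struct.pack(">QI", locktime, threshold)
    b += struct.pack(">I", len(addresses))
    for addr in sorted(addresses):  # addresses must be sorted lexicographically
        b += addr
    return b

op = pack_nft_mint_op(
    [3, 7], 12345, bytes.fromhex("431100"), 54321, 1,
    [bytes.fromhex("c3344128e060128ede3523a24a461c8943ab0859")],
)
```

The resulting bytes match the worked example: 4-byte type ID and index count, two indices, group ID, a 3-byte payload with its 4-byte length prefix, then the single output.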
### Gantt NFT Transfer Op Specification ```text +-----------------+------------+------------------------------------+ | type_id : int | 4 bytes | +-----------------+------------+------------------------------------+ | address_indices : []int | 4 + 4 * len(address_indices) bytes | +-----------------+------------+------------------------------------+ | group_id : int | 4 bytes | +-----------------+------------+------------------------------------+ | payload : []byte | 4 + len(payload) bytes | +-----------------+------------+------------------------------------+ | locktime : long | 8 bytes | +-----------------+------------+------------------------------------+ | threshold : int | 4 bytes | +-----------------+------------+------------------------------------+ | addresses : [][20]byte | 4 + 20 * len(addresses) bytes | +-----------------+------------+------------------------------------+ | 32 + len(payload) | | + 4 * len(address_indices) | | + 20 * len(addresses) bytes | +------------------------------------+ ``` ### Proto NFT Transfer Op Specification ```text message NFTTransferOp { uint32 typeID = 1; // 04 bytes repeated uint32 address_indices = 2; // 04 bytes + 04 bytes * len(address_indices) uint32 group_id = 3; // 04 bytes bytes payload = 4; // 04 bytes + len(payload) uint64 locktime = 5; // 08 bytes uint32 threshold = 6; // 04 bytes repeated bytes addresses = 7; // 04 bytes + 20 bytes * len(addresses) } ``` ### NFT Transfer Op Example Let's make an NFT transfer operation with: - **`TypeID`**: `13` - **`AddressIndices`**: - `0x00000007` - `0x00000003` - **`GroupID`**: `12345` - **`Payload`**: `0x431100` - **`Locktime`**: `54321` - **`Threshold`**: `1` - **`Addresses`**: - `0xc3344128e060128ede3523a24a461c8943ab0859` - `0x51025c61fbcfc078f69334f834be6dd26d55a955` ```text [ TypeID <- 0x0000000d AddressIndices <- [ 0x00000007, 0x00000003, ] GroupID <- 0x00003039 Payload <- 0x431100 Locktime <- 0x000000000000d431 Threshold <- 0x00000001 Addresses <- [
0x51025c61fbcfc078f69334f834be6dd26d55a955, 0xc3344128e060128ede3523a24a461c8943ab0859, ] ] = [ // Type ID 0x00, 0x00, 0x00, 0x0d, // number of address indices: 0x00, 0x00, 0x00, 0x02, // address index 0: 0x00, 0x00, 0x00, 0x07, // address index 1: 0x00, 0x00, 0x00, 0x03, // groupID: 0x00, 0x00, 0x30, 0x39, // length of payload: 0x00, 0x00, 0x00, 0x03, // payload: 0x43, 0x11, 0x00, // locktime: 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xd4, 0x31, // threshold: 0x00, 0x00, 0x00, 0x01, // number of addresses: 0x00, 0x00, 0x00, 0x02, // addrs[0]: 0x51, 0x02, 0x5c, 0x61, 0xfb, 0xcf, 0xc0, 0x78, 0xf6, 0x93, 0x34, 0xf8, 0x34, 0xbe, 0x6d, 0xd2, 0x6d, 0x55, 0xa9, 0x55, // addrs[1]: 0xc3, 0x34, 0x41, 0x28, 0xe0, 0x60, 0x12, 0x8e, 0xde, 0x35, 0x23, 0xa2, 0x4a, 0x46, 0x1c, 0x89, 0x43, 0xab, 0x08, 0x59, ] ``` ## Initial State Initial state describes the initial state of an asset when it is created. It contains the ID of the feature extension that the asset uses, and a variable length array of outputs that denote the genesis UTXO set of the asset. ### What Initial State Contains Initial state contains a `FxID` and an array of `Output`. - **`FxID`** is an int that defines which feature extension this state is part of. For SECP256K1 assets, this is `0x00000000`. For NFT assets, this is `0x00000001`. - **`Outputs`** is a variable length array of [outputs](/docs/rpcs/x-chain/txn-format#outputs), as defined above. 
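An initial state is simply the `FxID` followed by a length-prefixed array of serialized outputs. The sketch below illustrates that framing in Python, assuming the outputs have already been serialized as shown in the earlier examples (the helper name is invented, not an AvalancheGo API):

```python
import struct

SECP256K1_FX_ID = 0x00000000  # NFT assets use 0x00000001 instead

def pack_initial_state(fx_id, serialized_outputs):
    # fx_id (int), then a 4-byte output count, then each pre-serialized output.
    b = struct.pack(">I", fx_id) + struct.pack(">I", len(serialized_outputs))
    for out in serialized_outputs:
        b += out
    return b

# The SECP256K1 transfer output example from earlier, already serialized:
transfer_output = bytes.fromhex(
    "000000070000000000003039000000000000d4310000000100000002"
    "51025c61fbcfc078f69334f834be6dd26d55a955"
    "c3344128e060128ede3523a24a461c8943ab0859"
)
state = pack_initial_state(SECP256K1_FX_ID, [transfer_output])
```

The 8-byte prefix (fx ID plus output count) matches the `8 + size(outputs)` total in the Gantt chart.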
### Gantt Initial State Specification ```text +---------------+----------+-------------------------------+ | fx_id : int | 4 bytes | +---------------+----------+-------------------------------+ | outputs : []Output | 4 + size(outputs) bytes | +---------------+----------+-------------------------------+ | 8 + size(outputs) bytes | +-------------------------------+ ``` ### Proto Initial State Specification ```text message InitialState { uint32 fx_id = 1; // 04 bytes repeated Output outputs = 2; // 04 + size(outputs) bytes } ``` ### Initial State Example Let's make an initial state: - `FxID`: `0x00000000` - `InitialState`: `["Example SECP256K1 Transfer Output from above"]` ```text [ FxID <- 0x00000000 InitialState <- [ 0x000000070000000000003039000000000000d431000000010000000251025c61fbcfc078f69334f834be6dd26d55a955c3344128e060128ede3523a24a461c8943ab0859, ] ] = [ // fxID: 0x00, 0x00, 0x00, 0x00, // num outputs: 0x00, 0x00, 0x00, 0x01, // output: 0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x30, 0x39, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xd4, 0x31, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x02, 0x51, 0x02, 0x5c, 0x61, 0xfb, 0xcf, 0xc0, 0x78, 0xf6, 0x93, 0x34, 0xf8, 0x34, 0xbe, 0x6d, 0xd2, 0x6d, 0x55, 0xa9, 0x55, 0xc3, 0x34, 0x41, 0x28, 0xe0, 0x60, 0x12, 0x8e, 0xde, 0x35, 0x23, 0xa2, 0x4a, 0x46, 0x1c, 0x89, 0x43, 0xab, 0x08, 0x59, ] ``` ## Credentials Credentials have two possible types: `SECP256K1Credential` and `NFTCredential`. Each credential is paired with an Input or Operation. The order of the credentials matches the order of the inputs or operations. ## SECP256K1 Credential A [secp256k1](/docs/rpcs/other/standards/cryptographic-primitives#secp256k1-addresses) credential contains a list of 65-byte recoverable signatures. ### What SECP256K1 Credential Contains - **`TypeID`** is the ID for this type. It is `0x00000009`. - **`Signatures`** is an array of 65-byte recoverable signatures.
The order of the signatures must match the input's signature indices. ### Gantt SECP256K1 Credential Specification ```text +------------------------------+---------------------------------+ | type_id : int | 4 bytes | +-----------------+------------+---------------------------------+ | signatures : [][65]byte | 4 + 65 * len(signatures) bytes | +-----------------+------------+---------------------------------+ | 8 + 65 * len(signatures) bytes | +---------------------------------+ ``` ### Proto SECP256K1 Credential Specification ```text message SECP256K1Credential { uint32 typeID = 1; // 4 bytes repeated bytes signatures = 2; // 4 bytes + 65 bytes * len(signatures) } ``` ### SECP256K1 Credential Example Let's make a secp256k1 credential with: - **`TypeID`**: `9` - **`Signatures`**: - `0x000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1e1d1f202122232425262728292a2b2c2e2d2f303132333435363738393a3b3c3d3e3f00` - `0x404142434445464748494a4b4c4d4e4f505152535455565758595a5b5c5e5d5f606162636465666768696a6b6c6e6d6f707172737475767778797a7b7c7d7e7f00` ```text [ TypeID <- 0x00000009 Signatures <- [ 0x000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1e1d1f202122232425262728292a2b2c2e2d2f303132333435363738393a3b3c3d3e3f00, 0x404142434445464748494a4b4c4d4e4f505152535455565758595a5b5c5e5d5f606162636465666768696a6b6c6e6d6f707172737475767778797a7b7c7d7e7f00, ] ] = [ // Type ID 0x00, 0x00, 0x00, 0x09, // length: 0x00, 0x00, 0x00, 0x02, // sig[0] 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1e, 0x1d, 0x1f, 0x20, 0x21, 0x22, 0x23, 0x24, 0x25, 0x26, 0x27, 0x28, 0x29, 0x2a, 0x2b, 0x2c, 0x2e, 0x2d, 0x2f, 0x30, 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37, 0x38, 0x39, 0x3a, 0x3b, 0x3c, 0x3d, 0x3e, 0x3f, 0x00, // sig[1] 0x40, 0x41, 0x42, 0x43, 0x44, 0x45, 0x46, 0x47, 0x48, 0x49, 0x4a, 0x4b, 0x4c, 0x4d, 0x4e, 0x4f, 0x50, 0x51, 0x52, 0x53, 0x54, 0x55, 0x56,
0x57, 0x58, 0x59, 0x5a, 0x5b, 0x5c, 0x5e, 0x5d, 0x5f, 0x60, 0x61, 0x62, 0x63, 0x64, 0x65, 0x66, 0x67, 0x68, 0x69, 0x6a, 0x6b, 0x6c, 0x6e, 0x6d, 0x6f, 0x70, 0x71, 0x72, 0x73, 0x74, 0x75, 0x76, 0x77, 0x78, 0x79, 0x7a, 0x7b, 0x7c, 0x7d, 0x7e, 0x7f, 0x00, ] ``` ## NFT Credential An NFT credential is the same as a [secp256k1 credential](/docs/rpcs/x-chain/txn-format#secp256k1-credential) with a different TypeID. The TypeID for an NFT credential is `0x0000000e`. ## Unsigned Transactions Unsigned transactions contain the full content of a transaction with only the signatures missing. Unsigned transactions have four possible types: [`CreateAssetTx`](/docs/rpcs/x-chain/txn-format#what-unsigned-create-asset-tx-contains), [`OperationTx`](/docs/rpcs/x-chain/txn-format#what-unsigned-operation-tx-contains), [`ImportTx`](/docs/rpcs/x-chain/txn-format#what-unsigned-import-tx-contains), and [`ExportTx`](/docs/rpcs/x-chain/txn-format#what-unsigned-export-tx-contains). They all embed [`BaseTx`](/docs/rpcs/x-chain/txn-format#what-base-tx-contains), which contains common fields and operations. ## Unsigned BaseTx ### What Base TX Contains A base TX contains a `TypeID`, `NetworkID`, `BlockchainID`, `Outputs`, `Inputs`, and `Memo`. - **`TypeID`** is the ID for this type. It is `0x00000000`. - **`NetworkID`** is an int that defines which network this transaction is meant to be issued to. This value is meant to support transaction routing and is not designed for replay attack prevention. - **`BlockchainID`** is a 32-byte array that defines which blockchain this transaction was issued to. This is used for replay attack prevention for transactions that could potentially be valid across networks or blockchains. - **`Outputs`** is an array of [transferable output objects](/docs/rpcs/x-chain/txn-format#transferable-output). Outputs must be sorted lexicographically by their serialized representation.
The total quantity of the assets created in these outputs must be less than or equal to the total quantity of each asset consumed in the inputs minus the transaction fee. - **`Inputs`** is an array of [transferable input objects](/docs/rpcs/x-chain/txn-format#transferable-input). Inputs must be sorted and unique. Inputs are sorted first lexicographically by their **`TxID`** and then by the **`UTXOIndex`** from low to high. If there are inputs that have the same **`TxID`** and **`UTXOIndex`**, then the transaction is invalid as this would result in a double spend. - **`Memo`** is a field of arbitrary bytes, up to 256 bytes. ### Gantt Base TX Specification ```text +--------------------------------------+-----------------------------------------+ | type_id : int | 4 bytes | +---------------+----------------------+-----------------------------------------+ | network_id : int | 4 bytes | +---------------+----------------------+-----------------------------------------+ | blockchain_id : [32]byte | 32 bytes | +---------------+----------------------+-----------------------------------------+ | outputs : []TransferableOutput | 4 + size(outputs) bytes | +---------------+----------------------+-----------------------------------------+ | inputs : []TransferableInput | 4 + size(inputs) bytes | +---------------+----------------------+-----------------------------------------+ | memo : [256]byte | 4 + size(memo) bytes | +---------------+----------------------+-----------------------------------------+ | 52 + size(outputs) + size(inputs) + size(memo) bytes | +------------------------------------------------------+ ``` ### Proto Base TX Specification ```text message BaseTx { uint32 typeID = 1; // 04 bytes uint32 network_id = 2; // 04 bytes bytes blockchain_id = 3; // 32 bytes repeated Output outputs = 4; // 04 bytes + size(outs) repeated Input inputs = 5; // 04 bytes + size(ins) bytes memo = 6; // 04 bytes + size(memo) } ``` ### Base TX Example Let's make a base TX that
uses the inputs and outputs from the previous examples: - **`TypeID`**: `0` - **`NetworkID`**: `4` - **`BlockchainID`**: `0xffffffffeeeeeeeeddddddddcccccccbbbbbbbbaaaaaaaa9999999988888888` - **`Outputs`**: - `"Example Transferable Output as defined above"` - **`Inputs`**: - `"Example Transferable Input as defined above"` - **`Memo`**: `0x00010203` ```text [ TypeID <- 0x00000000 NetworkID <- 0x00000004 BlockchainID <- 0xffffffffeeeeeeeeddddddddcccccccbbbbbbbbaaaaaaaa9999999988888888 Outputs <- [ 0x000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f000000070000000000003039000000000000d431000000010000000251025c61fbcfc078f69334f834be6dd26d55a955c3344128e060128ede3523a24a461c8943ab0859 ] Inputs <- [ 0xf1e1d1c1b1a191817161514131211101f0e0d0c0b0a09080706050403020100000000005000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f0000000500000000075bcd15000000020000000700000003 ] Memo <- 0x00010203 ] = [ // typeID 0x00, 0x00, 0x00, 0x00, // networkID: 0x00, 0x00, 0x00, 0x04, // blockchainID: 0xff, 0xff, 0xff, 0xff, 0xee, 0xee, 0xee, 0xee, 0xdd, 0xdd, 0xdd, 0xdd, 0xcc, 0xcc, 0xcc, 0xcc, 0xbb, 0xbb, 0xbb, 0xbb, 0xaa, 0xaa, 0xaa, 0xaa, 0x99, 0x99, 0x99, 0x99, 0x88, 0x88, 0x88, 0x88, // number of outputs: 0x00, 0x00, 0x00, 0x01, // transferable output: 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f, 0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x30, 0x39, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xd4, 0x31, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x02, 0x51, 0x02, 0x5c, 0x61, 0xfb, 0xcf, 0xc0, 0x78, 0xf6, 0x93, 0x34, 0xf8, 0x34, 0xbe, 0x6d, 0xd2, 0x6d, 0x55, 0xa9, 0x55, 0xc3, 0x34, 0x41, 0x28, 0xe0, 0x60, 0x12, 0x8e, 0xde, 0x35, 0x23, 0xa2, 0x4a, 0x46, 0x1c, 0x89, 0x43, 0xab, 0x08, 0x59, // number of inputs: 0x00, 0x00, 0x00, 0x01, // transferable input: 0xf1, 0xe1, 0xd1, 0xc1, 0xb1, 0xa1, 0x91, 0x81, 
0x71, 0x61, 0x51, 0x41, 0x31, 0x21, 0x11, 0x01, 0xf0, 0xe0, 0xd0, 0xc0, 0xb0, 0xa0, 0x90, 0x80, 0x70, 0x60, 0x50, 0x40, 0x30, 0x20, 0x10, 0x00, 0x00, 0x00, 0x00, 0x05, 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f, 0x00, 0x00, 0x00, 0x05, 0x00, 0x00, 0x00, 0x00, 0x07, 0x5b, 0xcd, 0x15, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x03, // Memo length: 0x00, 0x00, 0x00, 0x04, // Memo: 0x00, 0x01, 0x02, 0x03, ] ``` ## Unsigned CreateAssetTx ### What Unsigned Create Asset TX Contains An unsigned create asset TX contains a `BaseTx`, `Name`, `Symbol`, `Denomination`, and `InitialStates`. The `TypeID` is `0x00000001`. - **`BaseTx`** - **`Name`** is a human readable string that defines the name of the asset this transaction will create. The name is not guaranteed to be unique. The name must consist of only printable ASCII characters and must be no longer than 128 characters. - **`Symbol`** is a human readable string that defines the symbol of the asset this transaction will create. The symbol is not guaranteed to be unique. The symbol must consist of only printable ASCII characters and must be no longer than 4 characters. - **`Denomination`** is a byte that defines the divisibility of the asset this transaction will create. For example, the AVAX token is divisible into billionths. Therefore, the denomination of the AVAX token is 9. The denomination must be no more than 32. - **`InitialStates`** is a variable length array that defines the feature extensions this asset supports, and the [initial state](/docs/rpcs/x-chain/txn-format#initial-state) of those feature extensions. 
### Gantt Unsigned Create Asset TX Specification ```text +----------------+----------------+--------------------------------------+ | base_tx : BaseTx | size(base_tx) bytes | +----------------+----------------+--------------------------------------+ | name : string | 2 + len(name) bytes | +----------------+----------------+--------------------------------------+ | symbol : string | 2 + len(symbol) bytes | +----------------+----------------+--------------------------------------+ | denomination : byte | 1 byte | +----------------+----------------+--------------------------------------+ | initial_states : []InitialState | 4 + size(initial_states) bytes | +----------------+----------------+--------------------------------------+ | size(base_tx) + size(initial_states) | | + 9 + len(name) + len(symbol) bytes | +--------------------------------------+ ``` ### Proto Unsigned Create Asset TX Specification ```text message CreateAssetTx { BaseTx base_tx = 1; // size(base_tx) string name = 2; // 2 bytes + len(name) string symbol = 3; // 2 bytes + len(symbol) uint8 denomination = 4; // 1 byte repeated InitialState initial_states = 5; // 4 bytes + size(initial_states) } ``` ### Unsigned Create Asset TX Example Let's make an unsigned create asset TX that uses the inputs and outputs from the previous examples: - `BaseTx`: `"Example BaseTx as defined above with ID set to 1"` - `Name`: `Volatility Index` - `Symbol`: `VIX` - `Denomination`: `2` - **`InitialStates`**: - `"Example Initial State as defined above"` ```text [ BaseTx <-
0x0000000100000004ffffffffeeeeeeeeddddddddccccccccbbbbbbbbaaaaaaaa999999998888888800000001000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f000000070000000000003039000000000000d431000000010000000251025c61fbcfc078f69334f834be6dd26d55a955c3344128e060128ede3523a24a461c8943ab085900000001f1e1d1c1b1a191817161514131211101f0e0d0c0b0a09080706050403020100000000005000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f0000000500000000075bcd150000000200000007000000030000000400010203 Name <- 0x0010566f6c6174696c69747920496e646578 Symbol <- 0x0003564958 Denomination <- 0x02 InitialStates <- [ 0x0000000000000001000000070000000000003039000000000000d431000000010000000251025c61fbcfc078f69334f834be6dd26d55a955c3344128e060128ede3523a24a461c8943ab0859, ] ] = [ // base tx: 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x04, 0xff, 0xff, 0xff, 0xff, 0xee, 0xee, 0xee, 0xee, 0xdd, 0xdd, 0xdd, 0xdd, 0xcc, 0xcc, 0xcc, 0xcc, 0xbb, 0xbb, 0xbb, 0xbb, 0xaa, 0xaa, 0xaa, 0xaa, 0x99, 0x99, 0x99, 0x99, 0x88, 0x88, 0x88, 0x88, 0x00, 0x00, 0x00, 0x01, 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f, 0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x30, 0x39, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xd4, 0x31, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x02, 0x51, 0x02, 0x5c, 0x61, 0xfb, 0xcf, 0xc0, 0x78, 0xf6, 0x93, 0x34, 0xf8, 0x34, 0xbe, 0x6d, 0xd2, 0x6d, 0x55, 0xa9, 0x55, 0xc3, 0x34, 0x41, 0x28, 0xe0, 0x60, 0x12, 0x8e, 0xde, 0x35, 0x23, 0xa2, 0x4a, 0x46, 0x1c, 0x89, 0x43, 0xab, 0x08, 0x59, 0x00, 0x00, 0x00, 0x01, 0xf1, 0xe1, 0xd1, 0xc1, 0xb1, 0xa1, 0x91, 0x81, 0x71, 0x61, 0x51, 0x41, 0x31, 0x21, 0x11, 0x01, 0xf0, 0xe0, 0xd0, 0xc0, 0xb0, 0xa0, 0x90, 0x80, 0x70, 0x60, 0x50, 0x40, 0x30, 0x20, 0x10, 0x00, 0x00, 0x00, 0x00, 0x05, 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 
0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f, 0x00, 0x00, 0x00, 0x05, 0x00, 0x00, 0x00, 0x00, 0x07, 0x5b, 0xcd, 0x15, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x04, 0x00, 0x01, 0x02, 0x03 // name: 0x00, 0x10, 0x56, 0x6f, 0x6c, 0x61, 0x74, 0x69, 0x6c, 0x69, 0x74, 0x79, 0x20, 0x49, 0x6e, 0x64, 0x65, 0x78, // symbol length: 0x00, 0x03, // symbol: 0x56, 0x49, 0x58, // denomination: 0x02, // number of InitialStates: 0x00, 0x00, 0x00, 0x01, // InitialStates[0]: 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x30, 0x39, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xd4, 0x31, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x02, 0x51, 0x02, 0x5c, 0x61, 0xfb, 0xcf, 0xc0, 0x78, 0xf6, 0x93, 0x34, 0xf8, 0x34, 0xbe, 0x6d, 0xd2, 0x6d, 0x55, 0xa9, 0x55, 0xc3, 0x34, 0x41, 0x28, 0xe0, 0x60, 0x12, 0x8e, 0xde, 0x35, 0x23, 0xa2, 0x4a, 0x46, 0x1c, 0x89, 0x43, 0xab, 0x08, 0x59, ] ``` ## Unsigned OperationTx ### What Unsigned Operation TX Contains An unsigned operation TX contains a `BaseTx`, and `Ops`. The `TypeID` for this type is `0x00000002`. - **`BaseTx`** - **`Ops`** is a variable-length array of [Transferable Ops](/docs/rpcs/x-chain/txn-format#transferable-op). 
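Every variable-length array in these formats follows the same convention: a 4-byte big-endian item count followed by the serialized items, which is where the recurring `4 + size(...)` accounting comes from. A minimal Python sketch of that convention (the helper name is illustrative, not part of any Avalanche library):

```python
import struct

def pack_array(items: list[bytes]) -> bytes:
    """Serialize a variable-length array as a 4-byte big-endian
    count followed by the concatenated item bytes."""
    out = struct.pack(">I", len(items))
    for item in items:
        out += item
    return out

# A one-element array of operations costs 4 bytes for the count
# plus the size of the serialized operation itself.
encoded = pack_array([b"\x01\x02"])
```

The same helper models `Outs`, `Ins`, `Ops`, and `InitialStates` alike; only the per-item serialization differs.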
### Gantt Unsigned Operation TX Specification ```text +---------+------------------+-------------------------------------+ | base_tx : BaseTx | size(base_tx) bytes | +---------+------------------+-------------------------------------+ | ops : []TransferableOp | 4 + size(ops) bytes | +---------+------------------+-------------------------------------+ | 4 + size(ops) + size(base_tx) bytes | +-------------------------------------+ ``` ### Proto Unsigned Operation TX Specification ```text message OperationTx { BaseTx base_tx = 1; // size(base_tx) repeated TransferOp ops = 2; // 4 bytes + size(ops) } ``` ### Unsigned Operation TX Example Let's make an unsigned operation TX that uses the inputs and outputs from the previous examples: - `BaseTx`: `"Example BaseTx above" with TypeID set to 2` - **`Ops`**: \[`"Example Transferable Op as defined above"`\] ```text [ BaseTx <- 0x0000000200000004ffffffffeeeeeeeeddddddddccccccccbbbbbbbbaaaaaaaa999999998888888800000001000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f000000070000000000003039000000000000d431000000010000000251025c61fbcfc078f69334f834be6dd26d55a955c3344128e060128ede3523a24a461c8943ab085900000001f1e1d1c1b1a191817161514131211101f0e0d0c0b0a09080706050403020100000000005000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f0000000500000000075bcd150000000200000007000000030000000400010203 Ops <- [ 0x000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f00000001f1e1d1c1b1a191817161514131211101f0e0d0c0b0a090807060504030201000000000050000000d0000000200000003000000070000303900000003431100000000010000000251025c61fbcfc078f69334f834be6dd26d55a955c3344128e060128ede3523a24a461c8943ab0859, ] ] = [ // base tx: 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x04, 0xff, 0xff, 0xff, 0xff, 0xee, 0xee, 0xee, 0xee, 0xdd, 0xdd, 0xdd, 0xdd, 0xcc, 0xcc, 0xcc, 0xcc, 0xbb, 0xbb, 0xbb, 0xbb, 0xaa, 0xaa, 0xaa, 0xaa, 0x99, 0x99, 0x99, 0x99, 0x88, 0x88, 0x88, 0x88, 0x00, 0x00, 0x00, 0x01, 0x00, 0x01, 0x02, 0x03, 0x04, 
0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f, 0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x30, 0x39, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xd4, 0x31, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x02, 0x51, 0x02, 0x5c, 0x61, 0xfb, 0xcf, 0xc0, 0x78, 0xf6, 0x93, 0x34, 0xf8, 0x34, 0xbe, 0x6d, 0xd2, 0x6d, 0x55, 0xa9, 0x55, 0xc3, 0x34, 0x41, 0x28, 0xe0, 0x60, 0x12, 0x8e, 0xde, 0x35, 0x23, 0xa2, 0x4a, 0x46, 0x1c, 0x89, 0x43, 0xab, 0x08, 0x59, 0x00, 0x00, 0x00, 0x01, 0xf1, 0xe1, 0xd1, 0xc1, 0xb1, 0xa1, 0x91, 0x81, 0x71, 0x61, 0x51, 0x41, 0x31, 0x21, 0x11, 0x01, 0xf0, 0xe0, 0xd0, 0xc0, 0xb0, 0xa0, 0x90, 0x80, 0x70, 0x60, 0x50, 0x40, 0x30, 0x20, 0x10, 0x00, 0x00, 0x00, 0x00, 0x05, 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f, 0x00, 0x00, 0x00, 0x05, 0x00, 0x00, 0x00, 0x00, 0x07, 0x5b, 0xcd, 0x15, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x04, 0x00, 0x01, 0x02, 0x03 // number of operations: 0x00, 0x00, 0x00, 0x01, // transfer operation: 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f, 0x00, 0x00, 0x00, 0x01, 0xf1, 0xe1, 0xd1, 0xc1, 0xb1, 0xa1, 0x91, 0x81, 0x71, 0x61, 0x51, 0x41, 0x31, 0x21, 0x11, 0x01, 0xf0, 0xe0, 0xd0, 0xc0, 0xb0, 0xa0, 0x90, 0x80, 0x70, 0x60, 0x50, 0x40, 0x30, 0x20, 0x10, 0x00, 0x00, 0x00, 0x00, 0x05, 0x00, 0x00, 0x00, 0x0d, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x30, 0x39, 0x00, 0x00, 0x00, 0x03, 0x43, 0x11, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x02, 0x51, 0x02, 0x5c, 0x61, 0xfb, 0xcf, 0xc0, 0x78, 0xf6, 0x93, 0x34, 0xf8, 0x34, 0xbe, 0x6d, 
0xd2, 0x6d, 0x55, 0xa9, 0x55, 0xc3, 0x34, 0x41, 0x28, 0xe0, 0x60, 0x12,
0x8e, 0xde, 0x35, 0x23, 0xa2, 0x4a, 0x46, 0x1c, 0x89, 0x43, 0xab, 0x08,
0x59,
]
```

## Unsigned ImportTx

### What Unsigned Import TX Contains

An unsigned import TX contains a `BaseTx`, `SourceChain`, and `Ins`. The `TypeID` for this type is `0x00000003`.

- **`BaseTx`**
- **`SourceChain`** is a 32-byte source blockchain ID.
- **`Ins`** is a variable length array of [Transferable Inputs](/docs/rpcs/x-chain/txn-format#transferable-input).

### Gantt Unsigned Import TX Specification

```text
+--------------+--------------+--------------------------------------+
| base_tx      : BaseTx       | size(base_tx) bytes                  |
+--------------+--------------+--------------------------------------+
| source_chain : [32]byte     | 32 bytes                             |
+--------------+--------------+--------------------------------------+
| ins          : []TransferIn | 4 + size(ins) bytes                  |
+--------------+--------------+--------------------------------------+
                              | 36 + size(ins) + size(base_tx) bytes |
                              +--------------------------------------+
```

### Proto Unsigned Import TX Specification

```text
message ImportTx {
    BaseTx base_tx = 1;          // size(base_tx)
    bytes source_chain = 2;      // 32 bytes
    repeated TransferIn ins = 3; // 4 bytes + size(ins)
}
```

### Unsigned Import TX Example

Let's make an unsigned import TX that uses the inputs from the previous examples:

- `BaseTx`: `"Example BaseTx as defined above"`, but with `TypeID` set to `3`
- `SourceChain`: `0x0000000000000000000000000000000000000000000000000000000000000000`
- `Ins`: `"Example SECP256K1 Transfer Input as defined above"`

```text
[
    BaseTx <-
0x0000000300000004ffffffffeeeeeeeeddddddddccccccccbbbbbbbbaaaaaaaa999999998888888800000001000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f000000070000000000003039000000000000d431000000010000000251025c61fbcfc078f69334f834be6dd26d55a955c3344128e060128ede3523a24a461c8943ab085900000001f1e1d1c1b1a191817161514131211101f0e0d0c0b0a09080706050403020100000000005000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f0000000500000000075bcd150000000200000007000000030000000400010203 SourceChain <- 0x0000000000000000000000000000000000000000000000000000000000000000 Ins <- [ f1e1d1c1b1a191817161514131211101f0e0d0c0b0a09080706050403020100000000005000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f0000000500000000075bcd15000000020000000300000007, ] ] = [ // base tx: 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x04, 0xff, 0xff, 0xff, 0xff, 0xee, 0xee, 0xee, 0xee, 0xdd, 0xdd, 0xdd, 0xdd, 0xcc, 0xcc, 0xcc, 0xcc, 0xbb, 0xbb, 0xbb, 0xbb, 0xaa, 0xaa, 0xaa, 0xaa, 0x99, 0x99, 0x99, 0x99, 0x88, 0x88, 0x88, 0x88, 0x00, 0x00, 0x00, 0x01, 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f, 0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x30, 0x39, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xd4, 0x31, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x02, 0x51, 0x02, 0x5c, 0x61, 0xfb, 0xcf, 0xc0, 0x78, 0xf6, 0x93, 0x34, 0xf8, 0x34, 0xbe, 0x6d, 0xd2, 0x6d, 0x55, 0xa9, 0x55, 0xc3, 0x34, 0x41, 0x28, 0xe0, 0x60, 0x12, 0x8e, 0xde, 0x35, 0x23, 0xa2, 0x4a, 0x46, 0x1c, 0x89, 0x43, 0xab, 0x08, 0x59, 0x00, 0x00, 0x00, 0x01, 0xf1, 0xe1, 0xd1, 0xc1, 0xb1, 0xa1, 0x91, 0x81, 0x71, 0x61, 0x51, 0x41, 0x31, 0x21, 0x11, 0x01, 0xf0, 0xe0, 0xd0, 0xc0, 0xb0, 0xa0, 0x90, 0x80, 0x70, 0x60, 0x50, 0x40, 0x30, 0x20, 0x10, 0x00, 0x00, 0x00, 0x00, 0x05, 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 
0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f, 0x00, 0x00, 0x00, 0x05, 0x00, 0x00, 0x00, 0x00, 0x07, 0x5b, 0xcd, 0x15, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x04, 0x00, 0x01, 0x02, 0x03 // source chain: 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, // input count: 0x00, 0x00, 0x00, 0x01, // txID: 0xf1, 0xe1, 0xd1, 0xc1, 0xb1, 0xa1, 0x91, 0x81, 0x71, 0x61, 0x51, 0x41, 0x31, 0x21, 0x11, 0x01, 0xf0, 0xe0, 0xd0, 0xc0, 0xb0, 0xa0, 0x90, 0x80, 0x70, 0x60, 0x50, 0x40, 0x30, 0x20, 0x10, 0x00, // utxoIndex: 0x00, 0x00, 0x00, 0x05, // assetID: 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f, // input: 0x00, 0x00, 0x00, 0x05, 0x00, 0x00, 0x00, 0x00, 0x07, 0x5b, 0xcd, 0x15, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x07, ] ``` ## Unsigned ExportTx ### What Unsigned Export TX Contains An unsigned export TX contains a `BaseTx`, `DestinationChain`, and `Outs`. The `TypeID` for this type is `0x00000004`. - **`DestinationChain`** is the 32 byte ID of the chain where the funds are being exported to. - **`Outs`** is a variable length array of [Transferable Outputs](/docs/rpcs/x-chain/txn-format#transferable-output). 
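Putting these fields together, an unsigned export TX body is a plain concatenation of the BaseTx bytes, the destination chain ID, and the length-prefixed outputs. A rough Python sketch of that layout (the function and argument names are illustrative only, not an Avalanche API):

```python
import struct

def pack_export_tx_body(base_tx: bytes, destination_chain: bytes,
                        outs: list[bytes]) -> bytes:
    """Concatenate an unsigned ExportTx body: the BaseTx bytes, the
    32-byte destination chain ID, then a 4-byte big-endian output
    count followed by each serialized transferable output."""
    assert len(destination_chain) == 32, "chain IDs are 32 bytes"
    body = base_tx + destination_chain + struct.pack(">I", len(outs))
    for out in outs:
        body += out
    return body

# Total size matches the Gantt accounting:
# 36 + size(outs) + size(base_tx) bytes.
body = pack_export_tx_body(b"\x00" * 10, b"\x00" * 32, [b"\xab\xcd"])
```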
### Gantt Unsigned Export TX Specification ```text +-------------------+---------------+--------------------------------------+ | base_tx : BaseTx | size(base_tx) bytes | +-------------------+---------------+--------------------------------------+ | destination_chain : [32]byte | 32 bytes | +-------------------+---------------+--------------------------------------+ | outs : []TransferOut | 4 + size(outs) bytes | +-------------------+---------------+--------------------------------------+ | 36 + size(outs) + size(base_tx) bytes | +---------------------------------------+ ``` ### Proto Unsigned Export TX Specification ```text message ExportTx { BaseTx base_tx = 1; // size(base_tx) bytes destination_chain = 2; // 32 bytes repeated TransferOut outs = 3; // 4 bytes + size(outs) } ``` ### Unsigned Export TX Example Let's make an unsigned export TX that uses the outputs from the previous examples: - `BaseTx`: `"Example BaseTx as defined above"`, but with `TypeID` set to `4` - `DestinationChain`: `0x0000000000000000000000000000000000000000000000000000000000000000` - `Outs`: `"Example SECP256K1 Transfer Output as defined above"` ```text [ BaseTx <- 0x0000000400000004ffffffffeeeeeeeeddddddddccccccccbbbbbbbbaaaaaaaa999999998888888800000001000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f000000070000000000003039000000000000d431000000010000000251025c61fbcfc078f69334f834be6dd26d55a955c3344128e060128ede3523a24a461c8943ab085900000001f1e1d1c1b1a191817161514131211101f0e0d0c0b0a09080706050403020100000000005000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f0000000500000000075bcd150000000200000007000000030000000400010203 DestinationChain <- 0x0000000000000000000000000000000000000000000000000000000000000000 Outs <- [ 000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f000000070000000000003039000000000000d431000000010000000251025c61fbcfc078f69334f834be6dd26d55a955c3344128e060128ede3523a24a461c8943ab0859, ] ] = [ // base tx: 0x00, 0x00, 0x00, 0x04 
0x00, 0x00, 0x00, 0x04, 0xff, 0xff, 0xff, 0xff, 0xee, 0xee, 0xee, 0xee, 0xdd, 0xdd, 0xdd, 0xdd, 0xcc, 0xcc, 0xcc, 0xcc, 0xbb, 0xbb, 0xbb, 0xbb, 0xaa, 0xaa, 0xaa, 0xaa, 0x99, 0x99, 0x99, 0x99, 0x88, 0x88, 0x88, 0x88, 0x00, 0x00, 0x00, 0x01, 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f, 0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x30, 0x39, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xd4, 0x31, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x02, 0x51, 0x02, 0x5c, 0x61, 0xfb, 0xcf, 0xc0, 0x78, 0xf6, 0x93, 0x34, 0xf8, 0x34, 0xbe, 0x6d, 0xd2, 0x6d, 0x55, 0xa9, 0x55, 0xc3, 0x34, 0x41, 0x28, 0xe0, 0x60, 0x12, 0x8e, 0xde, 0x35, 0x23, 0xa2, 0x4a, 0x46, 0x1c, 0x89, 0x43, 0xab, 0x08, 0x59, 0x00, 0x00, 0x00, 0x01, 0xf1, 0xe1, 0xd1, 0xc1, 0xb1, 0xa1, 0x91, 0x81, 0x71, 0x61, 0x51, 0x41, 0x31, 0x21, 0x11, 0x01, 0xf0, 0xe0, 0xd0, 0xc0, 0xb0, 0xa0, 0x90, 0x80, 0x70, 0x60, 0x50, 0x40, 0x30, 0x20, 0x10, 0x00, 0x00, 0x00, 0x00, 0x05, 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f, 0x00, 0x00, 0x00, 0x05, 0x00, 0x00, 0x00, 0x00, 0x07, 0x5b, 0xcd, 0x15, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x04, 0x00, 0x01, 0x02, 0x03 // destination_chain: 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, // outs[] count: 0x00, 0x00, 0x00, 0x01, // assetID: 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f, // output: 0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x30, 
0x39, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xd4, 0x31, 0x00, 0x00, 0x00,
0x01, 0x00, 0x00, 0x00, 0x02, 0x51, 0x02, 0x5c, 0x61, 0xfb, 0xcf, 0xc0,
0x78, 0xf6, 0x93, 0x34, 0xf8, 0x34, 0xbe, 0x6d, 0xd2, 0x6d, 0x55, 0xa9,
0x55, 0xc3, 0x34, 0x41, 0x28, 0xe0, 0x60, 0x12, 0x8e, 0xde, 0x35, 0x23,
0xa2, 0x4a, 0x46, 0x1c, 0x89, 0x43, 0xab, 0x08, 0x59,
]
```

## Signed Transaction

A signed transaction is an unsigned transaction with the addition of an array of [credentials](/docs/rpcs/x-chain/txn-format#credentials).

### What Signed Transaction Contains

A signed transaction contains a `CodecID`, `UnsignedTx`, and `Credentials`.

- **`CodecID`** The only current valid codec ID is `00 00`.
- **`UnsignedTx`** is an unsigned transaction, as described above.
- **`Credentials`** is an array of [credentials](/docs/rpcs/x-chain/txn-format#credentials). Each credential is paired with the input at the same index as that credential.

### Gantt Signed Transaction Specification

```text
+-------------+--------------+--------------------------------------------------+
| codec_id    : uint16       | 2 bytes                                          |
+-------------+--------------+--------------------------------------------------+
| unsigned_tx : UnsignedTx   | size(unsigned_tx) bytes                          |
+-------------+--------------+--------------------------------------------------+
| credentials : []Credential | 4 + size(credentials) bytes                      |
+-------------+--------------+--------------------------------------------------+
                             | 6 + size(unsigned_tx) + size(credentials) bytes  |
                             +--------------------------------------------------+
```

### Proto Signed Transaction Specification

```text
message Tx {
    uint16 codec_id = 1;                 // 2 bytes
    UnsignedTx unsigned_tx = 2;          // size(unsigned_tx)
    repeated Credential credentials = 3; // 4 bytes + size(credentials)
}
```

### Signed Transaction Example

Let's make a signed transaction that uses the unsigned transaction and credentials from the previous examples.
- **`CodecID`**: `0` - **`UnsignedTx`**: `0x0000000100000004ffffffffeeeeeeeeddddddddccccccccbbbbbbbbaaaaaaaa999999998888888800000001000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f000000070000000000003039000000000000d431000000010000000251025c61fbcfc078f69334f834be6dd26d55a955c3344128e060128ede3523a24a461c8943ab085900000001f1e1d1c1b1a191817161514131211101f0e0d0c0b0a09080706050403020100000000005000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f0000000500000000075bcd150000000200000007000000030000000400010203` - **`Credentials`** `0x0000000900000002000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1e1d1f202122232425262728292a2b2c2e2d2f303132333435363738393a3b3c3d3e3f00404142434445464748494a4b4c4d4e4f505152535455565758595a5b5c5e5d5f606162636465666768696a6b6c6e6d6f707172737475767778797a7b7c7d7e7f00` ```text [ CodecID <- 0x0000 UnsignedTx <- 0x0000000100000004ffffffffeeeeeeeeddddddddccccccccbbbbbbbbaaaaaaaa999999998888888800000001000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f000000070000000000003039000000000000d431000000010000000251025c61fbcfc078f69334f834be6dd26d55a955c3344128e060128ede3523a24a461c8943ab085900000001f1e1d1c1b1a191817161514131211101f0e0d0c0b0a09080706050403020100000000005000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f0000000500000000075bcd150000000200000007000000030000000400010203 Credentials <- [ 0x0000000900000002000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1e1d1f202122232425262728292a2b2c2e2d2f303132333435363738393a3b3c3d3e3f00404142434445464748494a4b4c4d4e4f505152535455565758595a5b5c5e5d5f606162636465666768696a6b6c6e6d6f707172737475767778797a7b7c7d7e7f00, ] ] = [ // Codec ID 0x00, 0x00, // unsigned transaction: 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x04, 0xff, 0xff, 0xff, 0xff, 0xee, 0xee, 0xee, 0xee, 0xdd, 0xdd, 0xdd, 0xdd, 0xcc, 0xcc, 0xcc, 0xcc, 0xbb, 0xbb, 0xbb, 0xbb, 0xaa, 0xaa, 0xaa, 0xaa, 0x99, 0x99, 0x99, 0x99, 0x88, 0x88, 0x88, 0x88, 0x00, 0x00, 0x00, 
0x01, 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f, 0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x30, 0x39, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xd4, 0x31, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x02, 0x51, 0x02, 0x5c, 0x61, 0xfb, 0xcf, 0xc0, 0x78, 0xf6, 0x93, 0x34, 0xf8, 0x34, 0xbe, 0x6d, 0xd2, 0x6d, 0x55, 0xa9, 0x55, 0xc3, 0x34, 0x41, 0x28, 0xe0, 0x60, 0x12, 0x8e, 0xde, 0x35, 0x23, 0xa2, 0x4a, 0x46, 0x1c, 0x89, 0x43, 0xab, 0x08, 0x59, 0x00, 0x00, 0x00, 0x01, 0xf1, 0xe1, 0xd1, 0xc1, 0xb1, 0xa1, 0x91, 0x81, 0x71, 0x61, 0x51, 0x41, 0x31, 0x21, 0x11, 0x01, 0xf0, 0xe0, 0xd0, 0xc0, 0xb0, 0xa0, 0x90, 0x80, 0x70, 0x60, 0x50, 0x40, 0x30, 0x20, 0x10, 0x00, 0x00, 0x00, 0x00, 0x05, 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f, 0x00, 0x00, 0x00, 0x05, 0x00, 0x00, 0x00, 0x00, 0x07, 0x5b, 0xcd, 0x15, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x04, 0x00, 0x01, 0x02, 0x03 // number of credentials: 0x00, 0x00, 0x00, 0x01, // credential[0]: 0x00, 0x00, 0x00, 0x09, 0x00, 0x00, 0x00, 0x02, 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1e, 0x1d, 0x1f, 0x20, 0x21, 0x22, 0x23, 0x24, 0x25, 0x26, 0x27, 0x28, 0x29, 0x2a, 0x2b, 0x2c, 0x2e, 0x2d, 0x2f, 0x30, 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37, 0x38, 0x39, 0x3a, 0x3b, 0x3c, 0x3d, 0x3e, 0x3f, 0x00, 0x40, 0x41, 0x42, 0x43, 0x44, 0x45, 0x46, 0x47, 0x48, 0x49, 0x4a, 0x4b, 0x4c, 0x4d, 0x4e, 0x4f, 0x50, 0x51, 0x52, 0x53, 0x54, 0x55, 0x56, 0x57, 0x58, 0x59, 0x5a, 0x5b, 0x5c, 0x5e, 0x5d, 0x5f, 0x60, 0x61, 0x62, 0x63, 0x64, 0x65, 0x66, 0x67, 0x68, 0x69, 0x6a, 0x6b, 
0x6c, 0x6e, 0x6d, 0x6f, 0x70, 0x71, 0x72, 0x73, 0x74, 0x75, 0x76, 0x77,
0x78, 0x79, 0x7a, 0x7b, 0x7c, 0x7d, 0x7e, 0x7f, 0x00,
]
```

## UTXO

A UTXO is a standalone representation of a transaction output.

### What UTXO Contains

A UTXO contains a `CodecID`, `TxID`, `UTXOIndex`, `AssetID`, and `Output`.

- **`CodecID`** The only valid `CodecID` is `00 00`.
- **`TxID`** is a 32-byte transaction ID. Transaction IDs are calculated by taking the SHA-256 hash of the bytes of the signed transaction.
- **`UTXOIndex`** is an int that specifies which output of the transaction identified by **`TxID`** created this UTXO.
- **`AssetID`** is a 32-byte array that defines which asset this UTXO references.
- **`Output`** is the output object that created this UTXO. The serialization of outputs was defined above. Valid output types are [SECP Mint Output](/docs/rpcs/x-chain/txn-format#secp256k1-mint-output), [SECP Transfer Output](/docs/rpcs/x-chain/txn-format#secp256k1-transfer-output), [NFT Mint Output](/docs/rpcs/x-chain/txn-format#nft-mint-output), and [NFT Transfer Output](/docs/rpcs/x-chain/txn-format#nft-transfer-output).
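The `TxID` derivation and the fixed-width UTXO fields above can be sketched in a few lines of Python using only the standard library (the helper names and sample bytes are illustrative, not a real transaction):

```python
import hashlib
import struct

def tx_id(signed_tx_bytes: bytes) -> bytes:
    """A transaction ID is the SHA-256 of the signed transaction bytes."""
    return hashlib.sha256(signed_tx_bytes).digest()

def pack_utxo(txid: bytes, utxo_index: int, asset_id: bytes,
              output: bytes, codec_id: int = 0) -> bytes:
    """Serialize a UTXO: 2-byte codec ID, 32-byte txID, 4-byte
    big-endian output index, 32-byte asset ID, then the output bytes."""
    assert len(txid) == 32 and len(asset_id) == 32
    return (struct.pack(">H", codec_id) + txid
            + struct.pack(">I", utxo_index) + asset_id + output)

# A UTXO is always 70 bytes plus the size of its output.
utxo = pack_utxo(tx_id(b"example signed tx"), 1, b"\x00" * 32, b"\x01\x02\x03")
```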
### Gantt UTXO Specification

```text
+--------------+----------+-------------------------+
| codec_id     : uint16   | 2 bytes                 |
+--------------+----------+-------------------------+
| tx_id        : [32]byte | 32 bytes                |
+--------------+----------+-------------------------+
| output_index : int      | 4 bytes                 |
+--------------+----------+-------------------------+
| asset_id     : [32]byte | 32 bytes                |
+--------------+----------+-------------------------+
| output       : Output   | size(output) bytes      |
+--------------+----------+-------------------------+
                          | 70 + size(output) bytes |
                          +-------------------------+
```

### Proto UTXO Specification

```text
message Utxo {
    uint16 codec_id = 1;     // 02 bytes
    bytes tx_id = 2;         // 32 bytes
    uint32 output_index = 3; // 04 bytes
    bytes asset_id = 4;      // 32 bytes
    Output output = 5;       // size(output)
}
```

### UTXO Examples

Let's make a UTXO with a SECP Mint Output:

- **`CodecID`**: `0`
- **`TxID`**: `0x47c92ed62d18e3cccda512f60a0d5b1e939b6ab73fb2d011e5e306e79bd0448f`
- **`UTXOIndex`**: `1` = `0x00000001`
- **`AssetID`**: `0x47c92ed62d18e3cccda512f60a0d5b1e939b6ab73fb2d011e5e306e79bd0448f`
- **`Output`**: `0x00000006000000000000000000000001000000013cb7d3842e8cee6a0ebd09f1fe884f6861e1b29c6276aa2a`

```text
[
    CodecID   <- 0x0000
    TxID      <- 0x47c92ed62d18e3cccda512f60a0d5b1e939b6ab73fb2d011e5e306e79bd0448f
    UTXOIndex <- 0x00000001
    AssetID   <- 0x47c92ed62d18e3cccda512f60a0d5b1e939b6ab73fb2d011e5e306e79bd0448f
    Output    <- 00000006000000000000000000000001000000013cb7d3842e8cee6a0ebd09f1fe884f6861e1b29c6276aa2a
]
=
[
    // codecID:
    0x00, 0x00,
    // txID:
    0x47, 0xc9, 0x2e, 0xd6, 0x2d, 0x18, 0xe3, 0xcc, 0xcd, 0xa5, 0x12, 0xf6,
    0x0a, 0x0d, 0x5b, 0x1e, 0x93, 0x9b, 0x6a, 0xb7, 0x3f, 0xb2, 0xd0, 0x11,
    0xe5, 0xe3, 0x06, 0xe7, 0x9b, 0xd0, 0x44, 0x8f,
    // utxo index:
    0x00, 0x00, 0x00, 0x01,
    // assetID:
    0x47, 0xc9, 0x2e, 0xd6, 0x2d, 0x18, 0xe3, 0xcc, 0xcd, 0xa5, 0x12, 0xf6,
    0x0a, 0x0d, 0x5b, 0x1e, 0x93, 0x9b, 0x6a, 0xb7, 0x3f, 0xb2, 0xd0, 0x11,
    0xe5, 0xe3, 0x06, 0xe7, 0x9b, 0xd0, 0x44, 0x8f,
    //
secp mint output: 0x00, 0x00, 0x00, 0x06, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x01, 0x3c, 0xb7, 0xd3, 0x84, 0x2e, 0x8c, 0xee, 0x6a, 0x0e, 0xbd, 0x09, 0xf1, 0xfe, 0x88, 0x4f, 0x68, 0x61, 0xe1, 0xb2, 0x9c, 0x62, 0x76, 0xaa, 0x2a, ] ``` Let's make a UTXO with a SECP Transfer Output from the signed transaction created above: - **`CodecID`**: `0` - **`TxID`**: `0xf966750f438867c3c9828ddcdbe660e21ccdbb36a9276958f011ba472f75d4e7` - **`UTXOIndex`**: `0` = `0x00000000` - **`AssetID`**: `0x000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f` - **`Output`**: `"Example SECP256K1 Transferable Output as defined above"` ```text [ CodecID <- 0x0000 TxID <- 0xf966750f438867c3c9828ddcdbe660e21ccdbb36a9276958f011ba472f75d4e7 UTXOIndex <- 0x00000000 AssetID <- 0x000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f Output <- 0x000000070000000000003039000000000000d431000000010000000251025c61fbcfc078f69334f834be6dd26d55a955c3344128e060128ede3523a24a461c8943ab0859 ] = [ // codecID: 0x00, 0x00, // txID: 0xf9, 0x66, 0x75, 0x0f, 0x43, 0x88, 0x67, 0xc3, 0xc9, 0x82, 0x8d, 0xdc, 0xdb, 0xe6, 0x60, 0xe2, 0x1c, 0xcd, 0xbb, 0x36, 0xa9, 0x27, 0x69, 0x58, 0xf0, 0x11, 0xba, 0x47, 0x2f, 0x75, 0xd4, 0xe7, // utxo index: 0x00, 0x00, 0x00, 0x00, // assetID: 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f, // secp transfer output: 0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x30, 0x39, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xd4, 0x31, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x02, 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f, 0x20, 0x21, 0x22, 0x23, 0x24, 0x25, 0x26, 0x27, ] ``` Let's make a UTXO with an NFT Mint Output: - 
**`CodecID`**: `0` - **`TxID`**: `0x03c686efe8d80c519f356929f6da945f7ff90378f0044bb0e1a5d6c1ad06bae7` - **`UTXOIndex`**: `1` = `0x00000001` - **`AssetID`**: `0x03c686efe8d80c519f356929f6da945f7ff90378f0044bb0e1a5d6c1ad06bae7` - **`Output`**: `0x0000000a00000000000000000000000000000001000000013cb7d3842e8cee6a0ebd09f1fe884f6861e1b29c6276aa2a` ```text [ CodecID <- 0x0000 TxID <- 0x03c686efe8d80c519f356929f6da945f7ff90378f0044bb0e1a5d6c1ad06bae7 UTXOIndex <- 0x00000001 AssetID <- 0x03c686efe8d80c519f356929f6da945f7ff90378f0044bb0e1a5d6c1ad06bae7 Output <- 0000000a00000000000000000000000000000001000000013cb7d3842e8cee6a0ebd09f1fe884f6861e1b29c6276aa2a ] = [ // codecID: 0x00, 0x00, // txID: 0x03, 0xc6, 0x86, 0xef, 0xe8, 0xd8, 0x0c, 0x51, 0x9f, 0x35, 0x69, 0x29, 0xf6, 0xda, 0x94, 0x5f, 0x7f, 0xf9, 0x03, 0x78, 0xf0, 0x04, 0x4b, 0xb0, 0xe1, 0xa5, 0xd6, 0xc1, 0xad, 0x06, 0xba, 0xe7, // utxo index: 0x00, 0x00, 0x00, 0x01, // assetID: 0x03, 0xc6, 0x86, 0xef, 0xe8, 0xd8, 0x0c, 0x51, 0x9f, 0x35, 0x69, 0x29, 0xf6, 0xda, 0x94, 0x5f, 0x7f, 0xf9, 0x03, 0x78, 0xf0, 0x04, 0x4b, 0xb0, 0xe1, 0xa5, 0xd6, 0xc1, 0xad, 0x06, 0xba, 0xe7, // nft mint output: 0x00, 0x00, 0x00, 0x0a, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x01, 0x3c, 0xb7, 0xd3, 0x84, 0x2e, 0x8c, 0xee, 0x6a, 0x0e, 0xbd, 0x09, 0xf1, 0xfe, 0x88, 0x4f, 0x68, 0x61, 0xe1, 0xb2, 0x9c, 0x62, 0x76, 0xaa, 0x2a, ] ``` Let's make a UTXO with an NFT Transfer Output: - **`CodecID`**: `0` - **`TxID`**: `0xa68f794a7de7bdfc5db7ba5b73654304731dd586bbf4a6d7b05be6e49de2f936` - **`UTXOIndex`**: `1` = `0x00000001` - **`AssetID`**: `0x03c686efe8d80c519f356929f6da945f7ff90378f0044bb0e1a5d6c1ad06bae7` - **`Output`**: `0x0000000b000000000000000b4e4654205061796c6f6164000000000000000000000001000000013cb7d3842e8cee6a0ebd09f1fe884f6861e1b29c6276aa2a` ```text [ CodecID <- 0x0000 TxID <- 0xa68f794a7de7bdfc5db7ba5b73654304731dd586bbf4a6d7b05be6e49de2f936 UTXOIndex <- 0x00000001 AssetID
<- 0x03c686efe8d80c519f356929f6da945f7ff90378f0044bb0e1a5d6c1ad06bae7 Output <- 0000000b000000000000000b4e4654205061796c6f6164000000000000000000000001000000013cb7d3842e8cee6a0ebd09f1fe884f6861e1b29c6276aa2a ] = [ // codecID: 0x00, 0x00, // txID: 0xa6, 0x8f, 0x79, 0x4a, 0x7d, 0xe7, 0xbd, 0xfc, 0x5d, 0xb7, 0xba, 0x5b, 0x73, 0x65, 0x43, 0x04, 0x73, 0x1d, 0xd5, 0x86, 0xbb, 0xf4, 0xa6, 0xd7, 0xb0, 0x5b, 0xe6, 0xe4, 0x9d, 0xe2, 0xf9, 0x36, // utxo index: 0x00, 0x00, 0x00, 0x01, // assetID: 0x03, 0xc6, 0x86, 0xef, 0xe8, 0xd8, 0x0c, 0x51, 0x9f, 0x35, 0x69, 0x29, 0xf6, 0xda, 0x94, 0x5f, 0x7f, 0xf9, 0x03, 0x78, 0xf0, 0x04, 0x4b, 0xb0, 0xe1, 0xa5, 0xd6, 0xc1, 0xad, 0x06, 0xba, 0xe7, // nft transfer output: 0x00, 0x00, 0x00, 0x0b, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0b, 0x4e, 0x46, 0x54, 0x20, 0x50, 0x61, 0x79, 0x6c, 0x6f, 0x61, 0x64, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x01, 0x3c, 0xb7, 0xd3, 0x84, 0x2e, 0x8c, 0xee, 0x6a, 0x0e, 0xbd, 0x09, 0xf1, 0xfe, 0x88, 0x4f, 0x68, 0x61, 0xe1, 0xb2, 0x9c, 0x62, 0x76, 0xaa, 0x2a, ] ``` ## GenesisAsset An asset to be issued in an instance of the AVM's Genesis ### What GenesisAsset Contains An instance of a GenesisAsset contains an `Alias`, `NetworkID`, `BlockchainID`, `Outputs`, `Inputs`, `Memo`, `Name`, `Symbol`, `Denomination`, and `InitialStates`. - **`Alias`** is the alias for this asset. - **`NetworkID`** defines which network this transaction is meant to be issued to. This value is meant to support transaction routing and is not designed for replay attack prevention. - **`BlockchainID`** is the ID (32-byte array) that defines which blockchain this transaction was issued to. This is used for replay attack prevention for transactions that could potentially be valid across network or blockchain. - **`Outputs`** is an array of [transferable output objects](/docs/rpcs/x-chain/txn-format#transferable-output). 
Outputs must be sorted lexicographically by their serialized representation. The total quantity of the assets created in these outputs must be less than or equal to the total quantity of each asset consumed in the inputs minus the transaction fee. - **`Inputs`** is an array of [transferable input objects](/docs/rpcs/x-chain/txn-format#transferable-input). Inputs must be sorted and unique. Inputs are sorted first lexicographically by their **`TxID`** and then by the **`UTXOIndex`** from low to high. If there are inputs that have the same **`TxID`** and **`UTXOIndex`**, then the transaction is invalid as this would result in a double spend. - **`Memo`** is a memo field that contains arbitrary bytes, up to 256 bytes. - **`Name`** is a human readable string that defines the name of the asset this transaction will create. The name is not guaranteed to be unique. The name must consist of only printable ASCII characters and must be no longer than 128 characters. - **`Symbol`** is a human readable string that defines the symbol of the asset this transaction will create. The symbol is not guaranteed to be unique. The symbol must consist of only printable ASCII characters and must be no longer than 4 characters. - **`Denomination`** is a byte that defines the divisibility of the asset this transaction will create. For example, the AVAX token is divisible into billionths. Therefore, the denomination of the AVAX token is 9. The denomination must be no more than 32. - **`InitialStates`** is a variable length array that defines the feature extensions this asset supports, and the [initial state](/docs/rpcs/x-chain/txn-format#initial-state) of those feature extensions. 
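The `Name`, `Symbol`, and `Denomination` constraints described above are easy to check programmatically. A hypothetical validator sketch in Python (the function name is illustrative, not part of any Avalanche library):

```python
def validate_asset_fields(name: str, symbol: str, denomination: int) -> None:
    """Check the asset constraints described above: a printable-ASCII
    name of at most 128 characters, a printable-ASCII symbol of at
    most 4 characters, and a denomination of at most 32."""
    if len(name) > 128 or not all(32 <= ord(c) <= 126 for c in name):
        raise ValueError("name must be printable ASCII, at most 128 chars")
    if len(symbol) > 4 or not all(32 <= ord(c) <= 126 for c in symbol):
        raise ValueError("symbol must be printable ASCII, at most 4 chars")
    if denomination > 32:
        raise ValueError("denomination must be at most 32")

# AVAX is divisible into billionths, so its denomination is 9.
validate_asset_fields("Avalanche", "AVAX", 9)
```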
### Gantt GenesisAsset Specification

```text
+----------------+----------------------+--------------------------------+
| alias          : string               | 2 + len(alias) bytes           |
+----------------+----------------------+--------------------------------+
| network_id     : int                  | 4 bytes                        |
+----------------+----------------------+--------------------------------+
| blockchain_id  : [32]byte             | 32 bytes                       |
+----------------+----------------------+--------------------------------+
| outputs        : []TransferableOutput | 4 + size(outputs) bytes        |
+----------------+----------------------+--------------------------------+
| inputs         : []TransferableInput  | 4 + size(inputs) bytes         |
+----------------+----------------------+--------------------------------+
| memo           : [256]byte            | 4 + size(memo) bytes           |
+----------------+----------------------+--------------------------------+
| name           : string               | 2 + len(name) bytes            |
+----------------+----------------------+--------------------------------+
| symbol         : string               | 2 + len(symbol) bytes          |
+----------------+----------------------+--------------------------------+
| denomination   : byte                 | 1 bytes                        |
+----------------+----------------------+--------------------------------+
| initial_states : []InitialState       | 4 + size(initial_states) bytes |
+----------------+----------------------+--------------------------------+
| 59 + len(alias) + size(outputs) + size(inputs) + size(memo)            |
| + len(name) + len(symbol) + size(initial_states) bytes                 |
+------------------------------------------------------------------------+
```

### Proto GenesisAsset Specification

```text
message GenesisAsset {
    string alias = 1;            // 2 bytes + len(alias)
    uint32 network_id = 2;       // 04 bytes
    bytes blockchain_id = 3;     // 32 bytes
    repeated Output outputs = 4; // 04 bytes + size(outputs)
    repeated Input inputs = 5;   // 04 bytes + size(inputs)
    bytes memo = 6;              // 04 bytes + size(memo)
    string name = 7;             // 2 bytes + len(name)
    string symbol = 8;           // 2 bytes + len(symbol)
    uint8 denomination = 9;      // 1 bytes
    repeated
    InitialState initial_states = 10;  // 4 bytes + size(initial_states)
}
````

### GenesisAsset Example

Let's make a GenesisAsset:

- **`Alias`**: `asset1`
- **`NetworkID`**: `12345`
- **`BlockchainID`**: `0x0000000000000000000000000000000000000000000000000000000000000000`
- **`Outputs`**: \[\]
- **`Inputs`**: \[\]
- **`Memo`**: `2Zc54v4ek37TEwu4LiV3j41PUMRd6acDDU3ZCVSxE7X`
- **`Name`**: `myFixedCapAsset`
- **`Symbol`**: `MFCA`
- **`Denomination`**: `7`
- **`InitialStates`**:
  - `"Example Initial State as defined above"`

```text
[
    Alias         <- 0x617373657431
    NetworkID     <- 0x00003039
    BlockchainID  <- 0x0000000000000000000000000000000000000000000000000000000000000000
    Outputs       <- []
    Inputs        <- []
    Memo          <- 0x66726f6d20736e6f77666c616b6520746f206176616c616e636865
    Name          <- 0x6d7946697865644361704173736574
    Symbol        <- 0x4d464341
    Denomination  <- 0x07
    InitialStates <- [
        0x0000000000000001000000070000000000003039000000000000d431000000010000000251025c61fbcfc078f69334f834be6dd26d55a955c3344128e060128ede3523a24a461c8943ab0859
    ]
]
=
[
    // asset alias len:
    0x00, 0x06,
    // asset alias:
    0x61, 0x73, 0x73, 0x65, 0x74, 0x31,
    // network_id:
    0x00, 0x00, 0x30, 0x39,
    // blockchain_id:
    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
    // output_len:
    0x00, 0x00, 0x00, 0x00,
    // input_len:
    0x00, 0x00, 0x00, 0x00,
    // memo_len:
    0x00, 0x00, 0x00, 0x1b,
    // memo:
    0x66, 0x72, 0x6f, 0x6d, 0x20, 0x73, 0x6e, 0x6f,
    0x77, 0x66, 0x6c, 0x61, 0x6b, 0x65, 0x20, 0x74,
    0x6f, 0x20, 0x61, 0x76, 0x61, 0x6c, 0x61, 0x6e,
    0x63, 0x68, 0x65,
    // asset_name_len:
    0x00, 0x0f,
    // asset_name:
    0x6d, 0x79, 0x46, 0x69, 0x78, 0x65, 0x64, 0x43,
    0x61, 0x70, 0x41, 0x73, 0x73, 0x65, 0x74,
    // symbol_len:
    0x00, 0x04,
    // symbol:
    0x4d, 0x46, 0x43, 0x41,
    // denomination:
    0x07,
    // number of InitialStates:
    0x00, 0x00, 0x00, 0x01,
    //
    InitialStates[0]:
    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01,
    0x00, 0x00, 0x00, 0x07, 0x00, 0x00, 0x00, 0x00,
    0x00, 0x00, 0x30, 0x39, 0x00, 0x00, 0x00, 0x00,
    0x00, 0x00, 0xd4, 0x31, 0x00, 0x00, 0x00, 0x01,
    0x00, 0x00, 0x00, 0x02, 0x51, 0x02, 0x5c, 0x61,
    0xfb, 0xcf, 0xc0, 0x78, 0xf6, 0x93, 0x34, 0xf8,
    0x34, 0xbe, 0x6d, 0xd2, 0x6d, 0x55, 0xa9, 0x55,
    0xc3, 0x34, 0x41, 0x28, 0xe0, 0x60, 0x12, 0x8e,
    0xde, 0x35, 0x23, 0xa2, 0x4a, 0x46, 0x1c, 0x89,
    0x43, 0xab, 0x08, 0x59,
]
```

# Overview (/docs/tooling/avalanche-sdk)

---
title: Overview
description: Build applications and interact with Avalanche networks programmatically
icon: Rocket
---

The **Avalanche SDK for TypeScript** is a modular suite of tools designed for building powerful applications on the Avalanche ecosystem. Whether you're building DeFi applications, NFT platforms, or cross-chain bridges, our SDKs provide everything you need.

### Core Capabilities

* **Direct Chain Access** - RPC calls, wallet integration, and transaction management.
* **Indexed Data & Metrics** - Access the Glacier Data API & Metrics API with type safety.
* **Interchain Messaging** - Build cross-L1 applications with ICM/Teleporter.

**Developer Preview**: This suite of SDKs is currently in beta and is subject to change. Use in production at your own risk. We'd love to hear about your experience! **Please share your feedback here.**

Check out the code, contribute, or report issues. The Avalanche SDK TypeScript is fully open source.

## Which SDK Should I Use?
Choose the right SDK based on your specific needs: | SDK Package | Description | | :--------------------------------------------------------------------------- | :--------------------------------------------------------------- | | [`@avalanche-sdk/client`](/avalanche-sdk/client-sdk/getting-started) | Direct blockchain interaction - transactions, wallets, RPC calls | | [`@avalanche-sdk/chainkit`](/avalanche-sdk/chainkit-sdk/getting-started) | Complete suite: Data, Metrics and Webhooks API | | [`@avalanche-sdk/interchain`](/avalanche-sdk/interchain-sdk/getting-started) | Send messages between Avalanche L1s using ICM/Teleporter | ## Quick Start ```bash theme={null} npm install @avalanche-sdk/client ``` ```bash theme={null} yarn add @avalanche-sdk/client ``` ```bash theme={null} pnpm add @avalanche-sdk/client ``` ### Basic Example ```typescript theme={null} import { createClient } from '@avalanche-sdk/client'; // Initialize the client const client = createClient({ network: 'mainnet' }); // Get balance const balance = await client.getBalance({ address: '0x...', chainId: 43114 }); console.log('Balance:', balance); ``` ## Available SDKs ### Client SDK The main Avalanche client SDK for interacting with Avalanche nodes and building blockchain applications. **Key Features:** * **Complete API coverage** for P-Chain, X-Chain, and C-Chain. * **Full viem compatibility** - anything you can do with viem works here. * **TypeScript-first design** with full type safety. * **Smart contract interactions** with first-class APIs. * **Wallet integration** and transaction management. * **Cross-chain transfers** between X, P and C chains. **Common Use Cases:** * Retrieve balances and UTXOs for addresses * Build, sign, and issue transactions to any chain * Add validators and delegators * Create subnets and blockchains. * Convert subnets to L1s. 
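As a small companion to the Quick Start snippet above, the sketch below shows how a raw C-Chain balance (denominated in wei, 18 decimals) can be rendered as AVAX. `formatAvax` is an illustrative helper, not an SDK function; the commented lines mirror the `createClient`/`getBalance` calls from the Basic Example.

```typescript
// The SDK calls from the Basic Example, for context (commented out here):
// import { createClient } from '@avalanche-sdk/client';
// const client = createClient({ network: 'mainnet' });
// const wei = await client.getBalance({ address: '0x...', chainId: 43114 });

// On the C-Chain the native AVAX balance is denominated in wei (18 decimals).
// `formatAvax` is a hypothetical helper for display purposes only.
function formatAvax(wei: bigint, decimals = 4): string {
  const whole = wei / 10n ** 18n;
  const frac = (wei % 10n ** 18n).toString().padStart(18, "0").slice(0, decimals);
  return `${whole}.${frac}`;
}

const pretty = formatAvax(1500000000000000000n); // → "1.5000"
```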
Learn how to integrate blockchain functionality into your application ### ChainKit SDK Combined SDK with full typed coverage of Avalanche Data (Glacier) and Metrics APIs. **Key Features:** * **Full endpoint coverage** for Glacier Data API and Metrics API * **Strongly-typed models** with automatic TypeScript inference * **Built-in pagination** helpers and automatic retries/backoff * **High-level helpers** for transactions, blocks, addresses, tokens, NFTs * **Metrics insights** including network health, validator stats, throughput * **Webhook support** with payload shapes and signature verification **API Endpoints:** * Glacier API: [https://glacier-api.avax.network/api](https://glacier-api.avax.network/api) * Metrics API: [https://metrics.avax.network/api](https://metrics.avax.network/api) Access comprehensive blockchain data and analytics ### Interchain SDK SDK for building cross-L1 applications and bridges. **Key Features:** * **Type-safe ICM client** for sending cross-chain messages * **Seamless wallet integration** with existing wallet clients * **Built-in support** for Avalanche C-Chain and custom subnets * **Message tracking** and delivery confirmation * **Gas estimation** for cross-chain operations **Use Cases:** * Cross-chain token bridges * Multi-L1 governance systems * Interchain data oracles * Cross-subnet liquidity pools Build powerful cross-chain applications ## Support ### Community & Help * Discord - Get real-time help in the #avalanche-sdk channel * Telegram - Join discussions * Twitter - Stay updated ### Feedback Sessions * Book a Google Meet Feedback Session - Schedule a 1-on-1 session to share your feedback and suggestions ### Issue Tracking * Report a Bug * Request a Feature * View All Issues ### Direct Support * Technical Issues: GitHub Issues * Security Issues: [security@avalabs.org](mailto:security@avalabs.org) * General Inquiries: [data-platform@avalabs.org](mailto:data-platform@avalabs.org) # L1 Add-Ons 
(/docs/tooling/avalanche-deploy/add-ons) --- title: L1 Add-Ons description: Deploy Blockscout block explorer, faucets, The Graph, ICM Relayer, eRPC load balancer, and Safe multisig for your Avalanche L1. --- After deploying your L1, you can add optional services to enhance the developer and operator experience. Each add-on is deployed via a dedicated Ansible playbook and runs as a Docker Compose stack on your monitoring or RPC nodes. All add-on commands assume you have sourced your L1 environment: `source l1.env` ## eRPC Load Balancer eRPC is deployed **automatically** during `make configure-l1`. It provides a single RPC endpoint that load balances across your archive and pruned RPC nodes with intelligent routing. ### Features - **Intelligent routing** — `debug_*` and `trace_*` methods route to archive nodes only - **Load balancing** across all RPC nodes - **Automatic failover** with circuit breaker - **Response caching** - **Prometheus metrics** ### Endpoints | Endpoint | URL | |----------|-----| | RPC | `http://:4000` | | Health | `http://:4001/healthcheck` | ### Usage Point your dApps and tools at the eRPC endpoint instead of individual nodes: ```bash # Through eRPC (recommended) curl -X POST http://:4000 \ -H "Content-Type: application/json" \ -d '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' ``` To skip eRPC during L1 configuration, add `SKIP_ERPC=true`: ```bash make configure-l1 SUBNET_ID=$SUBNET_ID CHAIN_ID=$CHAIN_ID SKIP_ERPC=true ``` To redeploy eRPC standalone: ```bash source l1.env make erpc CHAIN_ID=$CHAIN_ID EVM_CHAIN_ID=99999 ``` ## Blockscout Block Explorer Deploy a full-featured block explorer for your L1: ```bash source l1.env make deploy-blockscout CHAIN_ID=$CHAIN_ID EVM_CHAIN_ID=99999 CHAIN_NAME="My L1" ``` **Access**: `http://:4001` Blockscout is deployed to the first archive RPC host (falls back to the first generic RPC host on GCP/Azure). It includes the backend indexer, frontend UI, stats service, and nginx reverse proxy. 
Initial indexing can take hours for chains with significant history. Monitor progress with `docker logs -f blockscout-backend` on the RPC node. ## Faucet Deploy a token faucet for developers to request test tokens: ```bash source l1.env make faucet CHAIN_ID=$CHAIN_ID EVM_CHAIN_ID=99999 FAUCET_KEY=0x... ``` **Access**: `http://:8010` | Parameter | Description | |-----------|-------------| | `CHAIN_ID` | Blockchain ID from `l1.env` | | `EVM_CHAIN_ID` | EVM chain ID from genesis | | `FAUCET_KEY` | Hex private key of a funded wallet on your L1 | The faucet wallet must be funded on your L1 chain. Use a dedicated wallet — not your deployer key. ## The Graph Node Deploy The Graph for indexing blockchain data via GraphQL subgraphs: ```bash source l1.env make graph-node CHAIN_ID=$CHAIN_ID NETWORK_NAME=my-l1 ``` ### Endpoints | Endpoint | URL | |----------|-----| | GraphQL | `http://:8000/subgraphs/name/` | | Admin | `http://:8020` | | IPFS | `http://:5001` | ### Deploying a Subgraph After The Graph Node is running, deploy a subgraph: ```bash # 1. Initialize your subgraph project graph init --product hosted-service my-subgraph # 2. Update subgraph.yaml with your L1 network # network: my-l1 # source.address: "" # source.startBlock: 0 # 3. Generate types and build graph codegen && graph build # 4. Create and deploy graph create --node http://:8020 my-subgraph graph deploy --node http://:8020 \ --ipfs http://:5001 \ my-subgraph ``` ## ICM Relayer (Cross-Chain Messaging) Deploy the Interchain Messaging Relayer for cross-chain communication between your L1 and C-Chain: ```bash source l1.env make icm-relayer SUBNET_ID=$SUBNET_ID CHAIN_ID=$CHAIN_ID RELAYER_KEY=0x... 
``` ### Endpoints | Endpoint | URL | |----------|-----| | API | `http://:8080` | | Health | `http://:8080/health` | | Metrics | `http://:9090/metrics` | ### How It Works The ICM Relayer listens for Avalanche Warp Messages on source blockchains, aggregates BLS signatures from validators, and delivers cross-chain messages to destination blockchains. By default, it relays **bidirectionally** between your L1 and C-Chain. ### Configuration | Parameter | Default | Description | |-----------|---------|-------------| | `SUBNET_ID` | (required) | Subnet ID from `l1.env` | | `CHAIN_ID` | (required) | Blockchain ID from `l1.env` | | `RELAYER_KEY` | (required) | Hex private key for relay transactions | | `NETWORK` | fuji | Network name (`fuji` or `mainnet`) | The relayer key wallet must be funded on **both** chains — AVAX on C-Chain for gas, and your L1's native token on the L1 chain. Use a dedicated relay wallet. ### Kubernetes Deployment ```bash make k8s-icm-relayer SUBNET_ID=$SUBNET_ID CHAIN_ID=$CHAIN_ID RELAYER_KEY=0x... ``` ## Safe Multisig Deploy Gnosis Safe infrastructure for multisig governance of your L1: ```bash make safe ``` This deploys the Safe UI, transaction service, client gateway, and nginx reverse proxy. It auto-detects chain configuration from `l1.env`. Safe requires the Singleton Factory (`0x914d7Fec6aaC8cd542e72Bca78B30650d45643d7`) in your genesis `alloc`. The default genesis template includes this. For detailed Safe setup including contract deployment and chain registration, see the `SAFE.md` guide in the repository. 
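Safe's genesis requirement above (the Singleton Factory present in `alloc`) can be checked programmatically before deployment. This is a sketch, assuming a Subnet-EVM-style genesis object; `hasSingletonFactory` is an illustrative helper, not part of the repository's tooling.

```typescript
// Sketch: verify a genesis object pre-deploys the Safe Singleton Factory.
// Assumes a Subnet-EVM-style `alloc` map keyed by (optionally 0x-prefixed)
// address; helper name and types are illustrative.
const SINGLETON_FACTORY = "0x914d7Fec6aaC8cd542e72Bca78B30650d45643d7";

type Genesis = { alloc: Record<string, { code?: string; balance?: string }> };

function hasSingletonFactory(genesis: Genesis): boolean {
  const want = SINGLETON_FACTORY.toLowerCase().replace(/^0x/, "");
  return Object.entries(genesis.alloc).some(([addr, account]) => {
    const key = addr.toLowerCase().replace(/^0x/, "");
    return key === want && !!account.code; // must carry deployed bytecode
  });
}

// A genesis that includes the factory address with bytecode passes the check.
const ok = hasSingletonFactory({
  alloc: { "914d7fec6aac8cd542e72bca78b30650d45643d7": { code: "0x60..." } },
});
```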
## Add-On Summary | Add-On | Playbook | Port | Deployed To | |--------|----------|------|-------------| | eRPC | `08-deploy-erpc.yml` | 4000, 4001 | Monitoring host | | Blockscout | `04-deploy-blockscout.yml` | 4001 | Archive RPC | | Faucet | `06-deploy-faucet.yml` | 8010 | RPC node | | The Graph | `07-deploy-graph-node.yml` | 8000, 8020, 5001 | RPC node | | ICM Relayer | `16-deploy-icm-relayer.yml` | 8080, 9090 | RPC node | | Safe | `05-deploy-safe.yml` | 8100 | RPC node | | Monitoring | `03-setup-monitoring.yml` | 3000, 9090 | Monitoring host | # Deploy an L1 on Kubernetes (/docs/tooling/avalanche-deploy/deploy-l1-kubernetes) --- title: Deploy an L1 on Kubernetes description: Launch an Avalanche L1 blockchain on Kubernetes using Helm charts, with local kind clusters for development and testing. --- This guide covers deploying an Avalanche L1 on Kubernetes as an alternative to the Terraform + Ansible path. Use this when you already have a Kubernetes cluster or want local development with kind. **Requirements**: `kubectl`, `helm` v3+, Docker (for kind). **Time to deploy**: ~15 minutes locally, ~30 minutes on a remote cluster (plus sync time). The Kubernetes path covers deployment, sync, and L1 creation. Advanced operational workflows (staking key backup, database snapshots, validator migration) are only available in the [Terraform + Ansible path](/docs/tooling/avalanche-deploy/deploy-l1). 
## Prerequisites - `kubectl` connected to your cluster - `helm` v3+ - For local testing: `kind` and Docker - For L1 creation: funded key in platform-cli keystore ## Helm Charts | Chart | Path | Purpose | |-------|------|---------| | `avalanche-validator` | `helm/avalanche-validator` | L1 validator nodes | | `avalanche-rpc` | `helm/avalanche-rpc` | L1 RPC nodes | | `monitoring` | `helm/monitoring` | Prometheus + Grafana | | `icm-relayer` | `helm/icm-relayer` | Cross-chain messaging | ## Quick Start with Local Kind Cluster ### Create a Local Cluster ```bash cd kubernetes ./scripts/create-kind-cluster.sh \ --name=avalanche-l1 \ --image=kindest/node:v1.34.0 \ --workers=1 ``` The first run pulls the node image and can take several minutes. If your machine is resource-constrained, start with `--workers=1` and scale up later. ### Deploy L1 Validators and RPC ```bash helm upgrade --install l1-validators ./helm/avalanche-validator \ -f ./helm/avalanche-validator/values-kind.yaml \ --set network=fuji helm upgrade --install l1-rpc ./helm/avalanche-rpc \ -f ./helm/avalanche-rpc/values-kind.yaml \ --set network=fuji ``` ### Wait for P-Chain Sync ```bash ./scripts/wait-for-sync.sh --release=l1-validators ``` ### Create Your L1 ```bash # Import or create a deployer key platform keys import --name l1-deployer platform keys default --name l1-deployer ./scripts/create-l1.sh \ --release=l1-validators \ --network=fuji \ --chain-name=mychain \ --output=l1.env \ --key-name=l1-deployer ``` The script collects NodeIDs from validator pods and runs the same P-Chain transactions as the Terraform path: `CreateSubnetTx`, `CreateChainTx`, and `ConvertSubnetToL1Tx`. ### Configure Validators for Your L1 ```bash ./scripts/configure-l1.sh --release=l1-validators --env=l1.env ``` ### Verify Status ```bash ./scripts/status.sh --release=l1-validators ``` ## Deploying on an Existing Cluster Skip the kind cluster creation step and use the same Helm releases and scripts above. 
Ensure your cluster has sufficient resources for the validator and RPC pods. ## Accessing RPC ```bash # L1 RPC service kubectl port-forward svc/l1-rpc 9650:9650 # Then query curl -X POST http://localhost:9650/ext/bc//rpc \ -H "Content-Type: application/json" \ -d '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' ``` ## Add-Ons on Kubernetes ### Monitoring ```bash helm install monitoring ./helm/monitoring # Access Grafana kubectl port-forward svc/monitoring-grafana 3000:3000 # http://localhost:3000 (admin/admin) ``` ### ICM Relayer ```bash source l1.env helm upgrade --install icm-relayer ./helm/icm-relayer \ --set "l1.subnetId=$SUBNET_ID" \ --set "l1.blockchainId=$CHAIN_ID" \ --set "relayerPrivateKey=0x..." \ --set "network=fuji" ``` The relayer connects to the `l1-rpc` service by default. Override with `--set avalanchego.serviceName=`. ## Make Wrappers From the repo root, you can also use `make` targets: | Command | Description | |---------|-------------| | `make k8s-kind` | Create local kind cluster | | `make k8s-l1-deploy` | Deploy L1 validators + RPC | | `make k8s-l1-wait` | Wait for P-Chain sync | | `make k8s-l1-create` | Create L1 chain | | `make k8s-l1-configure` | Configure validators for L1 | | `make k8s-l1-status` | Check L1 status | | `make k8s-monitoring` | Deploy monitoring stack | | `make k8s-icm-relayer` | Deploy ICM Relayer | | `make k8s-cleanup` | Remove releases and optional PVC/kind cleanup | ## Troubleshooting ### Pods Stuck in Pending ```bash kubectl describe pod ``` If you see `Insufficient cpu` or `does not have a host assigned`, use the kind-specific value files: ```bash helm upgrade --install l1-validators ./helm/avalanche-validator \ -f ./helm/avalanche-validator/values-kind.yaml --set network=fuji ``` ### Node Not Syncing ```bash kubectl logs -f ``` ### Kind Fails with "No Such Container" This usually means the Docker daemon API is unhealthy. Restart Docker Desktop and retry `./scripts/create-kind-cluster.sh`. 
## Cleanup ```bash cd kubernetes ./scripts/cleanup.sh ``` This removes Helm releases and optionally deletes PVCs and the kind cluster. ## Next Steps - [Deploy with Terraform + Ansible instead](/docs/tooling/avalanche-deploy/deploy-l1) — Full-featured deployment with archive/pruned RPC split and staking key backup - [Deploy add-ons](/docs/tooling/avalanche-deploy/add-ons) — Blockscout, faucet, The Graph, ICM Relayer - [Operations guide](/docs/tooling/avalanche-deploy/operations) — Upgrades, monitoring, health checks - [Troubleshooting](/docs/tooling/avalanche-deploy/troubleshooting) — Common issues and solutions # Deploy an L1 with Terraform and Ansible (/docs/tooling/avalanche-deploy/deploy-l1) --- title: Deploy an L1 with Terraform and Ansible description: Launch a production-ready Avalanche L1 with validators, RPC nodes, and monitoring on AWS, GCP, or Azure using Terraform and Ansible. --- This guide walks through deploying a complete Avalanche L1 blockchain on cloud VMs. By the end, you will have a running L1 with validators, archive and pruned RPC nodes, monitoring, and an eRPC load balancer. **Supported clouds**: AWS (full feature set), GCP, Azure. **Time to deploy**: ~30 minutes (plus sync time). **Cost**: ~$651/month on AWS. 
## Architecture

*(Diagram: validators V1/V2 and the archive and pruned RPC nodes connect to the Primary Network over P2P port 9651. Users reach the chain through eRPC, which routes `debug`/`trace` calls to the archive RPC and `eth`/`net`/`web3` calls to the pruned RPC. Every node exports metrics to Prometheus, which feeds Grafana dashboards on port 3000.)*

## Infrastructure Sizing (AWS)

| Component | Instance | Disk | Purpose |
|-----------|----------|------|---------|
| Validators (default: 3, production: 5) | c6a.xlarge | 500GB EBS gp3 | Block production, consensus |
| Archive RPC | c6a.xlarge | 1TB EBS gp3 | Full history, debug/trace APIs, block explorer |
| Pruned RPC | c6a.large | 500GB EBS gp3 | State-sync, transaction workloads |
| Monitoring | t3.small | 50GB EBS gp3 | Prometheus, Grafana, eRPC |

### RPC Node Types

| Type | APIs | Pruning | State-Sync | Use Case |
|------|------|---------|------------|----------|
| Archive | Full (incl. `debug_*`, `trace_*`) | Disabled | Disabled | Block explorer, debugging, historical queries |
| Pruned | Standard (`eth`, `net`, `web3`) | Enabled | Enabled | Transaction submission, latest state queries |

GCP and Azure use a single generic `rpc` pool instead of the archive/pruned split. The archive/pruned separation is AWS-only.

## Step-by-Step Deployment

### Configure Cloud Credentials and SSH

```bash
# AWS
export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"

# Generate an SSH key for node access
ssh-keygen -t rsa -b 4096 -f ~/.ssh/avalanche-deploy -N ""
```

For GCP, authenticate with `gcloud auth application-default login`. For Azure, use `az login`.
### Configure Terraform Variables ```bash cd terraform/aws # or terraform/gcp, terraform/azure cp terraform.tfvars.example terraform.tfvars ``` Edit `terraform.tfvars`: ```hcl name_prefix = "my-l1" environment = "fuji" # or "mainnet" validator_count = 5 rpc_archive_count = 1 rpc_pruned_count = 1 ssh_public_key = "ssh-rsa AAAA..." ssh_private_key_file = "~/.ssh/avalanche-deploy" enable_staking_key_backup = true ``` ### Provision Cloud Infrastructure ```bash make infra # Runs terraform init && terraform apply ``` This creates all VMs, networking (VPC, subnets, security groups), and storage (S3 bucket for staking keys if enabled). Terraform auto-generates the Ansible inventory at `ansible/inventory/aws_hosts`. ### Deploy AvalancheGo ```bash make deploy make status # Wait for "P:OK" on all nodes ``` This runs playbook `01-deploy-nodes.yml`, which: - Installs AvalancheGo on all nodes - Starts syncing with the Primary Network using `partial-sync-primary-network: true` (syncs only P-Chain headers — much faster than a full sync) - Collects NodeIDs and saves them to `ansible/node_ids.txt` - Backs up initial staking keys to S3 (if configured) ### Configure Your Genesis Before creating the L1, prepare your genesis file at `configs/l1/genesis/genesis.json`. Use the [Genesis Builder](https://build.avax.network/tools/l1-toolbox/create-chain) to generate this visually, or edit the included template. 
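For orientation, a minimal genesis sketch might look like the following. All values are placeholders, not the repository's template: the chain ID reuses the `99999` example from elsewhere in this guide, the fee fields follow typical Subnet-EVM genesis files, and the funded address is hypothetical.

```json
{
  "config": {
    "chainId": 99999,
    "feeConfig": {
      "gasLimit": 12000000,
      "targetBlockRate": 2,
      "minBaseFee": 25000000000,
      "targetGas": 60000000,
      "baseFeeChangeDenominator": 36,
      "minBlockGasCost": 0,
      "maxBlockGasCost": 1000000,
      "blockGasCostStep": 200000
    },
    "warpConfig": {
      "blockTimestamp": 0,
      "quorumNumerator": 67
    }
  },
  "alloc": {
    "0xYourFundedAddress": { "balance": "0x52B7D2DCC80CD2E4000000" }
  },
  "gasLimit": "0xB71B00",
  "difficulty": "0x0",
  "timestamp": "0x0"
}
```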
Key settings: - **`chainId`** — Unique EVM chain ID ([check availability](https://chainlist.org/)) - **`feeConfig`** — Gas limits and base fees - **`warpConfig`** — Cross-chain messaging with `quorumNumerator: 67` - **`alloc`** — Pre-funded addresses and pre-deployed contracts ### Create Your L1 ```bash # Import or create a deployer key platform keys import --name l1-deployer platform keys default --name l1-deployer # Build and run the create-l1 tool make create-l1 ./tools/create-l1/create-l1 \ --network=fuji \ --key-name=l1-deployer \ --validators=$(cd terraform/aws && terraform output -json validator_ips | jq -r 'join(",")') \ --chain-name=mychain \ --output=l1.env ``` The `create-l1` tool executes three P-Chain transactions: 1. **`CreateSubnetTx`** — Creates a new Subnet (returns `SUBNET_ID`) 2. **`CreateChainTx`** — Creates the EVM chain with your genesis (returns `CHAIN_ID`) 3. **`ConvertSubnetToL1Tx`** — Converts the Subnet to an L1, registering all validators with their BLS keys The output `l1.env` file contains `SUBNET_ID`, `CHAIN_ID`, `CONVERSION_TX`, and `EVM_CHAIN_ID`. Your deployer key must be funded on the P-Chain. On Fuji, get test AVAX from the [Builder Hub Faucet](https://build.avax.network/tools/faucet) and cross-chain transfer to P-Chain via Core Wallet. ### Configure Nodes for Your L1 ```bash source l1.env make configure-l1 SUBNET_ID=$SUBNET_ID CHAIN_ID=$CHAIN_ID make status ``` This runs playbook `02-configure-l1.yml`, which: - Adds `track-subnets: ` to each node's config - Copies the appropriate chain config (archive, pruned, or validator) - Restarts AvalancheGo on all nodes - Automatically deploys **eRPC** as a load balancer (auto-detects EVM chain ID from genesis) Your L1 is now running. 
Access it at: | Endpoint | URL | Notes | |----------|-----|-------| | eRPC (recommended) | `http://:4000` | Load balanced, cached, automatic failover | | Direct Archive RPC | `http://:9650/ext/bc//rpc` | Full debug/trace APIs | | Direct Pruned RPC | `http://:9650/ext/bc//rpc` | Standard APIs only | | Grafana | `http://:3000` | Default credentials: admin/admin | ## Optional: Initialize ValidatorManager If your genesis includes a pre-deployed ValidatorManager proxy contract, initialize it to enable on-chain validator management: ```bash # Install Foundry (if not already installed) curl -L https://foundry.paradigm.xyz | bash && foundryup # Set ICM contracts path export ICM_CONTRACTS_PATH=~/code/icm-contracts # Initialize source l1.env make initialize-validator-manager \ SUBNET_ID=$SUBNET_ID \ CHAIN_ID=$CHAIN_ID \ CONVERSION_TX=$CONVERSION_TX \ PROXY_ADDRESS=0xfacade01... \ EVM_CHAIN_ID=$EVM_CHAIN_ID ``` ## Cost Estimate (AWS us-east-1) | Component | Count | Monthly Estimate | |-----------|-------|-----------------| | Validators (c6a.xlarge) | 5 | ~$450 | | Archive RPC (c6a.xlarge) | 1 | ~$120 | | Pruned RPC (c6a.large) | 1 | ~$65 | | Monitoring (t3.small) | 1 | ~$15 | | S3 + KMS | — | ~$1 | | **Total** | | **~$651/mo** | ## Next Steps - [Deploy on Kubernetes instead](/docs/tooling/avalanche-deploy/deploy-l1-kubernetes) — Container-native alternative using Helm charts - [Deploy add-ons](/docs/tooling/avalanche-deploy/add-ons) — Blockscout, faucet, The Graph, ICM Relayer, Safe multisig - [Operations guide](/docs/tooling/avalanche-deploy/operations) — Upgrades, monitoring, health checks, backups - [Troubleshooting](/docs/tooling/avalanche-deploy/troubleshooting) — Common issues and solutions # Deploy Primary Network on Kubernetes (/docs/tooling/avalanche-deploy/deploy-primary-network-kubernetes) --- title: Deploy Primary Network on Kubernetes description: Run Avalanche Primary Network validators and RPC nodes on Kubernetes using Helm charts with Prometheus and Grafana 
monitoring. --- This guide covers deploying Primary Network validators and RPC nodes on Kubernetes. Use this when you already have a Kubernetes cluster and want a container-native deployment. **Requirements**: `kubectl`, `helm` v3+, cluster with 500GB+ storage per validator. **Bootstrap time**: 2–4 hours via state-sync. Advanced operational workflows (staking key backup to S3, database snapshots, zero-downtime validator migration) are only available in the [Terraform + Ansible path](/docs/tooling/avalanche-deploy/deploy-primary-network). The Kubernetes path covers deployment and sync monitoring. ## Prerequisites - `kubectl` connected to your cluster - `helm` v3+ - Sufficient cluster resources (Primary Network nodes require significant storage for full P/X/C chain data) ## Helm Charts | Chart | Path | Purpose | |-------|------|---------| | `primary-network-validator` | `helm/primary-network-validator` | Primary Network validators | | `primary-network-rpc` | `helm/primary-network-rpc` | Primary Network RPC nodes | | `monitoring` | `helm/monitoring` | Prometheus + Grafana | ## Quick Start ### Deploy Primary Network Validators ```bash cd kubernetes helm install primary-validators ./helm/primary-network-validator \ --set primary_validator_replicas=2 \ --set network=fuji ``` ### Deploy Primary Network RPC Nodes ```bash helm install primary-rpc ./helm/primary-network-rpc \ --set primary_rpc_replicas=2 \ --set network=fuji ``` ### Wait for Sync ```bash ./scripts/wait-for-sync.sh --release=primary-validators ``` All three chains (P, X, C) must complete bootstrapping. This typically takes 2-4 hours via state-sync. ### Verify Status ```bash ./scripts/status.sh --release=primary-validators ``` ### Register Your Validator After sync completes, register your validator on the P-Chain using [Core Wallet](https://core.app/) or the Avalanche CLI. You need the NodeID displayed by the status script. 
Staking requirements: - **Fuji testnet**: 1 AVAX minimum - **Mainnet**: 2,000 AVAX minimum ## Accessing RPC ```bash # Primary Network RPC kubectl port-forward svc/primary-rpc 9650:9650 # Query C-Chain curl -X POST http://localhost:9650/ext/bc/C/rpc \ -H "Content-Type: application/json" \ -d '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' ``` ## Monitoring ```bash helm install monitoring ./helm/monitoring kubectl port-forward svc/monitoring-grafana 3000:3000 # http://localhost:3000 (admin/admin) ``` ## Make Wrappers From the repo root: | Command | Description | |---------|-------------| | `make k8s-primary-deploy` | Deploy Primary Network validators + RPC | | `make k8s-primary-wait` | Wait for chain sync | | `make k8s-primary-status` | Check Primary Network status | | `make k8s-monitoring` | Deploy monitoring stack | | `make k8s-cleanup` | Remove releases and optional PVC cleanup | ## Troubleshooting ### Insufficient Storage Primary Network nodes require significant disk space for full chain data. Ensure your PersistentVolumeClaims have adequate storage provisioned. For mainnet, plan for at least 500GB per validator. ### Pods Stuck in Pending ```bash kubectl describe pod ``` Check for resource constraints (`Insufficient cpu`, `Insufficient memory`) and adjust replica counts or node pool sizes. ### Node Not Syncing ```bash kubectl logs -f ``` Verify the pod can reach the Avalanche P2P network on port 9651. Check that your cluster's network policies and load balancer allow inbound/outbound traffic on this port. 
## Cleanup ```bash cd kubernetes ./scripts/cleanup.sh ``` ## Next Steps - [Deploy with Terraform + Ansible instead](/docs/tooling/avalanche-deploy/deploy-primary-network) — Full-featured deployment with staking key backup, snapshots, and zero-downtime migration - [Operations guide](/docs/tooling/avalanche-deploy/operations) — Upgrades, monitoring, health checks - [Troubleshooting](/docs/tooling/avalanche-deploy/troubleshooting) — Common issues and solutions # Deploy Primary Network with Terraform and Ansible (/docs/tooling/avalanche-deploy/deploy-primary-network) --- title: Deploy Primary Network with Terraform and Ansible description: Operate production Avalanche Primary Network validators on AWS with staking key management, database snapshots, and zero-downtime migration. --- This guide covers deploying and operating Avalanche Primary Network validators — the nodes that validate the P-Chain, X-Chain, and C-Chain. These validators participate in Avalanche consensus and earn staking rewards. **AWS only**. **Staking minimum**: 2,000 AVAX (mainnet), 1 AVAX (Fuji). **Bootstrap time**: 2–4 hours via state-sync. **Cost**: ~$326/month per validator. Primary Network workflows are currently supported on **AWS only**. The instances require high-performance NVMe storage for the full chain database. 
## Architecture

*(Diagram: primary validators PV1 and PV2 peer with the Primary Network over P2P port 9651. The operator reaches PV1 over SSH/API and views Grafana dashboards on port 3000. Both validators back up staking keys to S3, PV1 additionally produces database snapshots, and both export metrics to Prometheus, which feeds Grafana.)*

## Key Differences from L1 Deployment

| Aspect | L1 Deployment | Primary Network |
|--------|---------------|-----------------|
| Chain sync | Partial P-Chain headers only | Full P/X/C chain data |
| Instance type | c6a.xlarge (general compute) | i4i.xlarge (NVMe-optimized) |
| Storage | EBS gp3 volumes | 937GB local NVMe |
| Bootstrap time | Minutes (partial sync) | 2–4 hours (state-sync) |
| Cloud support | AWS, GCP, Azure | AWS only |
| Staking key backup | Optional | Strongly recommended |

## Quick Start

### Provision Infrastructure

```bash
make primary-infra CLOUD=aws
```

This uses a **separate Terraform state** from the L1 deployment (`terraform/aws-primary-network/`), creating i4i.xlarge instances with 937GB NVMe drives. Edit `terraform/aws-primary-network/terraform.tfvars` before running:

```hcl
primary_validator_count   = 2
enable_staking_key_backup = true
ssh_public_key            = "ssh-rsa AAAA..."
ssh_private_key_file      = "~/.ssh/avalanche-deploy"
```

### Deploy Validators

```bash
make primary-deploy CLOUD=aws NETWORK=fuji  # or mainnet
```

This runs playbook `10-deploy-primary-network.yml`, which:

1. Installs AvalancheGo with Primary Network configuration
2. Enables state-sync for fast initial bootstrap
3. Waits for P/X/C chain bootstrap to complete (polls for up to 90 minutes)
4. Backs up staking keys to S3 with KMS encryption
5. Creates an initial database snapshot and uploads it to S3

### Monitor Sync Progress

```bash
make primary-status CLOUD=aws
```

Bootstrap typically takes 2–4 hours via state-sync. All three chains (P, X, C) must report `isBootstrapped: true` before proceeding.
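The same `isBootstrapped` check can be issued directly against a node's info API (port 9650, `info.isBootstrapped` method) if you want to script it yourself. A sketch; the host and the decision to check all three chains in one pass are illustrative:

```typescript
// Sketch: build the JSON-RPC payload for AvalancheGo's info.isBootstrapped
// call, one request per chain. Host below is a placeholder.
function isBootstrappedRequest(chain: "P" | "X" | "C") {
  return {
    jsonrpc: "2.0",
    id: 1,
    method: "info.isBootstrapped",
    params: { chain },
  };
}

const requests = (["P", "X", "C"] as const).map(isBootstrappedRequest);

// Against a live node:
// const res = await fetch("http://<validator-ip>:9650/ext/info", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(isBootstrappedRequest("P")),
// });
// const { result } = await res.json(); // { isBootstrapped: boolean }
```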
### Register Your Validator on the P-Chain

Validator registration requires staking AVAX on the P-Chain:

- **Fuji testnet**: 1 AVAX minimum
- **Mainnet**: 2,000 AVAX minimum

Register using [Core Wallet](https://core.app/) or the Avalanche CLI. You will need your validator's NodeID, which is displayed by `make primary-status`.

### Back Up Staking Keys

```bash
make backup-keys CLOUD=aws
```

Staking keys are uploaded to S3 with KMS encryption. The validator instances have an IAM role that grants access to the backup bucket — no manual credential configuration required.

## Staking Key Management

Staking keys are the cryptographic identity of your validator. Losing them means losing your NodeID and any associated staking position.

```bash
# Back up all validator keys to S3
make backup-keys CLOUD=aws

# Restore keys to a specific node
make restore-keys CLOUD=aws SOURCE=primary-validator-1 TARGET_IP=10.0.1.50

# List existing backups
aws s3 ls s3://$(terraform -chdir=terraform/aws-primary-network output -raw staking_keys_bucket)/
```

Always back up staking keys immediately after deployment and after any key rotation. Keys are encrypted with KMS, so even if the S3 bucket is compromised, they cannot be read without access to the KMS key.

## Database Snapshots

Create lz4-compressed snapshots of synced nodes for faster bootstrapping of new nodes. A pruned mainnet snapshot is approximately 400GB and restores in minutes, compared to hours for state-sync.

```bash
# Create a snapshot from a synced validator
make create-snapshot CLOUD=aws NODE=primary-validator-1

# Create with a custom name
make create-snapshot CLOUD=aws NODE=primary-validator-1 NAME=mainnet-2025-02

# List available snapshots
make list-snapshots CLOUD=aws

# Restore a snapshot to a node
make restore-snapshot CLOUD=aws TARGET=migration-target
make restore-snapshot CLOUD=aws TARGET=migration-target SNAPSHOT=mainnet-2025-02
```

Snapshots are stored in S3 with KMS encryption and SHA256 checksums for integrity verification.
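The checksum mechanics can be reproduced locally. A minimal sketch of how a SHA256 sidecar file verifies a snapshot archive (the file names here are illustrative, not the playbook's actual layout):

```shell
# Create a sample "snapshot" and its SHA256 sidecar, then verify it,
# mirroring the integrity check performed on restore.
tmpdir="$(mktemp -d)"
echo "snapshot bytes" > "$tmpdir/db.tar.lz4"
( cd "$tmpdir" && sha256sum db.tar.lz4 > db.tar.lz4.sha256 )

# Verification fails loudly if even one byte of the archive changed.
( cd "$tmpdir" && sha256sum -c db.tar.lz4.sha256 ) && echo "checksum OK"
```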
## Validator Migration

Migrate a validator to a new instance with approximately 30 seconds of downtime. This is useful for hardware upgrades, instance type changes, or region moves.

```mermaid
sequenceDiagram
    Old->>Network: Validating (active)
    Note over New: Phase 1: Sync new node
    New->>Network: State-sync or restore from snapshot
    Note over Old,S3: Phase 2: Back up keys
    Old->>S3: Upload staking keys (KMS encrypted)
    Note over New,S3: Phase 3: Prepare migration
    New->>S3: Download staking keys
    New-->>New: Stop AvalancheGo
    Note over Old,New: Phase 4: Execute (about 30s downtime)
    Old-->>Old: Stop AvalancheGo
    New-->>New: Start with staking keys
    New->>Network: Validating (same NodeID)
```

### Migration Steps

```bash
# 1. Prepare the new node (choose one):

# Option A: From snapshot (faster — minutes)
make prepare-migration CLOUD=aws NODE=migration-target SNAPSHOT=true

# Option B: From state-sync (slower — hours)
make prepare-migration CLOUD=aws NODE=migration-target

# 2. Wait for the new node to fully sync
./scripts/check-primary-sync.sh

# 3. Execute migration (~30s downtime)
make migrate-validator CLOUD=aws SOURCE=primary-validator-1 TARGET=migration-target
```

## Cost Estimate (AWS us-east-1)

| Component | Instance | Storage | Monthly Estimate |
|-----------|----------|---------|------------------|
| Primary Validator | i4i.xlarge | 937GB NVMe (included) | ~$310 |
| S3 + KMS (keys + snapshots) | — | ~1GB | ~$1 |
| Monitoring | t3.small | 50GB EBS | ~$15 |
| **Per validator total** | | | **~$326/mo** |

## Terraform Configuration Reference

Edit `terraform/aws-primary-network/terraform.tfvars`:

| Variable | Default | Description |
|----------|---------|-------------|
| `primary_validator_count` | 1 | Number of Primary Network validators |
| `enable_staking_key_backup` | true | Enable S3 backup with KMS encryption |
| `ssh_public_key` | — | SSH public key for node access |
| `ssh_private_key_file` | — | Path to SSH private key |

Node runtime configuration is at `configs/primary-network/node/primary-validator-node-config.json`, which includes `state-sync-enabled: true` and `state-sync-min-blocks: 100000` for faster initial bootstrap.

## Next Steps

- [Operations guide](/docs/tooling/avalanche-deploy/operations) — Rolling upgrades, monitoring, health checks
- [Troubleshooting](/docs/tooling/avalanche-deploy/troubleshooting) — Common issues and solutions

# Avalanche Deploy (/docs/tooling/avalanche-deploy)

---
title: Avalanche Deploy
description: Deploy production-ready Avalanche infrastructure on AWS, GCP, Azure, or Kubernetes using Infrastructure as Code with Terraform, Ansible, and Helm.
index: true
---

[`avalanche-deploy`](https://github.com/ava-labs/avalanche-deploy) is an Infrastructure as Code toolkit that automates provisioning, deployment, and operations for Avalanche nodes. It supports both **L1 blockchains** and **Primary Network validators** across AWS, GCP, Azure, and Kubernetes.

## L1 Deployment

Launch your own EVM blockchain with validators, RPC nodes, monitoring, and optional add-ons.
## Primary Network Validators

Operate production validators for the P-Chain, X-Chain, and C-Chain. Earn staking rewards.

## Which Method Should I Use?

| | Terraform + Ansible | Kubernetes |
|---|---|---|
| **Best for** | Production with full ops tooling | Existing clusters or local dev |
| **Cloud support** | AWS, GCP, Azure | Any cluster |
| **Staking key backup** | S3 + KMS | Manual |
| **DB snapshots and migration** | Built-in | Manual |
| **Local development** | No | Yes (kind) |
| **Add-ons** (Blockscout, faucet, etc.) | Yes | Monitoring only |

**Not sure?** Start with **Terraform + Ansible on AWS** for the most complete experience. Use **Kubernetes** if you already have a cluster or want to test locally with kind.

## Operations and Troubleshooting

# Operations and Maintenance (/docs/tooling/avalanche-deploy/operations)

---
title: Operations and Maintenance
description: Day-2 operations for Avalanche infrastructure — AvalancheGo upgrades, monitoring, health checks, staking key backup, database snapshots, and rolling restarts.
---

This guide covers ongoing operations for infrastructure deployed with `avalanche-deploy`. All commands use `make` targets that wrap Ansible playbooks.
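The wrapping is thin: each target shells out to `ansible-playbook` against the generated inventory. A hypothetical sketch of the pattern (the target body and playbook path are illustrative assumptions, not the repository's exact Makefile):

```makefile
# Illustrative only: how a make target typically wraps a playbook.
# The playbook file name here is an assumption for illustration.
health-checks:
	cd ansible && ansible-playbook -i inventory/aws_hosts playbooks/health-checks.yml
```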
## Health Checks

Run comprehensive health checks across all nodes:

```bash
# Basic health checks
make health-checks

# Include L1 chain status
make health-checks CHAIN_ID=$CHAIN_ID
```

Health checks verify:

- AvalancheGo service status (systemd)
- NodeID and version
- P-Chain, X-Chain, and C-Chain bootstrap status
- L1 block number (if `CHAIN_ID` is provided)

## Monitoring

### Deploy Prometheus and Grafana

```bash
make monitoring
```

**Access Grafana**: `http://<monitoring-ip>:3000` (default credentials: `admin`/`admin`)

### Pre-Built Dashboards

| Dashboard | Metrics |
|-----------|---------|
| Avalanche L1 | Block height, transaction throughput, validator status |
| L1 EVM | Gas usage, contract calls, pending transactions |
| P-Chain | Staking metrics, validator set changes |
| System Health | CPU, memory, disk, network for all nodes |

Prometheus is pre-configured to scrape both AvalancheGo metrics and node_exporter system metrics from all hosts.

## Viewing Logs

```bash
# View logs from all nodes
make logs
```

Or SSH directly to inspect a specific node:

```bash
ssh -i ~/.ssh/avalanche-deploy ubuntu@<node-ip> \
  "sudo journalctl -u avalanchego -f --no-pager -n 100"
```

## Rolling Restart

Restart all nodes one at a time with health checks between each restart. This ensures zero downtime:

```bash
make rolling-restart
```

The playbook:

1. Stops AvalancheGo on one node
2. Starts AvalancheGo
3. Waits for the node to report healthy
4. Moves to the next node

## Upgrading AvalancheGo

Perform a zero-downtime rolling upgrade to a new AvalancheGo version:

```bash
make upgrade VERSION=1.14.1
```

Subnet-EVM is bundled with AvalancheGo v1.12.0+ and updates automatically with each AvalancheGo upgrade. No separate plugin management is needed.

The upgrade playbook follows the same rolling pattern as restarts: one node at a time with health checks between each upgrade.
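The rolling pattern is straightforward to reproduce for ad-hoc maintenance. A minimal sketch, assuming SSH access to each node and the health endpoint on `localhost:9650` (the `HOSTS` variable and probe commands are illustrative, not the playbook's actual names):

```shell
# Retry a probe command until it succeeds or retries are exhausted.
wait_healthy() {  # usage: wait_healthy <retries> <probe-cmd...>
  retries=$1; shift
  i=0
  while [ "$i" -lt "$retries" ]; do
    if "$@"; then return 0; fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# Rolling restart: one node at a time, gated on the health probe.
# for host in $HOSTS; do
#   ssh "ubuntu@$host" 'sudo systemctl restart avalanchego'
#   wait_healthy 60 ssh "ubuntu@$host" 'curl -sf localhost:9650/ext/health' \
#     || { echo "$host failed to recover"; exit 1; }
# done
```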
## Staking Key Backup and Restore

### Backup

```bash
# Back up all validator keys to S3 (KMS encrypted)
make backup-keys CLOUD=aws
```

Keys are encrypted with AWS KMS. Validator instances access the S3 bucket via IAM role — no credentials stored on disk.

### Restore

```bash
# Restore keys from one node to another
make restore-keys CLOUD=aws SOURCE=primary-validator-1 TARGET_IP=10.0.1.50
```

### List Backups

```bash
aws s3 ls s3://$(terraform -chdir=terraform/aws-primary-network output -raw staking_keys_bucket)/
```

## Database Snapshots

Create lz4-compressed snapshots of node databases for fast bootstrapping:

```bash
# Create a snapshot
make create-snapshot CLOUD=aws NODE=primary-validator-1

# Create with custom name
make create-snapshot CLOUD=aws NODE=primary-validator-1 NAME=mainnet-2025-02

# List snapshots
make list-snapshots CLOUD=aws

# Restore a snapshot
make restore-snapshot CLOUD=aws TARGET=migration-target
```

Snapshots include SHA256 checksums for integrity verification. Enable integrity checking with:

```bash
cd ansible && ansible-playbook -i inventory/aws_hosts playbooks/15-restore-snapshot.yml \
  --limit migration-target \
  -e verify_integrity=true
```

Verified restore mode requires approximately 3x the snapshot size in free disk space (download + extract + verify).

## Reset L1 Chain Data

Wipe L1 chain data on all nodes for redeployment. This preserves staking keys and Primary Network data:

```bash
make reset-l1
```

## Tear Down Infrastructure

Permanently destroy all cloud resources:

```bash
# Destroy L1 infrastructure
make destroy

# Destroy Primary Network infrastructure
make primary-destroy CLOUD=aws
```

This permanently deletes all VMs, disks, and networking. Staking keys previously backed up to S3 are preserved, but node databases are permanently lost.
## Command Reference

### L1 Operations

| Command | Description |
|---------|-------------|
| `make status` | Check node sync status |
| `make health-checks` | Run comprehensive health checks |
| `make logs` | View node logs |
| `make rolling-restart` | Zero-downtime rolling restart |
| `make upgrade VERSION=x.y.z` | Rolling AvalancheGo upgrade |
| `make monitoring` | Deploy Prometheus + Grafana |
| `make reset-l1` | Wipe L1 chain data (keeps keys) |
| `make destroy` | Tear down all infrastructure |

### Primary Network Operations

| Command | Description |
|---------|-------------|
| `make primary-status CLOUD=aws` | Check Primary Network node status |
| `make backup-keys CLOUD=aws` | Back up staking keys to S3 |
| `make restore-keys CLOUD=aws SOURCE=... TARGET_IP=...` | Restore staking keys |
| `make create-snapshot CLOUD=aws NODE=...` | Create database snapshot |
| `make restore-snapshot CLOUD=aws TARGET=...` | Restore database snapshot |
| `make list-snapshots CLOUD=aws` | List available S3 snapshots |
| `make prepare-migration CLOUD=aws NODE=...` | Prepare node for migration |
| `make migrate-validator CLOUD=aws SOURCE=... TARGET=...` | Execute validator migration |
| `make primary-destroy CLOUD=aws` | Tear down Primary Network infra |

# Troubleshooting (/docs/tooling/avalanche-deploy/troubleshooting)

---
title: Troubleshooting
description: Common issues and solutions for Avalanche Deploy — SSH connectivity, node sync, L1 creation, RPC access, genesis configuration, snapshots, migration, and add-on debugging.
---

## Connection Issues

### Ansible Cannot Connect to Nodes

**Symptom**: SSH connection timeouts or permission denied errors.

**Solutions**:

1. Verify the SSH key path in `ansible/inventory/_hosts` matches your key
2. Check that your security group allows SSH (port 22) from your current IP
3. Confirm the instance is running: `make status`

```bash
# Test SSH manually
ssh -i ~/.ssh/avalanche-deploy ubuntu@<node-ip>
```

Terraform auto-detects your operator IP for firewall rules. If your IP changes (VPN, new network), re-run `make infra` to update security groups.

### Nodes Not Syncing

**Symptom**: P-Chain stays at `NOT_BOOTSTRAPPED`.

**Solutions**:

1. Check node logs for errors: `make logs`
2. Verify the P2P port (9651) is open in your security group
3. Ensure nodes can reach Primary Network bootstrap nodes

```bash
# Check node health
ssh ubuntu@<node-ip> "curl -s localhost:9650/ext/health"
```

## L1 Creation Issues

### "Insufficient Funds"

**Symptom**: The `create-l1` tool fails with an insufficient funds error.

**Solution**: Fund your P-Chain address:

1. Get test AVAX from the [Builder Hub Faucet](https://build.avax.network/tools/faucet)
2. Use Core Wallet to cross-chain transfer from C-Chain to P-Chain

### "Illegal Name Character"

**Symptom**: Chain creation fails with an illegal name character error.

**Solution**: Chain names must be alphanumeric only — no hyphens, underscores, or special characters:

```bash
# Incorrect
--chain-name=my-chain

# Correct
--chain-name=mychain
```

## RPC Access Issues

### Cannot Reach RPC Endpoint

**Symptom**: Connection refused when accessing RPC on port 9650.

**Explanation**: Validators do not expose port 9650 publicly for security. Only RPC nodes have this port open.

**Solutions**:

1. Use RPC node IPs (not validator IPs)
2. Use the eRPC load balancer at `http://<monitoring-ip>:4000`
3. For development, use an SSH tunnel:

```bash
ssh -i ~/.ssh/avalanche-deploy -L 9650:localhost:9650 ubuntu@<rpc-node-ip>
```

## Genesis Configuration

### "Warp Cannot Be Activated Before Durango"

**Symptom**: Chain fails to start with a warp activation error.
**Solution**: Ensure your genesis file includes the Durango timestamp:

```json
{
  "config": {
    "durangoTimestamp": 0
  }
}
```

## Snapshot Issues

### Checksum Verification Failed

**Symptom**: Snapshot restore fails checksum verification.

**Solutions**:

1. Re-download the snapshot (may have been corrupted in transit)
2. Try a different snapshot: `make list-snapshots CLOUD=aws`
3. Skip verification if needed (not recommended): remove `-e verify_integrity=true`

### Insufficient Disk Space

**Symptom**: Not enough space during snapshot creation or restore.

**Solutions**:

1. Use a larger instance type with more storage
2. Clean up temporary files: `sudo rm -rf /tmp/snapshot*`
3. For verified restore mode, ensure 3x the snapshot size is available (download + extract + verify)

## Migration Issues

### Target Node Not Fully Synced

**Symptom**: Migration fails with a sync check error.

**Solution**: Wait for all chains to complete syncing before migrating:

```bash
./scripts/check-primary-sync.sh
```

All chains (P, X, C) must report `SYNCED`.

### Both Validators Appear Active After Migration

**Symptom**: Both old and new validators show as active briefly after migration.

**Explanation**: This is expected. The old validator becomes inactive after missing its next validation slot. Verify the migration was successful:

```bash
curl -s http://<new-node-ip>:9650/ext/info -X POST \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"info.getNodeID"}' | jq .result.nodeID
```

The NodeID should match the original validator.

## Add-On Issues

### Blockscout Not Indexing

**Symptom**: Block explorer shows no transactions.

**Solutions**:

1. Wait for initial indexing to complete (can take hours for large chains)
2. Check logs: `docker logs -f blockscout-backend` on the RPC node
3. Verify the RPC connection in the Blockscout configuration

### Faucet Not Dispensing

**Symptom**: Faucet returns an error or shows 0 balance.

**Solutions**:

1. Verify the faucet wallet is funded on your L1 chain
2. Check logs: `docker logs -f faucet`
3. Confirm the chain ID matches your L1's EVM chain ID
4. Access the faucet on port `8010` (not the default RPC port)

### eRPC Returning 502/503 Errors

**Symptom**: Load balancer returns 502 or 503 errors.

**Solutions**:

1. Verify upstream RPC nodes are healthy: `make health-checks`
2. Check eRPC configuration: `cat /etc/erpc/erpc.yaml` on the monitoring host
3. Check eRPC logs: `docker logs -f erpc`

## Getting Help

- Run health checks: `make health-checks`
- Check node logs: `make logs`
- Review the [Avalanche Discord](https://discord.gg/avalanche) for community support
- [Open an issue](https://github.com/ava-labs/avalanche-deploy/issues) on the repository

# Chains (/docs/tooling/platform-cli/chains)

---
title: Chains
description: Deploy new blockchains on existing subnets
---

Deploy a new blockchain on an existing subnet using the `chain create` command.

## Create a Chain

```bash
platform chain create \
  --subnet-id 2QYfFcfZ9... \
  --genesis genesis.json \
  --name mychain \
  --key-name mykey
```

## Flags

| Flag | Description | Default |
|------|-------------|---------|
| `--subnet-id` | Subnet to create chain on (required) | |
| `--genesis` | Path to genesis JSON file (required, max 1 MB) | |
| `--name` | Chain name | `mychain` |
| `--vm-id` | VM ID | Subnet-EVM |

## Genesis File

The genesis file must be valid JSON and under 1 MB. For Subnet-EVM chains, you can use the standard Subnet-EVM genesis format.
```json
{
  "config": {
    "chainId": 99999,
    "feeConfig": {
      "gasLimit": 8000000,
      "targetBlockRate": 2,
      "minBaseFee": 25000000000,
      "targetGas": 15000000,
      "baseFeeChangeDenominator": 36,
      "minBlockGasCost": 0,
      "maxBlockGasCost": 1000000,
      "blockGasCostStep": 200000
    }
  },
  "alloc": {
    "0xYourAddress": {
      "balance": "0x295BE96E64066972000000"
    }
  }
}
```

# Command Reference (/docs/tooling/platform-cli/command-reference)

---
title: Command Reference
description: Complete reference for all Platform CLI commands and flags
---

## Global Flags

Available on all commands:

| Flag | Short | Description | Default |
|------|-------|-------------|---------|
| `--network` | `-n` | Network: `fuji` or `mainnet` | `fuji` |
| `--key-name` | | Load key from keystore by name | |
| `--ledger` | | Use Ledger hardware wallet | `false` |
| `--ledger-index` | | Ledger BIP44 address index | `0` |
| `--rpc-url` | | Custom RPC URL (overrides `--network`) | |
| `--network-id` | | Network ID for custom RPC | auto-detect |
| `--allow-insecure-http` | | Allow plain HTTP for non-local endpoints (unsafe) | `false` |
| `--private-key` | `-k` | Private key (deprecated, prefer `--key-name`) | |

## version

```bash
platform version
```

Prints the CLI version.

## keys

Manage persistent keys stored in `~/.platform/keys/`.
### keys generate

```bash
platform keys generate --name <name> [--encrypt=false]
```

| Flag | Description | Default |
|------|-------------|---------|
| `--name` | Key name (required, 1-64 chars: `[a-zA-Z0-9._-]`, starts with alphanumeric) | |
| `--encrypt` | Encrypt with password (AES-256-GCM + Argon2id) | `true` |

### keys import

```bash
platform keys import --name <name> [--private-key <key>] [--encrypt=false]
```

| Flag | Description | Default |
|------|-------------|---------|
| `--name` | Key name (required) | |
| `--private-key` | Private key string (prompted if omitted) | |
| `--encrypt` | Encrypt with password | `true` |

### keys list

```bash
platform keys list [--show-addresses]
```

| Flag | Description |
|------|-------------|
| `--show-addresses` | Show P-Chain and EVM addresses |

### keys export

```bash
platform keys export --name <name> --output-file <path>
platform keys export --name <name> --unsafe-stdout
```

| Flag | Description | Default |
|------|-------------|---------|
| `--name` | Key name (required) | |
| `--format` | Output format: `cb58` or `hex` | `cb58` |
| `--output-file` | Write key to file (permissions forced to 0600) | |
| `--unsafe-stdout` | Print private key to stdout (unsafe, required if no `--output-file`) | `false` |

### keys delete

```bash
platform keys delete --name <name> [--force]
```

| Flag | Description |
|------|-------------|
| `--name` | Key name (required) |
| `--force` | Skip confirmation prompt |

### keys default

```bash
platform keys default [--name <name>]
```

Shows the current default if `--name` is omitted; sets the default if `--name` is provided.

## wallet

### wallet balance

```bash
platform wallet balance
```

Displays P-Chain address and AVAX balance.

### wallet address

```bash
platform wallet address
```

Displays P-Chain and EVM addresses derived from the key.

## transfer

### transfer send

```bash
platform transfer send --to